| column | dtype |
|---|---|
| url | string (length 51-54) |
| repository_url | string (1 class) |
| labels_url | string (length 65-68) |
| comments_url | string (length 60-63) |
| events_url | string (length 58-61) |
| html_url | string (length 39-44) |
| id | int64 (1.78B-2.82B) |
| node_id | string (length 18-19) |
| number | int64 (1-8.69k) |
| title | string (length 1-382) |
| user | dict |
| labels | list (length 0-5) |
| state | string (2 classes) |
| locked | bool (1 class) |
| assignee | dict |
| assignees | list (length 0-2) |
| milestone | null |
| comments | int64 (0-323) |
| created_at | timestamp[s] |
| updated_at | timestamp[s] |
| closed_at | timestamp[s] |
| author_association | string (4 classes) |
| sub_issues_summary | dict |
| active_lock_reason | null |
| draft | bool (2 classes) |
| pull_request | dict |
| body | string (length 2-118k, nullable) |
| closed_by | dict |
| reactions | dict |
| timeline_url | string (length 60-63) |
| performed_via_github_app | null |
| state_reason | string (4 classes) |
| is_pull_request | bool (2 classes) |
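The `is_pull_request` column is derivable from the REST payload itself: the GitHub API attaches a `pull_request` object only to issues that are pull requests (plain issues carry null here). A minimal sketch, using a row trimmed from the first record in this dump:

```python
import json

# Trimmed sample row, taken from the first record below (PR #585).
row = json.loads("""{
  "html_url": "https://github.com/ollama/ollama/pull/585",
  "number": 585,
  "state": "closed",
  "pull_request": {"merged_at": "2023-10-09T20:58:14"}
}""")

def is_pull_request(row: dict) -> bool:
    # Plain issues have no usable "pull_request" value (the dataset stores null).
    return row.get("pull_request") is not None

print(is_pull_request(row))  # True for PR #585
```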

---
url: https://api.github.com/repos/ollama/ollama/issues/585
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/585/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/585/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/585/events
html_url: https://github.com/ollama/ollama/pull/585
id: 1910410563
node_id: PR_kwDOJ0Z1Ps5bEQ6b
number: 585
title: add the example for ask the mentors
user:
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2023-09-24T22:59:46
updated_at: 2023-10-09T20:58:15
closed_at: 2023-10-09T20:58:14
author_association: CONTRIBUTOR
sub_issues_summary: {"total": 0, "completed": 0, "percent_completed": 0}
active_lock_reason: null
draft: false
pull_request:
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/585",
"html_url": "https://github.com/ollama/ollama/pull/585",
"diff_url": "https://github.com/ollama/ollama/pull/585.diff",
"patch_url": "https://github.com/ollama/ollama/pull/585.patch",
"merged_at": "2023-10-09T20:58:14"
}
body: this is an example that will be used in a blog post about talking to mentors
closed_by:
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
reactions:
{
"url": "https://api.github.com/repos/ollama/ollama/issues/585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/ollama/ollama/issues/585/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
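The `labels_url` values end in an RFC 6570 URI template (`{/name}`): with a label name the segment expands to `/<name>`, without one it disappears. A naive expansion covering just this template form (a real client would use a URI-template library):

```python
from urllib.parse import quote

def expand_labels_url(template: str, name: str = None) -> str:
    """Naively expand the RFC 6570 {/name} template used by labels_url."""
    if name is None:
        # No value supplied: the optional path segment vanishes entirely.
        return template.replace("{/name}", "")
    # Percent-encode the label name (e.g. "feature request" -> "feature%20request").
    return template.replace("{/name}", "/" + quote(name))

url = "https://api.github.com/repos/ollama/ollama/issues/585/labels{/name}"
print(expand_labels_url(url, "bug"))
# https://api.github.com/repos/ollama/ollama/issues/585/labels/bug
print(expand_labels_url(url))
# https://api.github.com/repos/ollama/ollama/issues/585/labels
```

The multi-word label `feature request`, which appears later in this dump, expands to `.../labels/feature%20request`.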

---
url: https://api.github.com/repos/ollama/ollama/issues/4082
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4082/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4082/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4082/events
html_url: https://github.com/ollama/ollama/issues/4082
id: 2273573727
node_id: I_kwDOJ0Z1Ps6Hg_tf
number: 4082
title: Llama3 Tokenizer
user:
{
"login": "Bearsaerker",
"id": 92314812,
"node_id": "U_kgDOBYCcvA",
"avatar_url": "https://avatars.githubusercontent.com/u/92314812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bearsaerker",
"html_url": "https://github.com/Bearsaerker",
"followers_url": "https://api.github.com/users/Bearsaerker/followers",
"following_url": "https://api.github.com/users/Bearsaerker/following{/other_user}",
"gists_url": "https://api.github.com/users/Bearsaerker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bearsaerker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bearsaerker/subscriptions",
"organizations_url": "https://api.github.com/users/Bearsaerker/orgs",
"repos_url": "https://api.github.com/users/Bearsaerker/repos",
"events_url": "https://api.github.com/users/Bearsaerker/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bearsaerker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels:
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 2024-05-01T14:05:15
updated_at: 2024-05-01T15:20:23
closed_at: 2024-05-01T15:20:22
author_association: NONE
sub_issues_summary: {"total": 0, "completed": 0, "percent_completed": 0}
active_lock_reason: null
draft: null
pull_request: null
body:
### What is the issue?
I requanted the llama3 Sauerkraut with the newest release of llama cpp which should have fixed the tokenizer, but when I load the model into Ollama, I still get the wrong output while people using llama cpp get the right one. So I'd say that there is still something buggy in ollama. Here is the Output.
"What is 7777 + 3333?
Let me calculate that for you!
77,777 (first number) + 33,333 (second number) = 111,110
So the answer is 111,110!"
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32
closed_by:
{
"login": "Bearsaerker",
"id": 92314812,
"node_id": "U_kgDOBYCcvA",
"avatar_url": "https://avatars.githubusercontent.com/u/92314812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bearsaerker",
"html_url": "https://github.com/Bearsaerker",
"followers_url": "https://api.github.com/users/Bearsaerker/followers",
"following_url": "https://api.github.com/users/Bearsaerker/following{/other_user}",
"gists_url": "https://api.github.com/users/Bearsaerker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bearsaerker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bearsaerker/subscriptions",
"organizations_url": "https://api.github.com/users/Bearsaerker/orgs",
"repos_url": "https://api.github.com/users/Bearsaerker/repos",
"events_url": "https://api.github.com/users/Bearsaerker/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bearsaerker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
reactions:
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4082/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4082/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
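Each `reactions` dict carries a `total_count` plus eight per-emoji counts, and the per-emoji fields should sum to the total. A small consistency check, using the reactions object from the issue above (#4082, one thumbs-up):

```python
import json

# Reactions object copied from issue #4082 above.
reactions = json.loads("""{
  "url": "https://api.github.com/repos/ollama/ollama/issues/4082/reactions",
  "total_count": 1,
  "+1": 1, "-1": 0, "laugh": 0, "hooray": 0,
  "confused": 0, "heart": 0, "rocket": 0, "eyes": 0
}""")

EMOJI_KEYS = ("+1", "-1", "laugh", "hooray", "confused", "heart", "rocket", "eyes")

def reaction_sum(r: dict) -> int:
    # Sum the individual emoji counters; should equal total_count.
    return sum(r[k] for k in EMOJI_KEYS)

print(reaction_sum(reactions) == reactions["total_count"])  # True
```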

---
url: https://api.github.com/repos/ollama/ollama/issues/8608
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8608/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8608/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8608/events
html_url: https://github.com/ollama/ollama/issues/8608
id: 2812662391
node_id: I_kwDOJ0Z1Ps6npdJ3
number: 8608
title: Panic while downloading the model
user:
{
"login": "tchaton",
"id": 12861981,
"node_id": "MDQ6VXNlcjEyODYxOTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/12861981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tchaton",
"html_url": "https://github.com/tchaton",
"followers_url": "https://api.github.com/users/tchaton/followers",
"following_url": "https://api.github.com/users/tchaton/following{/other_user}",
"gists_url": "https://api.github.com/users/tchaton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tchaton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tchaton/subscriptions",
"organizations_url": "https://api.github.com/users/tchaton/orgs",
"repos_url": "https://api.github.com/users/tchaton/repos",
"events_url": "https://api.github.com/users/tchaton/events{/privacy}",
"received_events_url": "https://api.github.com/users/tchaton/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels:
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2025-01-27T10:43:53
updated_at: 2025-01-27T16:23:44
closed_at: 2025-01-27T16:23:42
author_association: NONE
sub_issues_summary: {"total": 0, "completed": 0, "percent_completed": 0}
active_lock_reason: null
draft: null
pull_request: null
body:
### What is the issue?
`/bin/ollama run llama3.1`
<img width="1243" alt="Image" src="https://github.com/user-attachments/assets/0c520af1-52d5-4371-bf89-fac7a9fe94d9" />
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
closed_by:
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
reactions:
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8608/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

---
url: https://api.github.com/repos/ollama/ollama/issues/5387
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5387/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5387/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5387/events
html_url: https://github.com/ollama/ollama/issues/5387
id: 2382027700
node_id: I_kwDOJ0Z1Ps6N-tu0
number: 5387
title: Intel Integrated Graphics GPU not being utilized when OLLAMA_INTEL_GPU flag is enabled
user:
{
"login": "suncloudsmoon",
"id": 34616349,
"node_id": "MDQ6VXNlcjM0NjE2MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/34616349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suncloudsmoon",
"html_url": "https://github.com/suncloudsmoon",
"followers_url": "https://api.github.com/users/suncloudsmoon/followers",
"following_url": "https://api.github.com/users/suncloudsmoon/following{/other_user}",
"gists_url": "https://api.github.com/users/suncloudsmoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suncloudsmoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suncloudsmoon/subscriptions",
"organizations_url": "https://api.github.com/users/suncloudsmoon/orgs",
"repos_url": "https://api.github.com/users/suncloudsmoon/repos",
"events_url": "https://api.github.com/users/suncloudsmoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/suncloudsmoon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels:
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 2024-06-30T00:48:36
updated_at: 2024-07-02T21:07:25
closed_at: 2024-07-02T21:07:25
author_association: CONTRIBUTOR
sub_issues_summary: {"total": 0, "completed": 0, "percent_completed": 0}
active_lock_reason: null
draft: null
pull_request: null
body:
### What is the issue?
When the flag 'OLLAMA_INTEL_GPU' is enabled, I expect Ollama to take full advantage of the Intel GPU/iGPU present on the system. However, the intel iGPU is not utilized at all on my system. My Intel iGPU is Intel Iris Xe Graphics (11th gen).
Logs:
```
C:\Users\ocean>ollama serve
2024/06/29 17:35:53 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:true OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:C:\\Users\\ocean\\.ollama\\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\ocean\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-06-29T17:35:53.939-07:00 level=INFO source=images.go:730 msg="total blobs: 23"
time=2024-06-29T17:35:53.941-07:00 level=INFO source=images.go:737 msg="total unused blobs removed: 0"
time=2024-06-29T17:35:53.943-07:00 level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.48)"
time=2024-06-29T17:35:53.943-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7]"
time=2024-06-29T17:35:54.605-07:00 level=INFO source=types.go:98 msg="inference compute" id=0 library=oneapi compute="" driver=0.0 name="" total="0 B" available="0 B"
[GIN] 2024/06/29 - 17:36:34 | 200 | 62.0367ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/06/29 - 17:36:35 | 200 | 4.8491ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/06/29 - 17:36:36 | 200 | 533.5µs | 127.0.0.1 | GET "/api/version"
time=2024-06-29T17:36:58.350-07:00 level=INFO source=memory.go:309 msg="offload to oneapi" layers.requested=-1 layers.model=29 layers.offload=0 layers.split="" memory.available="[0 B]" memory.required.full="1.0 GiB" memory.required.partial="0 B" memory.required.kv="56.0 MiB" memory.required.allocations="[0 B]" memory.weights.total="808.1 MiB" memory.weights.repeating="625.5 MiB" memory.weights.nonrepeating="182.6 MiB" memory.graph.full="299.8 MiB" memory.graph.partial="482.3 MiB"
time=2024-06-29T17:36:58.375-07:00 level=INFO source=server.go:368 msg="starting llama server" cmd="C:\\Users\\ocean\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cpu_avx2\\ollama_llama_server.exe --model C:\\Users\\ocean\\.ollama\\models\\blobs\\sha256-6acb9bb78ee9d70d4d210ebc3903e6719c7ddb9796dd120f962f640530813603 --ctx-size 2048 --batch-size 512 --embedding --log-disable --parallel 1 --port 52261"
time=2024-06-29T17:36:58.434-07:00 level=INFO source=sched.go:382 msg="loaded runners" count=1
time=2024-06-29T17:36:58.434-07:00 level=INFO source=server.go:556 msg="waiting for llama runner to start responding"
time=2024-06-29T17:36:58.435-07:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3171 commit="7c26775a" tid="33680" timestamp=1719707818
INFO [wmain] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="33680" timestamp=1719707818 total_threads=8
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="52261" tid="33680" timestamp=1719707818
llama_model_loader: loaded meta data with 21 key-value pairs and 338 tensors from C:\Users\ocean\.ollama\models\blobs\sha256-6acb9bb78ee9d70d4d210ebc3903e6719c7ddb9796dd120f962f640530813603 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-1.5B-Instruct
llama_model_loader: - kv 2: qwen2.block_count u32 = 28
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 1536
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 8960
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 12
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 2
llama_model_loader: - kv 8: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: general.file_type u32 = 15
llama_model_loader: - kv 11: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 12: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 17: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 19: tokenizer.chat_template str = {% for message in messages %}{% if lo...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 168 tensors
llama_model_loader: - type q6_K: 29 tensors
time=2024-06-29T17:36:58.689-07:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 293
llm_load_vocab: token to piece cache size = 0.9338 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 1536
llm_load_print_meta: n_head = 12
llm_load_print_meta: n_head_kv = 2
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 6
llm_load_print_meta: n_embd_k_gqa = 256
llm_load_print_meta: n_embd_v_gqa = 256
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8960
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 1.54 B
llm_load_print_meta: model size = 934.69 MiB (5.08 BPW)
llm_load_print_meta: general.name = Qwen2-1.5B-Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_tensors: ggml ctx size = 0.16 MiB
llm_load_tensors: CPU buffer size = 934.69 MiB
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 56.00 MiB
llama_new_context_with_model: KV self size = 56.00 MiB, K (f16): 28.00 MiB, V (f16): 28.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.59 MiB
llama_new_context_with_model: CPU compute buffer size = 299.75 MiB
llama_new_context_with_model: graph nodes = 986
llama_new_context_with_model: graph splits = 1
INFO [wmain] model loaded | tid="33680" timestamp=1719707819
time=2024-06-29T17:36:59.739-07:00 level=INFO source=server.go:599 msg="llama runner started in 1.31 seconds"
```
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.48
closed_by:
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
reactions:
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5387/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
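The `labels` column is a list of dicts keyed by `name`, so filtering rows (e.g. everything tagged `bug`, as in the issue above) reduces to a membership test. A sketch over two miniature rows shaped like these records (values trimmed for illustration):

```python
# Two miniature rows mirroring the label shape in this dump (fields trimmed).
rows = [
    {"number": 5387, "labels": [{"name": "bug", "color": "d73a4a"}]},
    {"number": 3772, "labels": [{"name": "feature request"}, {"name": "windows"}]},
]

def has_label(row: dict, label: str) -> bool:
    # True if any label dict in the row matches by name.
    return any(l["name"] == label for l in row["labels"])

bugs = [r["number"] for r in rows if has_label(r, "bug")]
print(bugs)  # [5387]
```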

---
url: https://api.github.com/repos/ollama/ollama/issues/3772
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3772/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3772/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3772/events
html_url: https://github.com/ollama/ollama/issues/3772
id: 2254461402
node_id: I_kwDOJ0Z1Ps6GYFna
number: 3772
title: Please add a way to specify the installation location on windows :)
user:
{
"login": "Vishwamithra37",
"id": 53423141,
"node_id": "MDQ6VXNlcjUzNDIzMTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/53423141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vishwamithra37",
"html_url": "https://github.com/Vishwamithra37",
"followers_url": "https://api.github.com/users/Vishwamithra37/followers",
"following_url": "https://api.github.com/users/Vishwamithra37/following{/other_user}",
"gists_url": "https://api.github.com/users/Vishwamithra37/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vishwamithra37/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vishwamithra37/subscriptions",
"organizations_url": "https://api.github.com/users/Vishwamithra37/orgs",
"repos_url": "https://api.github.com/users/Vishwamithra37/repos",
"events_url": "https://api.github.com/users/Vishwamithra37/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vishwamithra37/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels:
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
]
state: closed
locked: false
assignee:
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
assignees:
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
]
milestone: null
comments: 2
created_at: 2024-04-20T09:01:44
updated_at: 2024-04-24T16:57:51
closed_at: 2024-04-24T16:57:42
author_association: NONE
sub_issues_summary: {"total": 0, "completed": 0, "percent_completed": 0}
active_lock_reason: null
draft: null
pull_request: null
body:
I really want to mention the installation location and my C ddrive is FULL
closed_by:
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
reactions:
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3772/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

---
url: https://api.github.com/repos/ollama/ollama/issues/4129
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4129/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4129/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4129/events
html_url: https://github.com/ollama/ollama/pull/4129
id: 2277994343
node_id: PR_kwDOJ0Z1Ps5ufYxE
number: 4129
title: Soften timeouts on sched unit tests
user:
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-05-03T16:09:07
updated_at: 2024-05-03T18:10:29
closed_at: 2024-05-03T18:10:26
author_association: COLLABORATOR
sub_issues_summary: {"total": 0, "completed": 0, "percent_completed": 0}
active_lock_reason: null
draft: false
pull_request:
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4129",
"html_url": "https://github.com/ollama/ollama/pull/4129",
"diff_url": "https://github.com/ollama/ollama/pull/4129.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4129.patch",
"merged_at": "2024-05-03T18:10:26"
}
body: This gives us more headroom on the scheduler tests to tamp down some flakes.
closed_by:
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4129/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2935
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2935/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2935/comments
|
https://api.github.com/repos/ollama/ollama/issues/2935/events
|
https://github.com/ollama/ollama/issues/2935
| 2,169,028,892
|
I_kwDOJ0Z1Ps6BSMEc
| 2,935
|
Ollama returns: Error: error loading model when importing a fined-tuned converted and quantized model
|
{
"login": "FotieMConstant",
"id": 42372656,
"node_id": "MDQ6VXNlcjQyMzcyNjU2",
"avatar_url": "https://avatars.githubusercontent.com/u/42372656?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FotieMConstant",
"html_url": "https://github.com/FotieMConstant",
"followers_url": "https://api.github.com/users/FotieMConstant/followers",
"following_url": "https://api.github.com/users/FotieMConstant/following{/other_user}",
"gists_url": "https://api.github.com/users/FotieMConstant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FotieMConstant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FotieMConstant/subscriptions",
"organizations_url": "https://api.github.com/users/FotieMConstant/orgs",
"repos_url": "https://api.github.com/users/FotieMConstant/repos",
"events_url": "https://api.github.com/users/FotieMConstant/events{/privacy}",
"received_events_url": "https://api.github.com/users/FotieMConstant/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 20
| 2024-03-05T12:04:24
| 2024-05-10T20:25:33
| 2024-05-10T20:25:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi everyone, I am having an issue running a fine-tuned quantized version of llama2 on ollama. I followed all the steps at: https://github.com/ollama/ollama/blob/main/docs/import.md
However, after quantizing and creating my model on ollama, I can see my model in the list, but when I run it I get this error:
```bash
Error: error loading model /Users/🤓.ollama/models/blobs/sha256:1c75cbd55211b7505be15c897b3ca1766708e5808558139e1531e182
```
Can someone help with this? I am not sure what is going on; technically it should work.
**ollama version is 0.1.27**
**OS: macOS Sonoma, version 14.3.1 on Apple M1 chip**
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2935/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2935/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7691
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7691/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7691/comments
|
https://api.github.com/repos/ollama/ollama/issues/7691/events
|
https://github.com/ollama/ollama/issues/7691
| 2,662,969,562
|
I_kwDOJ0Z1Ps6eubDa
| 7,691
|
[Docs] Incorrect default value for num_predict?
|
{
"login": "owboson",
"id": 115831817,
"node_id": "U_kgDOBud0CQ",
"avatar_url": "https://avatars.githubusercontent.com/u/115831817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/owboson",
"html_url": "https://github.com/owboson",
"followers_url": "https://api.github.com/users/owboson/followers",
"following_url": "https://api.github.com/users/owboson/following{/other_user}",
"gists_url": "https://api.github.com/users/owboson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/owboson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/owboson/subscriptions",
"organizations_url": "https://api.github.com/users/owboson/orgs",
"repos_url": "https://api.github.com/users/owboson/repos",
"events_url": "https://api.github.com/users/owboson/events{/privacy}",
"received_events_url": "https://api.github.com/users/owboson/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-15T19:43:53
| 2024-12-03T23:00:06
| 2024-12-03T23:00:06
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The API documentation (https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion) refers to https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values for more information about the parameters that can be specified in the `options` field of a chat completion request.
However, the default value for the `num_predict` parameter described there either doesn't apply to calls made via the python library (in which case the docs should emphasise this) or is incorrect.
For `num_predict`, the docs say:
> Maximum number of tokens to predict when generating text. (Default: 128, -1 = infinite generation, -2 = fill context)
I initially wondered how Ollama could generate responses much longer than 128 tokens (without me specifying a value for the parameter). After adding a debug statement in `router.go`, I noticed that the server received a value of -1 for `num_predict`, which matches my previous observations.
As a consequence, the documentation is either misleading or gives an incorrect default value.
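As a workaround, and to make the behavior explicit regardless of which default actually applies, `num_predict` can be passed in the `options` object of each request. A minimal Go sketch of building such a request body — the struct below is an illustration modeled on the public API docs, not Ollama's own types, and the model name is just a placeholder:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// GenerateRequest mirrors a subset of the /api/generate request body.
// The JSON field names follow the public API docs; this is a sketch,
// not Ollama's own type definitions.
type GenerateRequest struct {
	Model   string         `json:"model"`
	Prompt  string         `json:"prompt"`
	Options map[string]any `json:"options,omitempty"`
}

// buildRequest sets num_predict explicitly, so the response length does
// not depend on whichever server-side default actually applies.
func buildRequest(numPredict int) ([]byte, error) {
	req := GenerateRequest{
		Model:   "llama3",
		Prompt:  "Write a haiku.",
		Options: map[string]any{"num_predict": numPredict},
	}
	return json.Marshal(req)
}

func main() {
	b, err := buildRequest(256)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```

Sending the field explicitly sidesteps the documentation ambiguity entirely.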
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7691/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/669
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/669/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/669/comments
|
https://api.github.com/repos/ollama/ollama/issues/669/events
|
https://github.com/ollama/ollama/issues/669
| 1,921,030,601
|
I_kwDOJ0Z1Ps5ygJnJ
| 669
|
Allow customizing allowed headers in CORS settings
|
{
"login": "spaceemotion",
"id": 429147,
"node_id": "MDQ6VXNlcjQyOTE0Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/429147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spaceemotion",
"html_url": "https://github.com/spaceemotion",
"followers_url": "https://api.github.com/users/spaceemotion/followers",
"following_url": "https://api.github.com/users/spaceemotion/following{/other_user}",
"gists_url": "https://api.github.com/users/spaceemotion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spaceemotion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spaceemotion/subscriptions",
"organizations_url": "https://api.github.com/users/spaceemotion/orgs",
"repos_url": "https://api.github.com/users/spaceemotion/repos",
"events_url": "https://api.github.com/users/spaceemotion/events{/privacy}",
"received_events_url": "https://api.github.com/users/spaceemotion/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 17
| 2023-10-01T23:31:21
| 2025-01-26T07:31:48
| 2023-10-28T19:25:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Based on some additional research on an issue I have (https://github.com/jmorganca/ollama/issues/300#issuecomment-1742099347), I am getting the following error in chrome/firefox:
> Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:11434/api/tags. (Reason: header ‘baggage’ is not allowed according to header ‘Access-Control-Allow-Headers’ from CORS preflight response).
(see https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSMissingAllowHeaderFromPreflight for details)
It would be helpful to allow all headers (if possible?), as I am able to call the API via tools like curl, Postman, etc., but not using `fetch()` from a webpage. This does not need to be the default; an env variable like `OLLAMA_HOST` would work for me.
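For context, the preflight failure above is about the `Access-Control-Allow-Headers` response header. Below is a minimal Go sketch — not Ollama's actual middleware — of a preflight handler that echoes the browser's requested headers back, which would let custom headers such as `baggage` through:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// corsMiddleware is a minimal, permissive sketch: it answers OPTIONS
// preflight requests and allows whatever headers the browser asked for.
// A production server would validate origins and headers instead.
func corsMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Access-Control-Allow-Origin", "*")
		if r.Method == http.MethodOptions {
			// Echo the requested headers back so preflights for
			// custom headers (e.g. 'baggage') succeed.
			w.Header().Set("Access-Control-Allow-Headers",
				r.Header.Get("Access-Control-Request-Headers"))
			w.WriteHeader(http.StatusNoContent)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// preflightAllowed simulates a browser preflight against the middleware
// and returns the Access-Control-Allow-Headers value the server sent.
func preflightAllowed(header string) string {
	h := corsMiddleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}))
	req := httptest.NewRequest(http.MethodOptions, "/api/tags", nil)
	req.Header.Set("Access-Control-Request-Headers", header)
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, req)
	return rec.Header().Get("Access-Control-Allow-Headers")
}

func main() {
	fmt.Println(preflightAllowed("baggage"))
}
```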
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/669/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/669/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7446
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7446/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7446/comments
|
https://api.github.com/repos/ollama/ollama/issues/7446/events
|
https://github.com/ollama/ollama/issues/7446
| 2,626,492,019
|
I_kwDOJ0Z1Ps6cjRZz
| 7,446
|
MiniCPM-V 2.6 model crash with error code 500 when using ollama API in golang
|
{
"login": "FreemanFeng",
"id": 1662126,
"node_id": "MDQ6VXNlcjE2NjIxMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1662126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FreemanFeng",
"html_url": "https://github.com/FreemanFeng",
"followers_url": "https://api.github.com/users/FreemanFeng/followers",
"following_url": "https://api.github.com/users/FreemanFeng/following{/other_user}",
"gists_url": "https://api.github.com/users/FreemanFeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FreemanFeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FreemanFeng/subscriptions",
"organizations_url": "https://api.github.com/users/FreemanFeng/orgs",
"repos_url": "https://api.github.com/users/FreemanFeng/repos",
"events_url": "https://api.github.com/users/FreemanFeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/FreemanFeng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-10-31T10:19:54
| 2024-11-14T21:00:10
| 2024-11-14T21:00:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?

Below is the Go code; the prompt is set to "请识别图片" ("identify the image"), and the attached image is loaded into a []byte.

```go
func RunVLM(prompt string, images ...[]byte) (bool, any) {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}
	model := "minicpm-v"
	req := &api.GenerateRequest{
		Model:     model,
		Prompt:    prompt,
		KeepAlive: new(api.Duration),
		// set streaming to false
		Stream: new(bool),
	}
	for _, k := range images {
		s := base64.StdEncoding.EncodeToString(k)
		req.Images = append(req.Images, api.ImageData(s))
	}
	//req.KeepAlive.Duration = 24 * 60 * time.Minute
	var v any
	ctx := context.Background()
	respFunc := func(resp api.GenerateResponse) error {
		// Only print the response here; GenerateResponse has a number of other
		// interesting fields you want to examine.
		fmt.Println(resp.Response)
		e := json.Unmarshal([]byte(resp.Response), &v)
		if e != nil {
			e = json.Unmarshal([]byte(fetchJSON(resp.Response)), &v)
			if e != nil {
				log.Println(e.Error())
				v = resp.Response
				return nil
			}
		}
		return nil
	}
	err = client.Generate(ctx, req, respFunc)
	if err != nil {
		log.Println(err.Error())
		return false, nil
	}
	return true, v
}
```
When I run the code, ollama returns an "unmarshalling llm prediction response: invalid character 'e' looking for beginning of value" error.
So I debugged the code and found that the ollama api service returns a 500 error.
### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama version
0.3.14
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7446/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7549
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7549/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7549/comments
|
https://api.github.com/repos/ollama/ollama/issues/7549/events
|
https://github.com/ollama/ollama/issues/7549
| 2,640,492,498
|
I_kwDOJ0Z1Ps6dYrfS
| 7,549
|
ollama_embed issue
|
{
"login": "Ayush-developer",
"id": 84736562,
"node_id": "MDQ6VXNlcjg0NzM2NTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/84736562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ayush-developer",
"html_url": "https://github.com/Ayush-developer",
"followers_url": "https://api.github.com/users/Ayush-developer/followers",
"following_url": "https://api.github.com/users/Ayush-developer/following{/other_user}",
"gists_url": "https://api.github.com/users/Ayush-developer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ayush-developer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ayush-developer/subscriptions",
"organizations_url": "https://api.github.com/users/Ayush-developer/orgs",
"repos_url": "https://api.github.com/users/Ayush-developer/repos",
"events_url": "https://api.github.com/users/Ayush-developer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ayush-developer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-07T09:59:43
| 2024-11-13T21:33:40
| 2024-11-13T21:33:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```text
postgres=# SELECT ai.ollama_embed('llama3', 'this is a test');
ERROR: ollama._types.ResponseError: model "llama3" not found, try pulling it first
CONTEXT: Traceback (most recent call last):
  PL/Python function "ollama_embed", line 21, in <module>
    resp = client.embeddings(model, input_text, options=embedding_options_1, keep_alive=keep_alive)
  PL/Python function "ollama_embed", line 200, in embeddings
  PL/Python function "ollama_embed", line 73, in _request
```
The container is able to talk to ollama, but it does not recognize this ollama_embed function.
### OS
macOS
### GPU
Apple
### CPU
_No response_
### Ollama version
ollama version is 0.4.0
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7549/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3144
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3144/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3144/comments
|
https://api.github.com/repos/ollama/ollama/issues/3144/events
|
https://github.com/ollama/ollama/issues/3144
| 2,186,807,464
|
I_kwDOJ0Z1Ps6CWAio
| 3,144
|
add /metrics endpoint
|
{
"login": "codearranger",
"id": 80373433,
"node_id": "MDQ6VXNlcjgwMzczNDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/80373433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codearranger",
"html_url": "https://github.com/codearranger",
"followers_url": "https://api.github.com/users/codearranger/followers",
"following_url": "https://api.github.com/users/codearranger/following{/other_user}",
"gists_url": "https://api.github.com/users/codearranger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codearranger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codearranger/subscriptions",
"organizations_url": "https://api.github.com/users/codearranger/orgs",
"repos_url": "https://api.github.com/users/codearranger/repos",
"events_url": "https://api.github.com/users/codearranger/events{/privacy}",
"received_events_url": "https://api.github.com/users/codearranger/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 21
| 2024-03-14T16:39:01
| 2025-01-24T09:59:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be nice if ollama had a /metrics endpoint for collecting metrics with Prometheus or other monitoring tools.
https://prometheus.io/docs/guides/go-application/
Some metrics to include might be:
GPU utilization, memory utilization, CPU utilization, layers used, request counts, etc.
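As a rough illustration of what this could look like, here is a minimal hand-rolled `/metrics` handler emitting the Prometheus text exposition format. The metric name and counter are hypothetical; a real implementation would likely use the official client_golang library instead:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"sync/atomic"
)

// requestCount is a stand-in for the kinds of counters the request asks
// for (request counts, GPU/memory utilization, etc.).
var requestCount atomic.Int64

// metricsHandler writes the Prometheus text exposition format by hand.
func metricsHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain; version=0.0.4")
	fmt.Fprintf(w, "# HELP ollama_requests_total Total API requests served.\n")
	fmt.Fprintf(w, "# TYPE ollama_requests_total counter\n")
	fmt.Fprintf(w, "ollama_requests_total %d\n", requestCount.Load())
}

// scrape simulates a Prometheus scrape of the handler and returns the body.
func scrape() string {
	rec := httptest.NewRecorder()
	metricsHandler(rec, httptest.NewRequest(http.MethodGet, "/metrics", nil))
	return rec.Body.String()
}

func main() {
	requestCount.Add(3)
	fmt.Print(scrape())
}
```

In a real server this handler would simply be registered alongside the existing API routes (e.g. on the same mux as /api/generate).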
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3144/reactions",
"total_count": 47,
"+1": 39,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 8,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3144/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1752
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1752/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1752/comments
|
https://api.github.com/repos/ollama/ollama/issues/1752/events
|
https://github.com/ollama/ollama/issues/1752
| 2,061,132,751
|
I_kwDOJ0Z1Ps562mPP
| 1,752
|
Ollama can run in Docker (hosted in local machine) but not directly in local
|
{
"login": "Huertas97",
"id": 56938752,
"node_id": "MDQ6VXNlcjU2OTM4NzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/56938752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Huertas97",
"html_url": "https://github.com/Huertas97",
"followers_url": "https://api.github.com/users/Huertas97/followers",
"following_url": "https://api.github.com/users/Huertas97/following{/other_user}",
"gists_url": "https://api.github.com/users/Huertas97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Huertas97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Huertas97/subscriptions",
"organizations_url": "https://api.github.com/users/Huertas97/orgs",
"repos_url": "https://api.github.com/users/Huertas97/repos",
"events_url": "https://api.github.com/users/Huertas97/events{/privacy}",
"received_events_url": "https://api.github.com/users/Huertas97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-12-31T18:00:17
| 2024-01-01T11:51:51
| 2024-01-01T11:51:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It is quite strange.
I have deployed the ollama container, and I can access the bash shell, load models, and chat with them. But when I install Ollama on the local system (the same one that is running the docker container) and try to chat with the same models (tried: tinyllama and mistral), it says:
`Error: llama runner exited, you may not have enough available memory to run this model`
|
{
"login": "Huertas97",
"id": 56938752,
"node_id": "MDQ6VXNlcjU2OTM4NzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/56938752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Huertas97",
"html_url": "https://github.com/Huertas97",
"followers_url": "https://api.github.com/users/Huertas97/followers",
"following_url": "https://api.github.com/users/Huertas97/following{/other_user}",
"gists_url": "https://api.github.com/users/Huertas97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Huertas97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Huertas97/subscriptions",
"organizations_url": "https://api.github.com/users/Huertas97/orgs",
"repos_url": "https://api.github.com/users/Huertas97/repos",
"events_url": "https://api.github.com/users/Huertas97/events{/privacy}",
"received_events_url": "https://api.github.com/users/Huertas97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1752/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7142
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7142/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7142/comments
|
https://api.github.com/repos/ollama/ollama/issues/7142/events
|
https://github.com/ollama/ollama/issues/7142
| 2,574,057,976
|
I_kwDOJ0Z1Ps6ZbQH4
| 7,142
|
Nvidia's brand spanking new model!
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-08T19:41:32
| 2024-10-16T01:40:21
| 2024-10-16T01:40:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/nvidia/NVLM-D-72B
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7142/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6258
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6258/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6258/comments
|
https://api.github.com/repos/ollama/ollama/issues/6258/events
|
https://github.com/ollama/ollama/pull/6258
| 2,455,676,379
|
PR_kwDOJ0Z1Ps531Ldz
| 6,258
|
server/download.go: Fix a typo in log
|
{
"login": "coolljt0725",
"id": 8232360,
"node_id": "MDQ6VXNlcjgyMzIzNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8232360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coolljt0725",
"html_url": "https://github.com/coolljt0725",
"followers_url": "https://api.github.com/users/coolljt0725/followers",
"following_url": "https://api.github.com/users/coolljt0725/following{/other_user}",
"gists_url": "https://api.github.com/users/coolljt0725/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coolljt0725/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coolljt0725/subscriptions",
"organizations_url": "https://api.github.com/users/coolljt0725/orgs",
"repos_url": "https://api.github.com/users/coolljt0725/repos",
"events_url": "https://api.github.com/users/coolljt0725/events{/privacy}",
"received_events_url": "https://api.github.com/users/coolljt0725/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-08T12:29:37
| 2024-08-10T01:56:24
| 2024-08-10T00:19:48
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6258",
"html_url": "https://github.com/ollama/ollama/pull/6258",
"diff_url": "https://github.com/ollama/ollama/pull/6258.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6258.patch",
"merged_at": "2024-08-10T00:19:48"
}
| null |
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6258/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2945
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2945/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2945/comments
|
https://api.github.com/repos/ollama/ollama/issues/2945/events
|
https://github.com/ollama/ollama/issues/2945
| 2,170,470,316
|
I_kwDOJ0Z1Ps6BXr-s
| 2,945
|
Error: Post "http://127.0.0.1:11434/api/generate": EOF / CUDA errors when trying to run ollama in terminal
|
{
"login": "jferments",
"id": 158022198,
"node_id": "U_kgDOCWs6Ng",
"avatar_url": "https://avatars.githubusercontent.com/u/158022198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jferments",
"html_url": "https://github.com/jferments",
"followers_url": "https://api.github.com/users/jferments/followers",
"following_url": "https://api.github.com/users/jferments/following{/other_user}",
"gists_url": "https://api.github.com/users/jferments/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jferments/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jferments/subscriptions",
"organizations_url": "https://api.github.com/users/jferments/orgs",
"repos_url": "https://api.github.com/users/jferments/repos",
"events_url": "https://api.github.com/users/jferments/events{/privacy}",
"received_events_url": "https://api.github.com/users/jferments/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-06T01:53:59
| 2024-03-06T16:21:46
| 2024-03-06T16:21:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am using Ollama version 0.1.20 and am getting CUDA errors when trying to run Ollama in the terminal or from Python scripts. When I run these in the terminal:
`ollama run mistral`
`ollama run orca-mini`
they fail with only this message:
`Error: Post "http://127.0.0.1:11434/api/generate": EOF`
The failures are caused by CUDA errors, as shown below, but nothing about them appears in the terminal output.
Here is the output from `journalctl` for ollama:
```
Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: ggml ctx size = 0.11 MiB
Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: mem required = 3917.98 MiB
Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: offloading 32 repeating layers to GPU
Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: offloading non-repeating layers to GPU
Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: offloaded 33/33 layers to GPU
Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: VRAM used: 0.00 MiB
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: ...................................................................................................
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: llama_new_context_with_model: n_ctx = 2048
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: llama_new_context_with_model: freq_base = 1000000.0
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: llama_new_context_with_model: freq_scale = 1
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: CUDA error 999 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:495: unknown error
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: current device: -1809317920
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: Lazy loading /tmp/ollama801692426/cuda/libext_server.so library
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:495: !"CUDA error"
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: Could not attach to process. If your uid matches the uid of the target
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: ptrace: Inappropriate ioctl for device.
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: No stack.
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: The program is not being run.
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: SIGABRT: abort
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: PC=0x7fc01a899a1b m=14 sigcode=18446744073709551610
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: signal arrived during cgo execution
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: goroutine 41 [syscall]:
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: runtime.cgocall(0x9c3170, 0xc00033a608)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: /usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc00033a5e0 sp=0xc00033a5a8 pc=0x4291cb
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm._Cfunc_dynamic_shim_llama_server_init({0x7fbf94001d40, 0x7fbf70dfa410, 0x7fbf70d>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: _cgo_gotypes.go:287 +0x45 fp=0xc00033a608 sp=0xc00033a5e0 pc=0x7cf965
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.(*shimExtServer).llama_server_init.func1(0x45973b?, 0x80?, 0x80?)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: /go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:40 +0xec fp=0xc00033a6f8 sp=0xc00033a608 pc=0>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.(*shimExtServer).llama_server_init(0xc00010a2d0?, 0x0?, 0x43a2e8?)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: /go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:40 +0x13 fp=0xc00033a720 sp=0xc00033a6f8 pc=0>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.newExtServer({0x17845038, 0xc0004327e0}, {0xc000190af0, _}, {_, _, _}, {0x0, 0x0>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: /go/src/github.com/jmorganca/ollama/llm/ext_server_common.go:139 +0x70e fp=0xc00033a8e0 sp=0xc00033a720 >
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.newDynamicShimExtServer({0xc0000be000, 0x2a}, {0xc000190af0, _}, {_, _, _}, {0x0>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: /go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:93 +0x547 fp=0xc00033aaf8 sp=0xc00033a8e0 pc=>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.newLlmServer({0xc3fc44, 0x4}, {0xc000190af0, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: /go/src/github.com/jmorganca/ollama/llm/llm.go:125 +0x149 fp=0xc00033ac78 sp=0xc00033aaf8 pc=0x7ceac9
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.New({0xc00048e240?, 0x0?}, {0xc000190af0, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: /go/src/github.com/jmorganca/ollama/llm/llm.go:115 +0x628 fp=0xc00033aef0 sp=0xc00033ac78 pc=0x7ce608
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/server.load(0xc000002f00?, 0xc000002f00, {{0x0, 0x800, 0x200, 0x1, 0xfffffffffffffff>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: /go/src/github.com/jmorganca/ollama/server/routes.go:84 +0x425 fp=0xc00033b0a0 sp=0xc00033aef0 pc=0x99ef>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/server.GenerateHandler(0xc000466600)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: /go/src/github.com/jmorganca/ollama/server/routes.go:191 +0x8c8 fp=0xc00033b748 sp=0xc00033b0a0 pc=0x99f>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/gin-gonic/gin.(*Context).Next(...)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func1(0xc000466600)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: /go/src/github.com/jmorganca/ollama/server/routes.go:877 +0x68 fp=0xc00033b780 sp=0xc00033b748 pc=0x9a91>
```
You can see that the CUDA error is occurring inside llama.cpp. The same thing happens when I call Ollama from within Python/llama-index scripts (CUDA errors).
This even happens with very tiny models like tinyllama, when I have barely any GPU usage:
```
(venv) jesse@jesse-MS-7C02:~/code/obot/obot/extractor$ nvidia-smi
Tue Mar 5 22:18:49 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07 Driver Version: 535.161.07 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 2060 Off | 00000000:26:00.0 On | N/A |
| 0% 33C P8 7W / 170W | 786MiB / 6144MiB | 3% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1184 G /usr/lib/xorg/Xorg 220MiB |
| 0 N/A N/A 1493 G /usr/bin/kwalletd5 2MiB |
| 0 N/A N/A 1730 G /usr/bin/ksmserver 2MiB |
| 0 N/A N/A 1732 G /usr/bin/kded5 2MiB |
| 0 N/A N/A 1733 G /usr/bin/kwin_x11 157MiB |
| 0 N/A N/A 1764 G /usr/bin/plasmashell 54MiB |
| 0 N/A N/A 1787 G ...c/polkit-kde-authentication-agent-1 2MiB |
| 0 N/A N/A 1968 G ...86_64-linux-gnu/libexec/kdeconnectd 2MiB |
| 0 N/A N/A 1981 G /usr/bin/kaccess 2MiB |
| 0 N/A N/A 2003 G ...irefox/3836/usr/lib/firefox/firefox 316MiB |
| 0 N/A N/A 2007 G ...-linux-gnu/libexec/DiscoverNotifier 2MiB |
| 0 N/A N/A 2478 G ...-gnu/libexec/xdg-desktop-portal-kde 2MiB |
| 0 N/A N/A 11747 G /usr/bin/konsole 2MiB |
| 0 N/A N/A 36012 G /usr/bin/kate 2MiB |
| 0 N/A N/A 86749 G /usr/bin/dolphin 2MiB |
+---------------------------------------------------------------------------------------+
(venv) jesse@jesse-MS-7C02:~/code/obot/obot/extractor$ ollama run tinyllama
Error: Post "http://127.0.0.1:11434/api/generate": EOF
```
I don't know why, but rebooting seems to magically fix everything. Simply stopping the ollama service or killing the ollama processes and restarting them doesn't work, though.
The problem is intermittent, so it's hard to figure out exactly what is causing it. I can often run the above commands with no issue, but this EOF/CUDA error randomly pops up every couple of days, and then I have to reboot to fix it.
I am using Ubuntu Linux 23.10 and an RTX 2060 with 6GB VRAM.
Any suggestions would be very welcome!
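In case it helps, here is a possible workaround sketch that avoids a full reboot. It assumes (untested here) that CUDA error 999 comes from a stale `nvidia_uvm` kernel module, e.g. after suspend/resume, so reloading the module lets CUDA re-initialize:

```shell
# Workaround sketch (assumption: stale nvidia_uvm module causes CUDA error 999).
# Reload the NVIDIA UVM kernel module, then restart the ollama service.
# Requires root, and no process may be holding the module.
sudo systemctl stop ollama
sudo rmmod nvidia_uvm
sudo modprobe nvidia_uvm
sudo systemctl start ollama
```

If `rmmod` reports the module is in use, `sudo fuser -v /dev/nvidia*` shows which processes still hold the device.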
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2945/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3269
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3269/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3269/comments
|
https://api.github.com/repos/ollama/ollama/issues/3269/events
|
https://github.com/ollama/ollama/issues/3269
| 2,197,213,593
|
I_kwDOJ0Z1Ps6C9tGZ
| 3,269
|
Error 403 with zrok and other reverse proxies
|
{
"login": "freQuensy23-coder",
"id": 64750224,
"node_id": "MDQ6VXNlcjY0NzUwMjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/64750224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freQuensy23-coder",
"html_url": "https://github.com/freQuensy23-coder",
"followers_url": "https://api.github.com/users/freQuensy23-coder/followers",
"following_url": "https://api.github.com/users/freQuensy23-coder/following{/other_user}",
"gists_url": "https://api.github.com/users/freQuensy23-coder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freQuensy23-coder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freQuensy23-coder/subscriptions",
"organizations_url": "https://api.github.com/users/freQuensy23-coder/orgs",
"repos_url": "https://api.github.com/users/freQuensy23-coder/repos",
"events_url": "https://api.github.com/users/freQuensy23-coder/events{/privacy}",
"received_events_url": "https://api.github.com/users/freQuensy23-coder/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-03-20T10:44:22
| 2024-10-07T06:35:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After updating to 0.1.29, I lost the ability to expose my Ollama instance publicly through ngrok (or analogues such as zrok). Ollama returns a 403 response to requests received through the proxy (ngrok) while responding correctly to the same request on localhost.
### What did you expect to see?
Ngrok should work
### Steps to reproduce
Install the latest version of Ollama
Install ngrok/zrok
zrok share public localhost:11434
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
x86
### Platform
_No response_
### Ollama version
1.29
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
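A possible workaround sketch, assuming the 403 comes from the host/origin check introduced in 0.1.29, which rejects requests whose Host header is not a recognized local address: either allow remote origins explicitly when starting the server, or have the client present a local Host header through the tunnel (`YOUR-TUNNEL-URL` below is a placeholder):

```shell
# Allow remote/forwarded requests (environment for the server process):
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS='*' ollama serve

# Or, when calling through the tunnel, present a local Host header:
curl https://YOUR-TUNNEL-URL/api/tags -H 'Host: localhost:11434'
```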
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3269/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3269/timeline
| null |
reopened
| false
|
https://api.github.com/repos/ollama/ollama/issues/360
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/360/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/360/comments
|
https://api.github.com/repos/ollama/ollama/issues/360/events
|
https://github.com/ollama/ollama/pull/360
| 1,853,723,184
|
PR_kwDOJ0Z1Ps5YFyUO
| 360
|
Fix request copies
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-16T18:32:26
| 2023-08-17T16:58:44
| 2023-08-17T16:58:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/360",
"html_url": "https://github.com/ollama/ollama/pull/360",
"diff_url": "https://github.com/ollama/ollama/pull/360.diff",
"patch_url": "https://github.com/ollama/ollama/pull/360.patch",
"merged_at": "2023-08-17T16:58:43"
}
|
`makeRequest` makes copies of the request body via bytes.Buffer and bytes.Reader in anticipation of a possible retry. While the memory requirements are negligible for most requests, the copies become significant when pushing a model blob. A sufficiently large model will exhaust all memory on the system, causing the process to be killed by the host OS.
This copy also produces inaccurate progress updates: since progress is reported from the Pipe, with the copy in place it really measures how quickly the files are copied into the buffer, not how quickly the request body is sent over the wire.
Instead of retrying on all requests, only retry when starting a new upload. For now, this is the only time a request should be retried due to authentication.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/360/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3880
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3880/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3880/comments
|
https://api.github.com/repos/ollama/ollama/issues/3880/events
|
https://github.com/ollama/ollama/issues/3880
| 2,261,599,440
|
I_kwDOJ0Z1Ps6GzUTQ
| 3,880
|
when i can use tools?
|
{
"login": "i-yoyocat",
"id": 17843761,
"node_id": "MDQ6VXNlcjE3ODQzNzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17843761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-yoyocat",
"html_url": "https://github.com/i-yoyocat",
"followers_url": "https://api.github.com/users/i-yoyocat/followers",
"following_url": "https://api.github.com/users/i-yoyocat/following{/other_user}",
"gists_url": "https://api.github.com/users/i-yoyocat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-yoyocat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-yoyocat/subscriptions",
"organizations_url": "https://api.github.com/users/i-yoyocat/orgs",
"repos_url": "https://api.github.com/users/i-yoyocat/repos",
"events_url": "https://api.github.com/users/i-yoyocat/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-yoyocat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-24T15:38:32
| 2024-07-26T00:46:11
| 2024-07-26T00:46:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When will I be able to use tools in a request? Is there a plan for this? Thanks!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3880/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3880/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4283
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4283/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4283/comments
|
https://api.github.com/repos/ollama/ollama/issues/4283/events
|
https://github.com/ollama/ollama/issues/4283
| 2,287,656,393
|
I_kwDOJ0Z1Ps6IWt3J
| 4,283
|
Ollama v0.1.34 Timeout issue on Codellama34B
|
{
"login": "humza-sami",
"id": 63999516,
"node_id": "MDQ6VXNlcjYzOTk5NTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/63999516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/humza-sami",
"html_url": "https://github.com/humza-sami",
"followers_url": "https://api.github.com/users/humza-sami/followers",
"following_url": "https://api.github.com/users/humza-sami/following{/other_user}",
"gists_url": "https://api.github.com/users/humza-sami/gists{/gist_id}",
"starred_url": "https://api.github.com/users/humza-sami/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/humza-sami/subscriptions",
"organizations_url": "https://api.github.com/users/humza-sami/orgs",
"repos_url": "https://api.github.com/users/humza-sami/repos",
"events_url": "https://api.github.com/users/humza-sami/events{/privacy}",
"received_events_url": "https://api.github.com/users/humza-sami/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-05-09T13:04:20
| 2024-05-21T23:48:02
| 2024-05-21T23:47:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am trying to run the Codellama34B model on Ollama v0.1.34 and it keeps giving me a timeout error, although I was able to run codellama70B on this version. I then rolled back Ollama to v0.1.32 and it worked. It seems the latest version does not support codellama34B.

### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.34
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4283/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5191
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5191/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5191/comments
|
https://api.github.com/repos/ollama/ollama/issues/5191/events
|
https://github.com/ollama/ollama/pull/5191
| 2,364,854,808
|
PR_kwDOJ0Z1Ps5zGIUA
| 5,191
|
Adding introduction of x-cmd/ollama module
|
{
"login": "edwinjhlee",
"id": 4426319,
"node_id": "MDQ6VXNlcjQ0MjYzMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4426319?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edwinjhlee",
"html_url": "https://github.com/edwinjhlee",
"followers_url": "https://api.github.com/users/edwinjhlee/followers",
"following_url": "https://api.github.com/users/edwinjhlee/following{/other_user}",
"gists_url": "https://api.github.com/users/edwinjhlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edwinjhlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edwinjhlee/subscriptions",
"organizations_url": "https://api.github.com/users/edwinjhlee/orgs",
"repos_url": "https://api.github.com/users/edwinjhlee/repos",
"events_url": "https://api.github.com/users/edwinjhlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/edwinjhlee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-06-20T16:41:30
| 2024-11-22T00:55:26
| 2024-11-22T00:55:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5191",
"html_url": "https://github.com/ollama/ollama/pull/5191",
"diff_url": "https://github.com/ollama/ollama/pull/5191.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5191.patch",
"merged_at": "2024-11-22T00:55:25"
}
|
Introducing the x-cmd/ollama module in the README page.
This is the demo:
https://www.x-cmd.com/mod/ollama
Thank you.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5191/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4481
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4481/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4481/comments
|
https://api.github.com/repos/ollama/ollama/issues/4481/events
|
https://github.com/ollama/ollama/pull/4481
| 2,301,464,900
|
PR_kwDOJ0Z1Ps5vuVr9
| 4,481
|
Update README.md
|
{
"login": "ZeyoYT",
"id": 61089602,
"node_id": "MDQ6VXNlcjYxMDg5NjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/61089602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZeyoYT",
"html_url": "https://github.com/ZeyoYT",
"followers_url": "https://api.github.com/users/ZeyoYT/followers",
"following_url": "https://api.github.com/users/ZeyoYT/following{/other_user}",
"gists_url": "https://api.github.com/users/ZeyoYT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZeyoYT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZeyoYT/subscriptions",
"organizations_url": "https://api.github.com/users/ZeyoYT/orgs",
"repos_url": "https://api.github.com/users/ZeyoYT/repos",
"events_url": "https://api.github.com/users/ZeyoYT/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZeyoYT/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-05-16T22:09:26
| 2024-06-09T21:30:23
| 2024-06-09T21:26:55
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4481",
"html_url": "https://github.com/ollama/ollama/pull/4481",
"diff_url": "https://github.com/ollama/ollama/pull/4481.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4481.patch",
"merged_at": null
}
|
Add AiLama to the list of community apps in Extensions & Plugins
|
{
"login": "ZeyoYT",
"id": 61089602,
"node_id": "MDQ6VXNlcjYxMDg5NjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/61089602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZeyoYT",
"html_url": "https://github.com/ZeyoYT",
"followers_url": "https://api.github.com/users/ZeyoYT/followers",
"following_url": "https://api.github.com/users/ZeyoYT/following{/other_user}",
"gists_url": "https://api.github.com/users/ZeyoYT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZeyoYT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZeyoYT/subscriptions",
"organizations_url": "https://api.github.com/users/ZeyoYT/orgs",
"repos_url": "https://api.github.com/users/ZeyoYT/repos",
"events_url": "https://api.github.com/users/ZeyoYT/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZeyoYT/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4481/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6349
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6349/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6349/comments
|
https://api.github.com/repos/ollama/ollama/issues/6349/events
|
https://github.com/ollama/ollama/pull/6349
| 2,464,613,182
|
PR_kwDOJ0Z1Ps54TA0F
| 6,349
|
add `CONTRIBUTING.md`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-14T00:53:30
| 2024-08-14T22:19:52
| 2024-08-14T22:19:50
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6349",
"html_url": "https://github.com/ollama/ollama/pull/6349",
"diff_url": "https://github.com/ollama/ollama/pull/6349.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6349.patch",
"merged_at": "2024-08-14T22:19:50"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6349/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6349/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6986
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6986/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6986/comments
|
https://api.github.com/repos/ollama/ollama/issues/6986/events
|
https://github.com/ollama/ollama/pull/6986
| 2,551,258,592
|
PR_kwDOJ0Z1Ps581Ul-
| 6,986
|
server: close response body on error
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-26T18:20:57
| 2024-09-26T19:00:32
| 2024-09-26T19:00:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6986",
"html_url": "https://github.com/ollama/ollama/pull/6986",
"diff_url": "https://github.com/ollama/ollama/pull/6986.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6986.patch",
"merged_at": "2024-09-26T19:00:31"
}
|
This change closes the response body when an error occurs in makeRequestWithRetry. Previously, the first, non-200 response body was not closed before reattempting the request. This change ensures that the response body is closed in all cases where an error occurs, preventing leaks of file descriptors.
Fixes #6974
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6986/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8539
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8539/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8539/comments
|
https://api.github.com/repos/ollama/ollama/issues/8539/events
|
https://github.com/ollama/ollama/pull/8539
| 2,805,141,918
|
PR_kwDOJ0Z1Ps6IqypO
| 8,539
|
next build
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2025-01-22T19:05:54
| 2025-01-30T13:11:07
| 2025-01-29T23:03:38
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8539",
"html_url": "https://github.com/ollama/ollama/pull/8539",
"diff_url": "https://github.com/ollama/ollama/pull/8539.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8539.patch",
"merged_at": "2025-01-29T23:03:38"
}
|
split from #7913
this change updates the directory structure, splitting `llama.cpp` and `ggml` into separate, reusable packages. as a result the build has also changed significantly. the build now uses `cmake` to build dependencies as shared objects which will be dynamically loaded when necessary.
current (work in progress) build instructions:
- `go build .` to build ollama. this includes a default, basic cpu runner
- `cmake --preset Default; cmake --build --preset Default` to configure and build the default targets. this will configure and build cuda and rocm if those are available
- `cmake --preset CPU; cmake --build --preset CPU` to configure and build _only_ CPU variants
- `cmake --preset CUDA; cmake --build --preset CUDA` to configure and build _only_ CUDA
- `cmake --preset ROCm; cmake --build --preset ROCm` to configure and build _only_ ROCm
TODO:
- [x] Windows CI
- [x] Build docs
- [x] Update CMake output directory
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8539/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2975
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2975/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2975/comments
|
https://api.github.com/repos/ollama/ollama/issues/2975/events
|
https://github.com/ollama/ollama/pull/2975
| 2,173,144,516
|
PR_kwDOJ0Z1Ps5o7WgI
| 2,975
|
Update Go to 1.22 in other places
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-07T07:18:50
| 2024-03-07T15:39:50
| 2024-03-07T15:39:49
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2975",
"html_url": "https://github.com/ollama/ollama/pull/2975",
"diff_url": "https://github.com/ollama/ollama/pull/2975.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2975.patch",
"merged_at": "2024-03-07T15:39:49"
}
|
https://github.com/ollama/ollama/pull/2824 updated Ollama to require Go 1.22, but a few places still use 1.21
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2975/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6399
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6399/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6399/comments
|
https://api.github.com/repos/ollama/ollama/issues/6399/events
|
https://github.com/ollama/ollama/pull/6399
| 2,471,552,536
|
PR_kwDOJ0Z1Ps54pHGI
| 6,399
|
IMPROVE: add ultra ai library
|
{
"login": "VaibhavAcharya",
"id": 41478382,
"node_id": "MDQ6VXNlcjQxNDc4Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/41478382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VaibhavAcharya",
"html_url": "https://github.com/VaibhavAcharya",
"followers_url": "https://api.github.com/users/VaibhavAcharya/followers",
"following_url": "https://api.github.com/users/VaibhavAcharya/following{/other_user}",
"gists_url": "https://api.github.com/users/VaibhavAcharya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VaibhavAcharya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VaibhavAcharya/subscriptions",
"organizations_url": "https://api.github.com/users/VaibhavAcharya/orgs",
"repos_url": "https://api.github.com/users/VaibhavAcharya/repos",
"events_url": "https://api.github.com/users/VaibhavAcharya/events{/privacy}",
"received_events_url": "https://api.github.com/users/VaibhavAcharya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-08-17T14:47:09
| 2024-09-03T13:31:26
| 2024-09-02T20:04:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6399",
"html_url": "https://github.com/ollama/ollama/pull/6399",
"diff_url": "https://github.com/ollama/ollama/pull/6399.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6399.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6399/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6542
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6542/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6542/comments
|
https://api.github.com/repos/ollama/ollama/issues/6542/events
|
https://github.com/ollama/ollama/pull/6542
| 2,492,014,024
|
PR_kwDOJ0Z1Ps55tPN6
| 6,542
|
Update README.md
|
{
"login": "rapidarchitect",
"id": 126218667,
"node_id": "U_kgDOB4Xxqw",
"avatar_url": "https://avatars.githubusercontent.com/u/126218667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rapidarchitect",
"html_url": "https://github.com/rapidarchitect",
"followers_url": "https://api.github.com/users/rapidarchitect/followers",
"following_url": "https://api.github.com/users/rapidarchitect/following{/other_user}",
"gists_url": "https://api.github.com/users/rapidarchitect/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rapidarchitect/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rapidarchitect/subscriptions",
"organizations_url": "https://api.github.com/users/rapidarchitect/orgs",
"repos_url": "https://api.github.com/users/rapidarchitect/repos",
"events_url": "https://api.github.com/users/rapidarchitect/events{/privacy}",
"received_events_url": "https://api.github.com/users/rapidarchitect/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-28T13:00:06
| 2024-09-08T06:07:35
| 2024-09-08T06:07:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6542",
"html_url": "https://github.com/ollama/ollama/pull/6542",
"diff_url": "https://github.com/ollama/ollama/pull/6542.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6542.patch",
"merged_at": null
}
|
added a CrewAI and Mesop example that uses Ollama instead of OpenAI
|
{
"login": "rapidarchitect",
"id": 126218667,
"node_id": "U_kgDOB4Xxqw",
"avatar_url": "https://avatars.githubusercontent.com/u/126218667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rapidarchitect",
"html_url": "https://github.com/rapidarchitect",
"followers_url": "https://api.github.com/users/rapidarchitect/followers",
"following_url": "https://api.github.com/users/rapidarchitect/following{/other_user}",
"gists_url": "https://api.github.com/users/rapidarchitect/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rapidarchitect/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rapidarchitect/subscriptions",
"organizations_url": "https://api.github.com/users/rapidarchitect/orgs",
"repos_url": "https://api.github.com/users/rapidarchitect/repos",
"events_url": "https://api.github.com/users/rapidarchitect/events{/privacy}",
"received_events_url": "https://api.github.com/users/rapidarchitect/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6542/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2133
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2133/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2133/comments
|
https://api.github.com/repos/ollama/ollama/issues/2133/events
|
https://github.com/ollama/ollama/pull/2133
| 2,093,117,324
|
PR_kwDOJ0Z1Ps5krdIk
| 2,133
|
Update langchainpy.md
|
{
"login": "vikesh001",
"id": 109729920,
"node_id": "U_kgDOBopYgA",
"avatar_url": "https://avatars.githubusercontent.com/u/109729920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikesh001",
"html_url": "https://github.com/vikesh001",
"followers_url": "https://api.github.com/users/vikesh001/followers",
"following_url": "https://api.github.com/users/vikesh001/following{/other_user}",
"gists_url": "https://api.github.com/users/vikesh001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikesh001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikesh001/subscriptions",
"organizations_url": "https://api.github.com/users/vikesh001/orgs",
"repos_url": "https://api.github.com/users/vikesh001/repos",
"events_url": "https://api.github.com/users/vikesh001/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikesh001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-22T05:31:56
| 2024-05-08T00:11:20
| 2024-05-08T00:11:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2133",
"html_url": "https://github.com/ollama/ollama/pull/2133",
"diff_url": "https://github.com/ollama/ollama/pull/2133.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2133.patch",
"merged_at": null
}
|
Importing from langchain will no longer be supported as of langchain==0.2.0, so import from langchain-community instead.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2133/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5594
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5594/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5594/comments
|
https://api.github.com/repos/ollama/ollama/issues/5594/events
|
https://github.com/ollama/ollama/issues/5594
| 2,400,244,908
|
I_kwDOJ0Z1Ps6PENSs
| 5,594
|
duplicated code
|
{
"login": "wangjiateng",
"id": 8120012,
"node_id": "MDQ6VXNlcjgxMjAwMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8120012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangjiateng",
"html_url": "https://github.com/wangjiateng",
"followers_url": "https://api.github.com/users/wangjiateng/followers",
"following_url": "https://api.github.com/users/wangjiateng/following{/other_user}",
"gists_url": "https://api.github.com/users/wangjiateng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangjiateng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangjiateng/subscriptions",
"organizations_url": "https://api.github.com/users/wangjiateng/orgs",
"repos_url": "https://api.github.com/users/wangjiateng/repos",
"events_url": "https://api.github.com/users/wangjiateng/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangjiateng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-07-10T09:35:35
| 2024-07-10T18:47:09
| 2024-07-10T18:47:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
llm/server.go:253
```go
if estimate.TensorSplit != "" {
params = append(params, "--tensor-split", estimate.TensorSplit)
}
if estimate.TensorSplit != "" {
params = append(params, "--tensor-split", estimate.TensorSplit)
}
```
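The duplicated block above presumably only needs to appear once. A minimal self-contained sketch of the deduplicated logic (the `memoryEstimate` struct and `buildParams` helper are hypothetical stand-ins for the types in server.go):

```go
package main

import "fmt"

// memoryEstimate is a hypothetical stand-in for the estimate type
// that server.go consults when building runner arguments.
type memoryEstimate struct{ TensorSplit string }

// buildParams appends --tensor-split exactly once, which is presumably
// what the duplicated block in server.go was meant to do.
func buildParams(estimate memoryEstimate) []string {
	var params []string
	if estimate.TensorSplit != "" {
		params = append(params, "--tensor-split", estimate.TensorSplit)
	}
	return params
}

func main() {
	fmt.Println(buildParams(memoryEstimate{TensorSplit: "50,50"}))
	// prints [--tensor-split 50,50]
}
```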
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.2.1
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5594/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8439
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8439/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8439/comments
|
https://api.github.com/repos/ollama/ollama/issues/8439/events
|
https://github.com/ollama/ollama/issues/8439
| 2,789,175,994
|
I_kwDOJ0Z1Ps6mP3K6
| 8,439
|
Add a service file in /etc/init.d/ to support service start/stop/restart in self-packaged containers
|
{
"login": "SunshineAI0523",
"id": 38200985,
"node_id": "MDQ6VXNlcjM4MjAwOTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/38200985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunshineAI0523",
"html_url": "https://github.com/SunshineAI0523",
"followers_url": "https://api.github.com/users/SunshineAI0523/followers",
"following_url": "https://api.github.com/users/SunshineAI0523/following{/other_user}",
"gists_url": "https://api.github.com/users/SunshineAI0523/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunshineAI0523/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunshineAI0523/subscriptions",
"organizations_url": "https://api.github.com/users/SunshineAI0523/orgs",
"repos_url": "https://api.github.com/users/SunshineAI0523/repos",
"events_url": "https://api.github.com/users/SunshineAI0523/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunshineAI0523/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 0
| 2025-01-15T08:44:55
| 2025-01-25T00:39:04
| 2025-01-25T00:39:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I am looking for a method to install Ollama in a container and use `service ollama start|stop|restart` to manage the Ollama backend.
I wrote a sample file, /etc/init.d/ollama, to support `service ollama start|stop|restart`, but it fails with an error:

```bash
#!/bin/bash
# Ollama Service Manager Script
# Place this file in /etc/init.d/ollama and make it executable.
### BEGIN INIT INFO
# Provides: ollama
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Manage Ollama service
# Description: This script starts, stops, and restarts the Ollama service.
### END INIT INFO
SERVICE_NAME="Ollama"
OLLAMA_COMMAND="/usr/bin/ollama" # Update with the path to the Ollama serve command
PID_FILE="/var/run/ollama/ollama.pid"
USER="ollama"
GROUP="ollama"
start_service() {
echo "Starting $SERVICE_NAME..."
if [ -f "$PID_FILE" ]; then
echo "$SERVICE_NAME is already running."
return 1
fi
    su -s /bin/bash -c "nohup $OLLAMA_COMMAND serve > /var/log/ollama/ollama.log 2>&1 & echo \$! > $PID_FILE" $USER
echo "$SERVICE_NAME started with PID $(cat $PID_FILE)."
}
stop_service() {
echo "Stopping $SERVICE_NAME..."
if [ ! -f "$PID_FILE" ]; then
echo "$SERVICE_NAME is not running."
return 1
fi
kill $(cat "$PID_FILE")
rm -f "$PID_FILE"
echo "$SERVICE_NAME stopped."
}
restart_service() {
echo "Restarting $SERVICE_NAME..."
stop_service
sleep 1
start_service
}
case "$1" in
start)
start_service
;;
stop)
stop_service
;;
restart)
restart_service
;;
*)
echo "Usage: $0 {start|stop|restart}"
exit 1
;;
esac
exit 0
```
Error
```bash
Starting Ollama...
bash: line 1: /var/run/ollama/ollama.pid: No such file or directory
cat: /var/run/ollama/ollama.pid: No such file or directory
Ollama started with PID .
```
Can anyone help me solve this problem?
|
{
"login": "SunshineAI0523",
"id": 38200985,
"node_id": "MDQ6VXNlcjM4MjAwOTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/38200985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunshineAI0523",
"html_url": "https://github.com/SunshineAI0523",
"followers_url": "https://api.github.com/users/SunshineAI0523/followers",
"following_url": "https://api.github.com/users/SunshineAI0523/following{/other_user}",
"gists_url": "https://api.github.com/users/SunshineAI0523/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunshineAI0523/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunshineAI0523/subscriptions",
"organizations_url": "https://api.github.com/users/SunshineAI0523/orgs",
"repos_url": "https://api.github.com/users/SunshineAI0523/repos",
"events_url": "https://api.github.com/users/SunshineAI0523/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunshineAI0523/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8439/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/7537
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7537/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7537/comments
|
https://api.github.com/repos/ollama/ollama/issues/7537/events
|
https://github.com/ollama/ollama/pull/7537
| 2,639,604,282
|
PR_kwDOJ0Z1Ps6BH-ms
| 7,537
|
imageproc mllama refactor
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-07T01:22:41
| 2024-12-15T03:50:17
| 2024-12-15T03:50:15
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7537",
"html_url": "https://github.com/ollama/ollama/pull/7537",
"diff_url": "https://github.com/ollama/ollama/pull/7537.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7537.patch",
"merged_at": "2024-12-15T03:50:15"
}
|
This change breaks out the image processing routines into a generic module called `models/imageproc` and also creates a new `models/mllama` model which is specific to the mllama vision processing. There are a few other minor changes, such as:
* Preprocess() now takes an io.Reader instead of a byte slice
* Preprocess() now returns a map[string]any containing any model-specific options to pass back
* The mean/standard deviation constants are broken out into package variables
I haven't added an interface for the model, but that should go along with the forward pass and can come in a different PR. We also need to determine what the actual directory structure should look like.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7537/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2354
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2354/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2354/comments
|
https://api.github.com/repos/ollama/ollama/issues/2354/events
|
https://github.com/ollama/ollama/pull/2354
| 2,117,299,887
|
PR_kwDOJ0Z1Ps5l9Oxw
| 2,354
|
reliably determine available VRAM on macOS (resolves #1826, #2370)
|
{
"login": "peanut256",
"id": 13474248,
"node_id": "MDQ6VXNlcjEzNDc0MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/13474248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peanut256",
"html_url": "https://github.com/peanut256",
"followers_url": "https://api.github.com/users/peanut256/followers",
"following_url": "https://api.github.com/users/peanut256/following{/other_user}",
"gists_url": "https://api.github.com/users/peanut256/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peanut256/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peanut256/subscriptions",
"organizations_url": "https://api.github.com/users/peanut256/orgs",
"repos_url": "https://api.github.com/users/peanut256/repos",
"events_url": "https://api.github.com/users/peanut256/events{/privacy}",
"received_events_url": "https://api.github.com/users/peanut256/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-02-04T20:54:11
| 2024-02-25T23:16:45
| 2024-02-25T23:16:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2354",
"html_url": "https://github.com/ollama/ollama/pull/2354",
"diff_url": "https://github.com/ollama/ollama/pull/2354.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2354.patch",
"merged_at": "2024-02-25T23:16:45"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2354/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3727
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3727/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3727/comments
|
https://api.github.com/repos/ollama/ollama/issues/3727/events
|
https://github.com/ollama/ollama/issues/3727
| 2,249,819,412
|
I_kwDOJ0Z1Ps6GGYUU
| 3,727
|
Unable to load default model context length num_ctx for embedding
|
{
"login": "Kanishk-Kumar",
"id": 45518770,
"node_id": "MDQ6VXNlcjQ1NTE4Nzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/45518770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kanishk-Kumar",
"html_url": "https://github.com/Kanishk-Kumar",
"followers_url": "https://api.github.com/users/Kanishk-Kumar/followers",
"following_url": "https://api.github.com/users/Kanishk-Kumar/following{/other_user}",
"gists_url": "https://api.github.com/users/Kanishk-Kumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kanishk-Kumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kanishk-Kumar/subscriptions",
"organizations_url": "https://api.github.com/users/Kanishk-Kumar/orgs",
"repos_url": "https://api.github.com/users/Kanishk-Kumar/repos",
"events_url": "https://api.github.com/users/Kanishk-Kumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kanishk-Kumar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-04-18T06:00:00
| 2024-05-17T16:09:30
| 2024-05-16T21:57:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This is the code I tried:
```python
from ollama import Client
def generate_embedding(prompt: str):
r"""
Add this to utils later.
"""
client = Client(host="http://localhost:11434")
response = client.embeddings(
model="nomic-embed-text:latest",
prompt=prompt,
options={"temperature": 0, "num_ctx": 8192}
)
return response["embedding"]
generate_embedding("Why is the sky blue?")
```
Error I'm getting:
`Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.071+05:30 level=WARN source=server.go:51 msg="requested context length is greater than model max context length" requested=8192 model=2048`
But the model card clearly states I should be able to use the full 8192 tokens for embedding:
https://ollama.com/library/nomic-embed-text
https://huggingface.co/nomic-ai/nomic-embed-text-v1
Full log:
```
Apr 18 11:20:50 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:50.673+05:30 level=INFO source=images.go:817 msg="total blobs: 17"
Apr 18 11:20:50 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:50.673+05:30 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
Apr 18 11:20:50 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:50.673+05:30 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.1.32)"
Apr 18 11:20:50 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:50.674+05:30 level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama2506861456/runners
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.016+05:30 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.016+05:30 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.016+05:30 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.017+05:30 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2506861456/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.4.99 /usr/lib/x86_64-linux-gnu/libcudart.so.11.5.117]"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.021+05:30 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.021+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.061+05:30 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.071+05:30 level=WARN source=server.go:51 msg="requested context length is greater than model max context length" requested=8192 model=2048
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.071+05:30 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.071+05:30 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.072+05:30 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2506861456/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.4.99 /usr/lib/x86_64-linux-gnu/libcudart.so.11.5.117]"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.072+05:30 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.072+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.114+05:30 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.125+05:30 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.125+05:30 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.126+05:30 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2506861456/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.4.99 /usr/lib/x86_64-linux-gnu/libcudart.so.11.5.117]"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.127+05:30 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.127+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.148+05:30 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.159+05:30 level=INFO source=server.go:127 msg="offload to gpu" reallayers=13 layers=13 required="691.1 MiB" used="691.1 MiB" available="11364.1 MiB" kv="6.0 MiB" fulloffload="12.0 MiB" partialoffload="12.0 MiB"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.159+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.159+05:30 level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama2506861456/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 13 --port 45855"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.159+05:30 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"132941561552896","timestamp":1713419482,"total_threads":32}
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: loaded meta data with 24 key-value pairs and 112 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 (version GGUF V3 (latest))
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 0: general.architecture str = nomic-bert
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 1: general.name str = nomic-embed-text-v1.5
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 2: nomic-bert.block_count u32 = 12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 3: nomic-bert.context_length u32 = 2048
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 4: nomic-bert.embedding_length u32 = 768
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 5: nomic-bert.feed_forward_length u32 = 3072
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 6: nomic-bert.attention.head_count u32 = 12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 7: nomic-bert.attention.layer_norm_epsilon f32 = 0.000000
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 8: general.file_type u32 = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 9: nomic-bert.attention.causal bool = false
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 10: nomic-bert.pooling_type u32 = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 11: nomic-bert.rope.freq_base f32 = 1000.000000
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 12: tokenizer.ggml.token_type_count u32 = 2
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 13: tokenizer.ggml.bos_token_id u32 = 101
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 102
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 15: tokenizer.ggml.model str = bert
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,30522] = ["[PAD]", "[unused0]", "[unused1]", "...
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,30522] = [-1000.000000, -1000.000000, -1000.00...
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,30522] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 100
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 20: tokenizer.ggml.seperator_token_id u32 = 102
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 22: tokenizer.ggml.cls_token_id u32 = 101
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv 23: tokenizer.ggml.mask_token_id u32 = 103
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - type f32: 51 tensors
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - type f16: 61 tensors
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_vocab: mismatch in special tokens definition ( 7104/30522 vs 5/30522 ).
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: format = GGUF V3 (latest)
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: arch = nomic-bert
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: vocab type = WPM
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_vocab = 30522
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_merges = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_ctx_train = 2048
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_embd = 768
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_head = 12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_head_kv = 12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_layer = 12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_rot = 64
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_embd_head_k = 64
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_embd_head_v = 64
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_gqa = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_embd_k_gqa = 768
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_embd_v_gqa = 768
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: f_norm_eps = 1.0e-12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: f_norm_rms_eps = 0.0e+00
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: f_logit_scale = 0.0e+00
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_ff = 3072
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_expert = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_expert_used = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: causal attn = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: pooling type = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: rope type = 2
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: rope scaling = linear
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: freq_base_train = 1000.0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: freq_scale_train = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_yarn_orig_ctx = 2048
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: rope_finetuned = unknown
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: ssm_d_conv = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: ssm_d_inner = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: ssm_d_state = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: ssm_dt_rank = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: model type = 137M
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: model ftype = F16
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: model params = 136.73 M
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: model size = 260.86 MiB (16.00 BPW)
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: general.name = nomic-embed-text-v1.5
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: BOS token = 101 '[CLS]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: EOS token = 102 '[SEP]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: UNK token = 100 '[UNK]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: SEP token = 102 '[SEP]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: PAD token = 0 '[PAD]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: CLS token = 101 '[CLS]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: MASK token = 103 '[MASK]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: LF token = 0 '[PAD]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: ggml_cuda_init: found 1 CUDA devices:
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: Device 0: NVIDIA GeForce RTX 4070 Ti, compute capability 8.9, VMM: yes
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors: ggml ctx size = 0.09 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors: offloading 12 repeating layers to GPU
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors: offloading non-repeating layers to GPU
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors: offloaded 13/13 layers to GPU
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors: CPU buffer size = 44.72 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors: CUDA0 buffer size = 216.15 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: .......................................................
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: n_ctx = 2048
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: n_batch = 512
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: n_ubatch = 512
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: freq_base = 1000.0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: freq_scale = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_kv_cache_init: CUDA0 KV buffer size = 72.00 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: KV self size = 72.00 MiB, K (f16): 36.00 MiB, V (f16): 36.00 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: CPU output buffer size = 0.00 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: CUDA0 compute buffer size = 23.00 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: CUDA_Host compute buffer size = 3.50 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: graph nodes = 453
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: graph splits = 2
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"main","level":"INFO","line":3064,"msg":"model loaded","tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"main","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"31","port":"45855","tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":48102,"status":200,"tid":"132940209668096","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":48112,"status":200,"tid":"132940187918336","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":48112,"status":200,"tid":"132940187918336","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":3,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":3,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":6,"n_ctx":2048,"n_past":6,"n_system_tokens":0,"slot_id":0,"task_id":3,"tid":"132941561552896","timestamp":1713419482,"truncated":false}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":48112,"status":200,"tid":"132940187918336","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: [GIN] 2024/04/18 - 11:21:22 | 200 | 705.26423ms | 127.0.0.1 | POST "/api/embeddings"
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3727/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6704
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6704/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6704/comments
|
https://api.github.com/repos/ollama/ollama/issues/6704/events
|
https://github.com/ollama/ollama/issues/6704
| 2,512,745,969
|
I_kwDOJ0Z1Ps6VxXXx
| 6,704
|
ollama model not support tool calling
|
{
"login": "sunshine19870316",
"id": 165765929,
"node_id": "U_kgDOCeFjKQ",
"avatar_url": "https://avatars.githubusercontent.com/u/165765929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunshine19870316",
"html_url": "https://github.com/sunshine19870316",
"followers_url": "https://api.github.com/users/sunshine19870316/followers",
"following_url": "https://api.github.com/users/sunshine19870316/following{/other_user}",
"gists_url": "https://api.github.com/users/sunshine19870316/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunshine19870316/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunshine19870316/subscriptions",
"organizations_url": "https://api.github.com/users/sunshine19870316/orgs",
"repos_url": "https://api.github.com/users/sunshine19870316/repos",
"events_url": "https://api.github.com/users/sunshine19870316/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunshine19870316/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 9
| 2024-09-09T02:12:10
| 2024-09-11T06:43:18
| 2024-09-11T06:43:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I use an Ollama model in a LangGraph multi-agent SupervisorAgent framework.
When I use a hosted API LLM (with an actual key and URL) it runs successfully, but after switching to the Ollama server, it can't call tools.
My code:
```python
def get_qwen7b():
    model = ChatOpenAI(
        model_name="qwen2:7b",
        openai_api_base="http://localhost:11434/v1",
        openai_api_key="none",
        streaming=True,
        temperature=0.01
    )
    return model

llm = get_qwen7b()

class SupervisorAgent(BaseAgent):
    def __init__(self, members, llm, **kwargs):
        system_prompt = (
            """You are a supervisor tasked with managing a conversation between the
            following workers: {members}. Given the following user request,
            respond with the worker to act next. Each worker will perform a
            task and respond with their results and status. When finished,
            respond with FINISH."""
        )
        # supervisor is an LLM node. It just picks the next agent to process
        options = ["FINISH"] + members
        function_def = {
            "name": "route",
            "description": "Select the next role.",
            "parameters": {
                "title": "routeSchema",
                "type": "object",
                "properties": {
                    "next": {
                        "title": "Next",
                        "anyOf": [
                            {"enum": options},
                        ],
                    }
                },
                "required": ["next"],
            },
        }
        prompt = ChatPromptTemplate.from_messages(
            [
                ("system", system_prompt),
                MessagesPlaceholder(variable_name="messages"),
                ("system",
                 "Given the messages above, who should act next? Or should we FINISH? If get the final result, we should FINISH. Select one of: {options}"),
            ]
        ).partial(options=str(options), members=", ".join(members))
        tools = [convert_to_openai_tool(f) for f in [function_def]]
        model = llm.bind(tools=tools)
        self.agent = prompt | model | OutputParser()
```
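For comparison, tool definitions can also be sent to Ollama's native `/api/chat` endpoint, which accepts an OpenAI-style `tools` array for tool-capable models such as qwen2 (tool support landed in Ollama 0.3). The payload builder below is a hypothetical illustration of that request shape, not part of the reporter's code:

```python
def build_route_tool(options):
    """OpenAI-style tool definition mirroring the `route` function above."""
    return {
        "type": "function",
        "function": {
            "name": "route",
            "description": "Select the next role.",
            "parameters": {
                "type": "object",
                "properties": {"next": {"enum": options}},
                "required": ["next"],
            },
        },
    }


def build_chat_payload(model, messages, options):
    """Request body for POST /api/chat on the Ollama server."""
    return {
        "model": model,
        "messages": messages,
        "tools": [build_route_tool(options)],
        "stream": False,
    }
```

With a payload like this, a tool-capable model returns its choice under `message.tool_calls` rather than as free text.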
|
{
"login": "sunshine19870316",
"id": 165765929,
"node_id": "U_kgDOCeFjKQ",
"avatar_url": "https://avatars.githubusercontent.com/u/165765929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunshine19870316",
"html_url": "https://github.com/sunshine19870316",
"followers_url": "https://api.github.com/users/sunshine19870316/followers",
"following_url": "https://api.github.com/users/sunshine19870316/following{/other_user}",
"gists_url": "https://api.github.com/users/sunshine19870316/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunshine19870316/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunshine19870316/subscriptions",
"organizations_url": "https://api.github.com/users/sunshine19870316/orgs",
"repos_url": "https://api.github.com/users/sunshine19870316/repos",
"events_url": "https://api.github.com/users/sunshine19870316/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunshine19870316/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6704/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7286
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7286/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7286/comments
|
https://api.github.com/repos/ollama/ollama/issues/7286/events
|
https://github.com/ollama/ollama/issues/7286
| 2,601,574,699
|
I_kwDOJ0Z1Ps6bEOEr
| 7,286
|
httpcore.ConnectError: [WinError 10061]
|
{
"login": "RXZAN",
"id": 176294975,
"node_id": "U_kgDOCoIMPw",
"avatar_url": "https://avatars.githubusercontent.com/u/176294975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RXZAN",
"html_url": "https://github.com/RXZAN",
"followers_url": "https://api.github.com/users/RXZAN/followers",
"following_url": "https://api.github.com/users/RXZAN/following{/other_user}",
"gists_url": "https://api.github.com/users/RXZAN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RXZAN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RXZAN/subscriptions",
"organizations_url": "https://api.github.com/users/RXZAN/orgs",
"repos_url": "https://api.github.com/users/RXZAN/repos",
"events_url": "https://api.github.com/users/RXZAN/events{/privacy}",
"received_events_url": "https://api.github.com/users/RXZAN/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706485628,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1ejfA",
"url": "https://api.github.com/repos/ollama/ollama/labels/python",
"name": "python",
"color": "59642B",
"default": false,
"description": "relating to the ollama-python client library"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-10-21T07:07:53
| 2024-11-06T11:10:24
| 2024-11-06T11:10:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm running the Ollama service on the server side.
There was a problem running this piece of code on the local machine, which does not have Ollama installed.
**My code:**
```python
import os
os.environ["USER_AGENT"] = "MyCustomUserAgent/1.0"
os.environ['OLLAMA_API_KEY'] = 'none'
os.environ['OLLAMA_BASE_URL'] = 'http://10.4.(my_server_ip):11434/'

from langchain_ollama import ChatOllama

llm = ChatOllama(model='llama3.1:8b', temperature=0)
messages = [
    ("human", "Return the words Hello World!"),
]
for chunk in llm.stream(messages):
    print(chunk)
```

**Problem:**
```
Traceback (most recent call last):
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpx\_transports\default.py", line 72, in map_httpcore_exceptions
    yield
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpx\_transports\default.py", line 236, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpcore\_sync\connection_pool.py", line 216, in handle_request
    raise exc from None
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpcore\_sync\connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpcore\_sync\connection.py", line 99, in handle_request
    raise exc
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpcore\_sync\connection.py", line 76, in handle_request
    stream = self._connect(request)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpcore\_sync\connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpcore\_backends\sync.py", line 205, in connect_tcp
    with map_exceptions(exc_map):
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\RAG_extra\Lib\contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\python_projects\Whole\text.py", line 13, in <module>
    for chunk in llm.stream(messages):
                 ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\langchain_core\language_models\chat_models.py", line 420, in stream
    raise e
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\langchain_core\language_models\chat_models.py", line 400, in stream
    for chunk in self._stream(messages, stop=stop, **kwargs):
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\langchain_ollama\chat_models.py", line 665, in _stream
    for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\langchain_ollama\chat_models.py", line 527, in _create_chat_stream
    yield from self._client.chat(
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\ollama\_client.py", line 80, in _stream
    with self._client.stream(method, url, **kwargs) as r:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\RAG_extra\Lib\contextlib.py", line 137, in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpx\_client.py", line 880, in stream
    response = self.send(
               ^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpx\_client.py", line 926, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpx\_client.py", line 954, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpx\_client.py", line 991, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpx\_client.py", line 1027, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpx\_transports\default.py", line 235, in handle_request
    with map_httpcore_exceptions():
         ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\RAG_extra\Lib\contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "C:\Users\dell\AppData\Roaming\Python\Python312\site-packages\httpx\_transports\default.py", line 89, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it.
```
I can also curl http://10.4.(my_server_ip):11434 from the local machine.
How do I solve this problem?
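One likely culprit: `langchain_ollama` does not read an `OLLAMA_BASE_URL` environment variable. Unless `base_url` is passed to `ChatOllama`, the underlying `ollama` client falls back to `OLLAMA_HOST` and then to `http://localhost:11434`, which would produce exactly this refused connection. A minimal sketch of that resolution order (`resolve_base_url` is a hypothetical helper mimicking the assumed fallback, not the library's actual code):

```python
import os


def resolve_base_url(explicit=None):
    """Mimic the assumed host resolution: explicit argument first,
    then the OLLAMA_HOST environment variable, then the localhost default."""
    if explicit:
        return explicit
    return os.environ.get("OLLAMA_HOST", "http://localhost:11434")
```

Passing the server address explicitly, e.g. `ChatOllama(model='llama3.1:8b', base_url='http://10.4.(my_server_ip):11434')`, sidesteps the fallback entirely.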
### OS
Linux, Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.13
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7286/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3353
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3353/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3353/comments
|
https://api.github.com/repos/ollama/ollama/issues/3353/events
|
https://github.com/ollama/ollama/pull/3353
| 2,207,101,188
|
PR_kwDOJ0Z1Ps5qus3-
| 3,353
|
Use Rocky Linux Vault to get GCC 10.2 installed
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-26T02:20:02
| 2024-03-26T02:38:59
| 2024-03-26T02:38:56
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3353",
"html_url": "https://github.com/ollama/ollama/pull/3353",
"diff_url": "https://github.com/ollama/ollama/pull/3353.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3353.patch",
"merged_at": "2024-03-26T02:38:56"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3353/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6933
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6933/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6933/comments
|
https://api.github.com/repos/ollama/ollama/issues/6933/events
|
https://github.com/ollama/ollama/issues/6933
| 2,545,331,560
|
I_kwDOJ0Z1Ps6Xtq1o
| 6,933
|
RTX A3000 GPU not being utilized for small LLMs
|
{
"login": "scotgopal",
"id": 76937732,
"node_id": "MDQ6VXNlcjc2OTM3NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/76937732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scotgopal",
"html_url": "https://github.com/scotgopal",
"followers_url": "https://api.github.com/users/scotgopal/followers",
"following_url": "https://api.github.com/users/scotgopal/following{/other_user}",
"gists_url": "https://api.github.com/users/scotgopal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scotgopal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scotgopal/subscriptions",
"organizations_url": "https://api.github.com/users/scotgopal/orgs",
"repos_url": "https://api.github.com/users/scotgopal/repos",
"events_url": "https://api.github.com/users/scotgopal/events{/privacy}",
"received_events_url": "https://api.github.com/users/scotgopal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-09-24T12:55:36
| 2024-09-25T19:07:35
| 2024-09-25T19:07:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi there. I am using Ollama from Docker and I've already made sure that the GPU is available inside the container by running `nvidia-smi`:
```shell
root@802f556c99c8:/# nvidia-smi
Tue Sep 24 12:52:58 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01 Driver Version: 535.183.01 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA RTX A3000 Laptop GPU Off | 00000000:01:00.0 Off | N/A |
| N/A 47C P8 14W / 90W | 489MiB / 6144MiB | 8% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
```
I have tried with
qwen2.5:0.5b
```shell
root@802f556c99c8:/# ollama ps
NAME ID SIZE PROCESSOR UNTIL
qwen2.5:0.5b a8b0c5157701 820 MB 100% CPU 4 minutes from now
```
llama3.1:8b_q2_K
```shell
root@802f556c99c8:/# ollama ps
NAME ID SIZE PROCESSOR UNTIL
llama3.1:8b-instruct-q2_K 44a139eeb344 4.8 GB 100% CPU 4 minutes from now
```
As you can see in the `ollama ps` output, it's running 100% on CPU. I understand that 6 GB of GPU VRAM is quite low for LLMs, but I was hoping my GPU would at least be used partially.
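As an aside, the placement can be checked programmatically by parsing the PROCESSOR column of `ollama ps`. A small sketch assuming the column layout shown above (`cpu_only_models` is a hypothetical helper, and the simple whitespace split only handles the pure "100% CPU" case, not mixed CPU/GPU splits):

```python
def cpu_only_models(ps_output):
    """Return names of loaded models whose PROCESSOR column reports 100% CPU."""
    cpu_only = []
    for line in ps_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        # Columns: NAME, ID, SIZE (value + unit), PROCESSOR, UNTIL;
        # "100% CPU" splits into the two fields checked here.
        if "100%" in fields and "CPU" in fields:
            cpu_only.append(fields[0])
    return cpu_only
```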
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
0.3.11
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6933/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7659
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7659/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7659/comments
|
https://api.github.com/repos/ollama/ollama/issues/7659/events
|
https://github.com/ollama/ollama/pull/7659
| 2,657,222,381
|
PR_kwDOJ0Z1Ps6B2y6c
| 7,659
|
runner.go: Don't trim whitespace from inputs
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-14T01:02:04
| 2024-11-14T19:23:09
| 2024-11-14T19:23:07
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7659",
"html_url": "https://github.com/ollama/ollama/pull/7659",
"diff_url": "https://github.com/ollama/ollama/pull/7659.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7659.patch",
"merged_at": "2024-11-14T19:23:07"
}
|
It's possible to get prompts that consist entirely of whitespace - this is most likely to happen when generating embeddings. Currently, we will trim this away, leaving an empty prompt, which will then generate an error.
Generating embeddings from whitespace should not trigger an error, as this may break pipelines. It's better to just leave the whitespace in place and process what we are given. This is consistent with past versions of Ollama.
Bug #7578
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7659/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6698
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6698/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6698/comments
|
https://api.github.com/repos/ollama/ollama/issues/6698/events
|
https://github.com/ollama/ollama/issues/6698
| 2,512,230,854
|
I_kwDOJ0Z1Ps6VvZnG
| 6,698
|
Custom OLLAMA_MODELS Environment Variable Not Respected
|
{
"login": "sascharo",
"id": 88222654,
"node_id": "MDQ6VXNlcjg4MjIyNjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/88222654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sascharo",
"html_url": "https://github.com/sascharo",
"followers_url": "https://api.github.com/users/sascharo/followers",
"following_url": "https://api.github.com/users/sascharo/following{/other_user}",
"gists_url": "https://api.github.com/users/sascharo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sascharo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sascharo/subscriptions",
"organizations_url": "https://api.github.com/users/sascharo/orgs",
"repos_url": "https://api.github.com/users/sascharo/repos",
"events_url": "https://api.github.com/users/sascharo/events{/privacy}",
"received_events_url": "https://api.github.com/users/sascharo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-09-08T06:52:58
| 2024-09-08T13:27:15
| 2024-09-08T13:27:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Despite setting the environment variable `OLLAMA_MODELS` to a custom path, Ollama continues to download models to the default location (`C:\Users\%username%\.ollama\models`). The environment variable is set correctly and confirmed via `echo $env:OLLAMA_MODELS`, but the expected behavior of downloading models to the custom path is not happening.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.9
|
{
"login": "sascharo",
"id": 88222654,
"node_id": "MDQ6VXNlcjg4MjIyNjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/88222654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sascharo",
"html_url": "https://github.com/sascharo",
"followers_url": "https://api.github.com/users/sascharo/followers",
"following_url": "https://api.github.com/users/sascharo/following{/other_user}",
"gists_url": "https://api.github.com/users/sascharo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sascharo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sascharo/subscriptions",
"organizations_url": "https://api.github.com/users/sascharo/orgs",
"repos_url": "https://api.github.com/users/sascharo/repos",
"events_url": "https://api.github.com/users/sascharo/events{/privacy}",
"received_events_url": "https://api.github.com/users/sascharo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6698/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5195
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5195/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5195/comments
|
https://api.github.com/repos/ollama/ollama/issues/5195/events
|
https://github.com/ollama/ollama/issues/5195
| 2,365,028,797
|
I_kwDOJ0Z1Ps6M93m9
| 5,195
|
How to import a model (.bin) from Hugging Face?
|
{
"login": "javierxio",
"id": 63758477,
"node_id": "MDQ6VXNlcjYzNzU4NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/63758477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/javierxio",
"html_url": "https://github.com/javierxio",
"followers_url": "https://api.github.com/users/javierxio/followers",
"following_url": "https://api.github.com/users/javierxio/following{/other_user}",
"gists_url": "https://api.github.com/users/javierxio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/javierxio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/javierxio/subscriptions",
"organizations_url": "https://api.github.com/users/javierxio/orgs",
"repos_url": "https://api.github.com/users/javierxio/repos",
"events_url": "https://api.github.com/users/javierxio/events{/privacy}",
"received_events_url": "https://api.github.com/users/javierxio/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-06-20T18:34:21
| 2024-06-30T06:48:43
| 2024-06-30T06:48:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello. I would like to use a model from Hugging Face. I was able to download a file called `pytorch_model.bin`, which I presume is the LLM. I created a directory and a `Modelfile.txt` file. The contents of `Modelfile.txt` are:
```
FROM C:\ollama_models\florence-2-base\pytorch_model.bin
```
Running the `ollama create` command results in the following errors:
```sh
C:\ollama_models\florence-2-base>ollama create florence2:base -f ./Modelfile.txt
transferring model data
unpacking model metadata
Error: open C:\Users\javie\.ollama\models\blobs\1075676817\pytorch_model\data.pkl: The system cannot find the path specified.
```
Please help me understand what's going wrong; I am new at this. Thanks!
|
{
"login": "javierxio",
"id": 63758477,
"node_id": "MDQ6VXNlcjYzNzU4NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/63758477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/javierxio",
"html_url": "https://github.com/javierxio",
"followers_url": "https://api.github.com/users/javierxio/followers",
"following_url": "https://api.github.com/users/javierxio/following{/other_user}",
"gists_url": "https://api.github.com/users/javierxio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/javierxio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/javierxio/subscriptions",
"organizations_url": "https://api.github.com/users/javierxio/orgs",
"repos_url": "https://api.github.com/users/javierxio/repos",
"events_url": "https://api.github.com/users/javierxio/events{/privacy}",
"received_events_url": "https://api.github.com/users/javierxio/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5195/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7128
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7128/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7128/comments
|
https://api.github.com/repos/ollama/ollama/issues/7128/events
|
https://github.com/ollama/ollama/issues/7128
| 2,572,198,369
|
I_kwDOJ0Z1Ps6ZUKHh
| 7,128
|
Ollama host is still 127.0.0.1 even though I have set OLLAMA_HOST=0.0.0.0:11434 in the environment
|
{
"login": "Yangshford",
"id": 71912970,
"node_id": "MDQ6VXNlcjcxOTEyOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/71912970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yangshford",
"html_url": "https://github.com/Yangshford",
"followers_url": "https://api.github.com/users/Yangshford/followers",
"following_url": "https://api.github.com/users/Yangshford/following{/other_user}",
"gists_url": "https://api.github.com/users/Yangshford/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yangshford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yangshford/subscriptions",
"organizations_url": "https://api.github.com/users/Yangshford/orgs",
"repos_url": "https://api.github.com/users/Yangshford/repos",
"events_url": "https://api.github.com/users/Yangshford/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yangshford/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-10-08T06:18:43
| 2024-11-01T09:27:23
| 2024-10-08T11:34:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Sorry for the bad English :( I am not a native speaker.
I run Ollama on WSL2.
If I just use the command `ollama serve` to start Ollama, I can't access it when I open a browser in Windows and visit `localhost:11434` (Ollama is accessible from within WSL2).
I have set my environment in /etc/systemd/system/ollama.service, but it doesn't work:

However, it does work when I run `export OLLAMA_HOST=0.0.0.0` before starting `ollama serve`.
What can I do if I want it to listen on 0.0.0.0 by default?
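For a systemd-managed install, a common fix is a drop-in override rather than editing the unit file directly. This is a sketch; the exact unit name may differ on your system:

```
# created via: sudo systemctl edit ollama.service
# lands in /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

After saving, run `sudo systemctl daemon-reload && sudo systemctl restart ollama` so the new environment takes effect.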
### OS
WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12
|
{
"login": "Yangshford",
"id": 71912970,
"node_id": "MDQ6VXNlcjcxOTEyOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/71912970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yangshford",
"html_url": "https://github.com/Yangshford",
"followers_url": "https://api.github.com/users/Yangshford/followers",
"following_url": "https://api.github.com/users/Yangshford/following{/other_user}",
"gists_url": "https://api.github.com/users/Yangshford/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yangshford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yangshford/subscriptions",
"organizations_url": "https://api.github.com/users/Yangshford/orgs",
"repos_url": "https://api.github.com/users/Yangshford/repos",
"events_url": "https://api.github.com/users/Yangshford/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yangshford/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7128/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8036
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8036/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8036/comments
|
https://api.github.com/repos/ollama/ollama/issues/8036/events
|
https://github.com/ollama/ollama/pull/8036
| 2,731,613,917
|
PR_kwDOJ0Z1Ps6EyU9_
| 8,036
|
go.mod: go 1.22.8 -> 1.23.4
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-11T01:47:18
| 2024-12-11T02:16:18
| 2024-12-11T02:16:17
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8036",
"html_url": "https://github.com/ollama/ollama/pull/8036",
"diff_url": "https://github.com/ollama/ollama/pull/8036.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8036.patch",
"merged_at": "2024-12-11T02:16:17"
}
| null |
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8036/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4079
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4079/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4079/comments
|
https://api.github.com/repos/ollama/ollama/issues/4079/events
|
https://github.com/ollama/ollama/issues/4079
| 2,273,418,092
|
I_kwDOJ0Z1Ps6HgZts
| 4,079
|
About OLLAMA_PARALLEL split the max context length
|
{
"login": "DirtyKnightForVi",
"id": 116725810,
"node_id": "U_kgDOBvUYMg",
"avatar_url": "https://avatars.githubusercontent.com/u/116725810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DirtyKnightForVi",
"html_url": "https://github.com/DirtyKnightForVi",
"followers_url": "https://api.github.com/users/DirtyKnightForVi/followers",
"following_url": "https://api.github.com/users/DirtyKnightForVi/following{/other_user}",
"gists_url": "https://api.github.com/users/DirtyKnightForVi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DirtyKnightForVi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DirtyKnightForVi/subscriptions",
"organizations_url": "https://api.github.com/users/DirtyKnightForVi/orgs",
"repos_url": "https://api.github.com/users/DirtyKnightForVi/repos",
"events_url": "https://api.github.com/users/DirtyKnightForVi/events{/privacy}",
"received_events_url": "https://api.github.com/users/DirtyKnightForVi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-01T12:19:18
| 2024-05-01T12:19:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I encountered this while testing SQL QA with an extremely large table, where I put all of the DDL into the `system` prompt.
When `OLLAMA_PARALLEL=4`, I observed that the model appears to only understand the last 4000 tokens of the DDL. This is quite different from my previous experience. My web UI is Open WebUI; it can set `num_ctx` to 16000, but that makes no difference.
BUT after changing to `OLLAMA_PARALLEL=1`, the model can understand the whole DDL!
So is the effective `max_num_ctx = 16000 / OLLAMA_PARALLEL`, even when the machine is otherwise idle?
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33-RC5
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4079/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4079/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5700
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5700/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5700/comments
|
https://api.github.com/repos/ollama/ollama/issues/5700/events
|
https://github.com/ollama/ollama/issues/5700
| 2,408,492,530
|
I_kwDOJ0Z1Ps6Pjq3y
| 5,700
|
zfs ARC leads to incorrect system memory prediction and refusal to load models that could work
|
{
"login": "arthurmelton",
"id": 29708070,
"node_id": "MDQ6VXNlcjI5NzA4MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/29708070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arthurmelton",
"html_url": "https://github.com/arthurmelton",
"followers_url": "https://api.github.com/users/arthurmelton/followers",
"following_url": "https://api.github.com/users/arthurmelton/following{/other_user}",
"gists_url": "https://api.github.com/users/arthurmelton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arthurmelton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arthurmelton/subscriptions",
"organizations_url": "https://api.github.com/users/arthurmelton/orgs",
"repos_url": "https://api.github.com/users/arthurmelton/repos",
"events_url": "https://api.github.com/users/arthurmelton/events{/privacy}",
"received_events_url": "https://api.github.com/users/arthurmelton/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-07-15T11:23:55
| 2024-12-24T22:40:53
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I would like a flag to ignore this condition: https://github.com/ollama/ollama/blob/e9f7f3602961d2b0beaff27144ec89301c2173ca/llm/server.go#L128-L135
I use TrueNAS SCALE to store and run my models. It uses ZFS as the filesystem, which means the ARC is using a lot of the memory. I don't know what specifically TrueNAS does to achieve this, but they set the ARC size to behave like BSD, where it naturally tries to use as much memory as possible. It will shrink if something else tries to use more RAM, though.
Ollama thus freaks out when I try to run a model that it thinks will make it OOM, but I actually do have enough memory. In my mind a flag would be the easiest to implement, but maybe it could try to be smart and subtract the ZFS ARC from the calculations?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5700/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5700/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8553
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8553/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8553/comments
|
https://api.github.com/repos/ollama/ollama/issues/8553/events
|
https://github.com/ollama/ollama/issues/8553
| 2,807,726,620
|
I_kwDOJ0Z1Ps6nWoIc
| 8,553
|
When LLM generates empty string response, `eval_duration` is missing.
|
{
"login": "wch",
"id": 86978,
"node_id": "MDQ6VXNlcjg2OTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/86978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wch",
"html_url": "https://github.com/wch",
"followers_url": "https://api.github.com/users/wch/followers",
"following_url": "https://api.github.com/users/wch/following{/other_user}",
"gists_url": "https://api.github.com/users/wch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wch/subscriptions",
"organizations_url": "https://api.github.com/users/wch/orgs",
"repos_url": "https://api.github.com/users/wch/repos",
"events_url": "https://api.github.com/users/wch/events{/privacy}",
"received_events_url": "https://api.github.com/users/wch/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-23T19:21:04
| 2025-01-28T09:16:25
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I noticed that Ollama sometimes produces responses where `eval_duration` is missing. I've seen it happen when the response is simply an empty string -- just the stop message for a streaming response.
To reproduce:
```bash
curl http://localhost:11434/api/chat -d \
'{"model":"llama3.2:3b","options":{"temperature":0},"messages":[
{"content":"If the user asks you to say nothing, simply respond with an empty string.","role":"system"},
{"content":"say nothing","role":"user"}
]}'
```
This is the response I get:
```json
{"model":"llama3.2:3b","created_at":"2025-01-23T19:17:22.121346Z","message":{"role":"assistant","content":""},"done_reason":"stop","done":true,"total_duration":141674875,"load_duration":13300000,"prompt_eval_count":43,"prompt_eval_duration":127000000,"eval_count":1}
```
Notice that `eval_duration` is missing. This causes problems with some code that we're using which expects that field to be present.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8553/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5464
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5464/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5464/comments
|
https://api.github.com/repos/ollama/ollama/issues/5464/events
|
https://github.com/ollama/ollama/issues/5464
| 2,389,104,798
|
I_kwDOJ0Z1Ps6OZtie
| 5,464
|
`Ollama` fails to work with `CUDA` after `Linux` suspend/resume, unlike other `CUDA` services
|
{
"login": "bwnjnOEI",
"id": 16009223,
"node_id": "MDQ6VXNlcjE2MDA5MjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/16009223?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwnjnOEI",
"html_url": "https://github.com/bwnjnOEI",
"followers_url": "https://api.github.com/users/bwnjnOEI/followers",
"following_url": "https://api.github.com/users/bwnjnOEI/following{/other_user}",
"gists_url": "https://api.github.com/users/bwnjnOEI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwnjnOEI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwnjnOEI/subscriptions",
"organizations_url": "https://api.github.com/users/bwnjnOEI/orgs",
"repos_url": "https://api.github.com/users/bwnjnOEI/repos",
"events_url": "https://api.github.com/users/bwnjnOEI/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwnjnOEI/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2024-07-03T17:16:36
| 2025-01-23T04:33:57
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Every time Linux resumes from suspend, it fails to correctly reload `CUDA`. This is easily worked around with `sudo rmmod nvidia_uvm` followed by `sudo modprobe nvidia_uvm`; afterwards, every CUDA-dependent service except `Ollama` can use `CUDA` again (e.g., `torch.randn((2,2)).cuda(0)` works). Ollama's GPU mode can only be restored by restarting the Ollama service: `systemctl daemon-reload` and `systemctl restart ollama`. I'm not sure whether I've missed something, such as a specific `Ollama` setting, so I'm reporting this as a bug.
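The workaround described above could be collected into a small resume script (a sketch only; the module name and service name are taken from the report, everything else is assumed from a standard systemd install):

```shell
#!/bin/sh
# Reload the nvidia_uvm kernel module after resume so CUDA can initialize again.
sudo rmmod nvidia_uvm
sudo modprobe nvidia_uvm

# Ollama still holds a stale CUDA context at this point,
# so the service itself must also be restarted.
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

A script like this could be dropped into a systemd sleep hook so it runs automatically on resume, though that part is untested here.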
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5464/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5464/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7252
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7252/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7252/comments
|
https://api.github.com/repos/ollama/ollama/issues/7252/events
|
https://github.com/ollama/ollama/issues/7252
| 2,597,064,614
|
I_kwDOJ0Z1Ps6azA-m
| 7,252
|
add h2ovl-mississippi-800m and h2ovl-mississippi-2b
|
{
"login": "a-ghorbani",
"id": 11278140,
"node_id": "MDQ6VXNlcjExMjc4MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/11278140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/a-ghorbani",
"html_url": "https://github.com/a-ghorbani",
"followers_url": "https://api.github.com/users/a-ghorbani/followers",
"following_url": "https://api.github.com/users/a-ghorbani/following{/other_user}",
"gists_url": "https://api.github.com/users/a-ghorbani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/a-ghorbani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/a-ghorbani/subscriptions",
"organizations_url": "https://api.github.com/users/a-ghorbani/orgs",
"repos_url": "https://api.github.com/users/a-ghorbani/repos",
"events_url": "https://api.github.com/users/a-ghorbani/events{/privacy}",
"received_events_url": "https://api.github.com/users/a-ghorbani/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 3
| 2024-10-18T09:59:30
| 2024-10-22T01:32:51
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/h2oai/h2ovl-mississippi-2b
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7252/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7252/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6338
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6338/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6338/comments
|
https://api.github.com/repos/ollama/ollama/issues/6338/events
|
https://github.com/ollama/ollama/issues/6338
| 2,463,389,253
|
I_kwDOJ0Z1Ps6S1FZF
| 6,338
|
ollama slower than llama.cpp
|
{
"login": "phly95",
"id": 3526540,
"node_id": "MDQ6VXNlcjM1MjY1NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3526540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phly95",
"html_url": "https://github.com/phly95",
"followers_url": "https://api.github.com/users/phly95/followers",
"following_url": "https://api.github.com/users/phly95/following{/other_user}",
"gists_url": "https://api.github.com/users/phly95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phly95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phly95/subscriptions",
"organizations_url": "https://api.github.com/users/phly95/orgs",
"repos_url": "https://api.github.com/users/phly95/repos",
"events_url": "https://api.github.com/users/phly95/events{/privacy}",
"received_events_url": "https://api.github.com/users/phly95/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 11
| 2024-08-13T13:41:53
| 2025-01-15T21:26:29
| 2025-01-15T21:26:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using the llm-benchmark tool with Ollama (https://github.com/MinhNgyuen/llm-benchmark), I get around 80 t/s with Gemma 2 2B. When asking the same questions to llama.cpp in conversation mode, I get 130 t/s. The llama.cpp command I'm running is `.\llama-cli -m gemma-2-2b-it-Q4_K_M.gguf --threads 16 -ngl 27 --mlock --port 11484 --top_k 40 --repeat_penalty 1.1 --min_p 0.05 --top_p 0.95 --prompt-cache-all -cb -np 4 --batch-size 512 -cnv`.
Is there a reason that Ollama is ~38% slower than llama.cpp here?
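For an apples-to-apples comparison, several of those llama.cpp sampling flags have Modelfile equivalents in Ollama. A sketch (parameter names from Ollama's Modelfile format; the model tag `gemma2:2b` and derived model name are assumptions):

```shell
# Mirror the llama.cpp sampling flags above in an Ollama Modelfile.
cat > Modelfile <<'EOF'
FROM gemma2:2b
PARAMETER top_k 40
PARAMETER top_p 0.95
PARAMETER repeat_penalty 1.1
PARAMETER num_thread 16
EOF
# Then build and run it (requires a local Ollama install):
#   ollama create gemma2-tuned -f Modelfile
#   ollama run gemma2-tuned
```

Matching the sampling and thread settings rules out configuration differences before attributing the gap to the runtimes themselves.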
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.5
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6338/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3618
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3618/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3618/comments
|
https://api.github.com/repos/ollama/ollama/issues/3618/events
|
https://github.com/ollama/ollama/pull/3618
| 2,240,864,654
|
PR_kwDOJ0Z1Ps5sh8xe
| 3,618
|
Added grammar (and json schemas and CPU-only Dockerfile) support (from ollama/ollama PR #1606)
|
{
"login": "markcda",
"id": 35887062,
"node_id": "MDQ6VXNlcjM1ODg3MDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/35887062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markcda",
"html_url": "https://github.com/markcda",
"followers_url": "https://api.github.com/users/markcda/followers",
"following_url": "https://api.github.com/users/markcda/following{/other_user}",
"gists_url": "https://api.github.com/users/markcda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/markcda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/markcda/subscriptions",
"organizations_url": "https://api.github.com/users/markcda/orgs",
"repos_url": "https://api.github.com/users/markcda/repos",
"events_url": "https://api.github.com/users/markcda/events{/privacy}",
"received_events_url": "https://api.github.com/users/markcda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 12
| 2024-04-12T20:32:00
| 2024-08-07T20:42:27
| 2024-06-01T22:06:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3618",
"html_url": "https://github.com/ollama/ollama/pull/3618",
"diff_url": "https://github.com/ollama/ollama/pull/3618.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3618.patch",
"merged_at": null
}
|
Updated version of #1606.
|
{
"login": "markcda",
"id": 35887062,
"node_id": "MDQ6VXNlcjM1ODg3MDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/35887062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markcda",
"html_url": "https://github.com/markcda",
"followers_url": "https://api.github.com/users/markcda/followers",
"following_url": "https://api.github.com/users/markcda/following{/other_user}",
"gists_url": "https://api.github.com/users/markcda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/markcda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/markcda/subscriptions",
"organizations_url": "https://api.github.com/users/markcda/orgs",
"repos_url": "https://api.github.com/users/markcda/repos",
"events_url": "https://api.github.com/users/markcda/events{/privacy}",
"received_events_url": "https://api.github.com/users/markcda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3618/reactions",
"total_count": 24,
"+1": 24,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3618/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2148
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2148/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2148/comments
|
https://api.github.com/repos/ollama/ollama/issues/2148/events
|
https://github.com/ollama/ollama/pull/2148
| 2,095,016,956
|
PR_kwDOJ0Z1Ps5kx9Ob
| 2,148
|
Refine Accelerate usage on mac
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-23T00:26:46
| 2024-01-23T00:57:02
| 2024-01-23T00:56:58
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2148",
"html_url": "https://github.com/ollama/ollama/pull/2148",
"diff_url": "https://github.com/ollama/ollama/pull/2148.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2148.patch",
"merged_at": "2024-01-23T00:56:58"
}
|
On older Macs, Accelerate seems to cause crashes, but on AVX2-capable Macs it does not.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2148/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8211
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8211/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8211/comments
|
https://api.github.com/repos/ollama/ollama/issues/8211/events
|
https://github.com/ollama/ollama/pull/8211
| 2,754,599,721
|
PR_kwDOJ0Z1Ps6GAuJC
| 8,211
|
docker: upgrade rocm to 6.2.4
|
{
"login": "Pekkari",
"id": 13776314,
"node_id": "MDQ6VXNlcjEzNzc2MzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/13776314?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pekkari",
"html_url": "https://github.com/Pekkari",
"followers_url": "https://api.github.com/users/Pekkari/followers",
"following_url": "https://api.github.com/users/Pekkari/following{/other_user}",
"gists_url": "https://api.github.com/users/Pekkari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pekkari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pekkari/subscriptions",
"organizations_url": "https://api.github.com/users/Pekkari/orgs",
"repos_url": "https://api.github.com/users/Pekkari/repos",
"events_url": "https://api.github.com/users/Pekkari/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pekkari/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-12-22T10:55:40
| 2024-12-23T15:07:22
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8211",
"html_url": "https://github.com/ollama/ollama/pull/8211",
"diff_url": "https://github.com/ollama/ollama/pull/8211.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8211.patch",
"merged_at": null
}
|
This patch upgrades ROCm to the latest available version.
Fixes: #7941
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8211/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6403
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6403/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6403/comments
|
https://api.github.com/repos/ollama/ollama/issues/6403/events
|
https://github.com/ollama/ollama/pull/6403
| 2,471,870,634
|
PR_kwDOJ0Z1Ps54qGVP
| 6,403
|
feature: simple webclient
|
{
"login": "TecDroiD",
"id": 122358,
"node_id": "MDQ6VXNlcjEyMjM1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/122358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TecDroiD",
"html_url": "https://github.com/TecDroiD",
"followers_url": "https://api.github.com/users/TecDroiD/followers",
"following_url": "https://api.github.com/users/TecDroiD/following{/other_user}",
"gists_url": "https://api.github.com/users/TecDroiD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TecDroiD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TecDroiD/subscriptions",
"organizations_url": "https://api.github.com/users/TecDroiD/orgs",
"repos_url": "https://api.github.com/users/TecDroiD/repos",
"events_url": "https://api.github.com/users/TecDroiD/events{/privacy}",
"received_events_url": "https://api.github.com/users/TecDroiD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-08-18T09:08:55
| 2024-11-21T10:17:57
| 2024-11-21T09:50:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6403",
"html_url": "https://github.com/ollama/ollama/pull/6403",
"diff_url": "https://github.com/ollama/ollama/pull/6403.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6403.patch",
"merged_at": null
}
|
This is a brain-dead-easy (200-line) web client example for Ollama.
I wrote it last night because I'm too stupid for the more complex ones I've found online and don't even need them.
Well, there's still a little work to do, but it's running, and maybe some people are interested in it. I haven't found much about contributing these things in CONTRIBUTING.md, so I'm just trying to see if you're interested.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6403/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8674
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8674/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8674/comments
|
https://api.github.com/repos/ollama/ollama/issues/8674/events
|
https://github.com/ollama/ollama/issues/8674
| 2,819,400,345
|
I_kwDOJ0Z1Ps6oDKKZ
| 8,674
|
No compatible GPUs were discovered
|
{
"login": "mikedolx",
"id": 15738117,
"node_id": "MDQ6VXNlcjE1NzM4MTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/15738117?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikedolx",
"html_url": "https://github.com/mikedolx",
"followers_url": "https://api.github.com/users/mikedolx/followers",
"following_url": "https://api.github.com/users/mikedolx/following{/other_user}",
"gists_url": "https://api.github.com/users/mikedolx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikedolx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikedolx/subscriptions",
"organizations_url": "https://api.github.com/users/mikedolx/orgs",
"repos_url": "https://api.github.com/users/mikedolx/repos",
"events_url": "https://api.github.com/users/mikedolx/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikedolx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-29T21:47:22
| 2025-01-29T22:06:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I'm currently trying to set up Ollama within Docker, using the following `docker-compose.yml`:
```yaml
services:
ollama:
container_name: ollama
restart: unless-stopped
image: ollama/ollama:latest
ports:
- 11434:11434
environment:
- OLLAMA_KEEP_ALIVE=24h
networks:
- ollama-docker
volumes:
- ollama:/root/.ollama
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: "1"
capabilities: [gpu]
ollama-webui:
image: ghcr.io/open-webui/open-webui:main
container_name: ollama-webui
volumes:
- webui:/app/backend/data
depends_on:
- ollama
ports:
- 11080:8080
environment: # https://docs.openwebui.com/getting-started/env-configuration#default_models
- OLLAMA_BASE_URLS=http://host.docker.internal:7869 #comma separated ollama hosts
- ENV=dev
- WEBUI_AUTH=False
- WEBUI_NAME=valiantlynx AI
- WEBUI_URL=http://localhost:8080
- WEBUI_SECRET_KEY=t0p-s3cr3t
extra_hosts:
- host.docker.internal:host-gateway
restart: unless-stopped
networks:
- ollama-docker
volumes:
webui:
ollama:
networks:
ollama-docker:
external: false
```
When I start the containers and check the ollama container's logs, I see the following:
```
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
2025/01/29 21:41:11 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
time=2025-01-29T21:41:11.597Z level=INFO source=images.go:432 msg="total blobs: 0"
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
time=2025-01-29T21:41:11.597Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
time=2025-01-29T21:41:11.597Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.7-0-ga420a45-dirty)"
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
time=2025-01-29T21:41:11.598Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx]"
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
time=2025-01-29T21:41:11.598Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
time=2025-01-29T21:41:11.601Z level=WARN source=gpu.go:623 msg="unknown error initializing cuda driver library /usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03: cuda driver library init failure: 999. see https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for more information"
```
Apparently, Ollama is unable to recognize my GPU.
Running `nvidia-smi` on the host gives the following result (which tells me that, at least on the host, everything is correctly installed):
```bash
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.216.03 Driver Version: 535.216.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3060 On | 00000000:00:10.0 Off | N/A |
| 0% 47C P5 18W / 170W | 1MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
I can run the same command within the ollama container and get this result:
```bash
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.216.03 Driver Version: 535.216.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3060 On | 00000000:00:10.0 Off | N/A |
| 0% 50C P5 18W / 170W | 1MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
On the host machine I have installed the NVIDIA driver using this method: https://ubuntu.com/server/docs/nvidia-drivers-installation.
I have also installed the cuda toolkit following these instructions: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
I have also installed the nvidia-container-toolkit and set it up in the `/etc/docker/daemon.json` accordingly.
I have read all the troubleshooting tips here: https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md and tried the hints. (Yes, I've reloaded the nvidia_uvm module several times.)
I have passed through an ASUS RTX 3060 12GB via IOMMU from my Proxmox host (v7.x) to the Docker VM, which runs Ubuntu. The GPU itself appears to be working correctly, but it fails to be recognized by the Ollama container.
This is what my `/etc/docker/daemon.json` looks like:
```json
{
"runtimes": {
"nvidia": {
"args": [],
"path": "nvidia-container-runtime"
}
},
"exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```
Any ideas what I could try?
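Since `nvidia-smi` works inside the container but CUDA init fails with error 999, one configuration worth double-checking (an assumption, not a confirmed fix) is the driver capabilities exposed to the container: `nvidia-smi` only needs the `utility` capability, while CUDA initialization additionally needs `compute`. A hedged sketch of a compose fragment (the service name `ollama` is illustrative):

```yaml
# docker-compose fragment (hypothetical service definition)
services:
  ollama:
    image: ollama/ollama
    runtime: nvidia
    environment:
      # "utility" alone is enough for nvidia-smi; CUDA init also needs "compute"
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
      - NVIDIA_VISIBLE_DEVICES=all
```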
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
ollama version is 0.5.7-0-ga420a45-dirty
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8674/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3940
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3940/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3940/comments
|
https://api.github.com/repos/ollama/ollama/issues/3940/events
|
https://github.com/ollama/ollama/issues/3940
| 2,265,561,113
|
I_kwDOJ0Z1Ps6HCbgZ
| 3,940
|
GPU offloading with little CPU RAM
|
{
"login": "dcfidalgo",
"id": 15979778,
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcfidalgo",
"html_url": "https://github.com/dcfidalgo",
"followers_url": "https://api.github.com/users/dcfidalgo/followers",
"following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions",
"organizations_url": "https://api.github.com/users/dcfidalgo/orgs",
"repos_url": "https://api.github.com/users/dcfidalgo/repos",
"events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcfidalgo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 17
| 2024-04-26T11:13:04
| 2025-01-10T08:55:00
| 2024-07-03T23:58:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Thanks for this amazing project, I really enjoy the simple, concise and easy-to-start interface! Keep up the fantastic work!
I have the following issue: I have a compute instance in the cloud with one NVIDIA A100 80GB and 16GB of CPU memory running Ubuntu.
When I try to run the llama3:70b model, it takes the Ollama server a long time to load the model onto the GPU, and as a result I get an "Error: timed out waiting for llama runner to start" on the `ollama run llama3:70b` command after 10 min (I could not figure out how to increase this timeout).
I noticed that Ollama first tries to load the whole model into the page cache; however, in my case it does not fit entirely. Only after the entire model has been read once does offloading to the GPU begin. My guess is that, since the initial pages got overwritten, it has to read the entire model again from disk.
I was wondering if there is a way to start the offloading right from the beginning. Not sure if this is even possible, but I think in my case it would help.
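As an aside on the timeout mentioned above: later Ollama releases expose an `OLLAMA_LOAD_TIMEOUT` environment variable (not available in 0.1.32 — worth verifying against your version) that could be raised via a systemd override, e.g.:

```ini
# /etc/systemd/system/ollama.service.d/override.conf (hypothetical path)
[Service]
Environment="OLLAMA_LOAD_TIMEOUT=30m"
```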
This is the log of the server:
```shell
...
Apr 26 10:29:40 qa-mpcdf ollama[7668]: llm_load_print_meta: ssm_d_state = 0
Apr 26 10:29:40 qa-mpcdf ollama[7668]: llm_load_print_meta: ssm_dt_rank = 0
Apr 26 10:29:40 qa-mpcdf ollama[7668]: llm_load_print_meta: model type = 70B
Apr 26 10:29:40 qa-mpcdf ollama[7668]: llm_load_print_meta: model ftype = Q4_0
Apr 26 10:29:40 qa-mpcdf ollama[7668]: llm_load_print_meta: model params = 70.55 B
Apr 26 10:29:40 qa-mpcdf ollama[7668]: llm_load_print_meta: model size = 37.22 GiB (4.53 BPW)
Apr 26 10:29:40 qa-mpcdf ollama[7668]: llm_load_print_meta: general.name = Meta-Llama-3-70B-Instruct
Apr 26 10:29:40 qa-mpcdf ollama[7668]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
Apr 26 10:29:40 qa-mpcdf ollama[7668]: llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
Apr 26 10:29:40 qa-mpcdf ollama[7668]: llm_load_print_meta: LF token = 128 'Ä'
Apr 26 10:29:40 qa-mpcdf ollama[7668]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
Apr 26 10:29:40 qa-mpcdf ollama[7668]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Apr 26 10:29:40 qa-mpcdf ollama[7668]: ggml_cuda_init: found 1 CUDA devices:
Apr 26 10:29:40 qa-mpcdf ollama[7668]: Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
Apr 26 10:29:40 qa-mpcdf ollama[7668]: llm_load_tensors: ggml ctx size = 0.55 MiB
Apr 26 10:32:26 qa-mpcdf ollama[7668]: time=2024-04-26T10:32:26.839Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:35651/health\>
Apr 26 10:32:27 qa-mpcdf ollama[7668]: time=2024-04-26T10:32:27.049Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding"
Apr 26 10:35:11 qa-mpcdf ollama[7668]: time=2024-04-26T10:35:11.913Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:35651/health\>
Apr 26 10:35:12 qa-mpcdf ollama[7668]: time=2024-04-26T10:35:12.122Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding"
Apr 26 10:35:52 qa-mpcdf ollama[7668]: time=2024-04-26T10:35:52.419Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:35651/health\>
Apr 26 10:35:52 qa-mpcdf ollama[7668]: time=2024-04-26T10:35:52.620Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding"
Apr 26 10:36:07 qa-mpcdf ollama[7668]: llm_load_tensors: offloading 80 repeating layers to GPU
Apr 26 10:36:07 qa-mpcdf ollama[7668]: llm_load_tensors: offloading non-repeating layers to GPU
Apr 26 10:36:07 qa-mpcdf ollama[7668]: llm_load_tensors: offloaded 81/81 layers to GPU
Apr 26 10:36:07 qa-mpcdf ollama[7668]: llm_load_tensors: CPU buffer size = 563.62 MiB
Apr 26 10:36:07 qa-mpcdf ollama[7668]: llm_load_tensors: CUDA0 buffer size = 37546.98 MiB
Apr 26 10:36:18 qa-mpcdf ollama[7668]: .....time=2024-04-26T10:36:18.482Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:35651/he>
Apr 26 10:36:18 qa-mpcdf ollama[7668]: time=2024-04-26T10:36:18.683Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding"
Apr 26 10:36:51 qa-mpcdf ollama[7668]: .........time=2024-04-26T10:36:51.360Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:3565>
Apr 26 10:36:51 qa-mpcdf ollama[7668]: time=2024-04-26T10:36:51.561Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding"
Apr 26 10:38:43 qa-mpcdf ollama[7668]: ............................time=2024-04-26T10:38:43.051Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"ht>
Apr 26 10:38:43 qa-mpcdf ollama[7668]: time=2024-04-26T10:38:43.251Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding"
Apr 26 10:39:07 qa-mpcdf ollama[7668]: .......time=2024-04-26T10:39:07.311Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:35651/>
Apr 26 10:39:07 qa-mpcdf ollama[7668]: time=2024-04-26T10:39:07.513Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding"
Apr 26 10:39:24 qa-mpcdf ollama[7668]: ....time=2024-04-26T10:39:24.763Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:35651/hea>
Apr 26 10:39:24 qa-mpcdf ollama[7668]: time=2024-04-26T10:39:24.964Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding"
Apr 26 10:39:39 qa-mpcdf ollama[7668]: ....time=2024-04-26T10:39:39.396Z level=ERROR source=routes.go:120 msg="error loading llama server" error="timed out waiting for llama runner to start>
Apr 26 10:39:39 qa-mpcdf ollama[7668]: time=2024-04-26T10:39:39.396Z level=DEBUG source=server.go:832 msg="stopping llama server"
Apr 26 10:39:39 qa-mpcdf ollama[7668]: [GIN] 2024/04/26 - 10:39:39 | 500 | 10m1s | 127.0.0.1 | POST "/api/chat"
```
Thanks again and have a great day!
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3940/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4917
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4917/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4917/comments
|
https://api.github.com/repos/ollama/ollama/issues/4917/events
|
https://github.com/ollama/ollama/pull/4917
| 2,341,218,867
|
PR_kwDOJ0Z1Ps5x1uxG
| 4,917
|
convert bert model from safetensors
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-07T21:56:33
| 2024-08-21T18:48:31
| 2024-08-21T18:48:29
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4917",
"html_url": "https://github.com/ollama/ollama/pull/4917",
"diff_url": "https://github.com/ollama/ollama/pull/4917.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4917.patch",
"merged_at": "2024-08-21T18:48:29"
}
|
- add a `moreParser` interface which converters can implement to signal a need for more configuration parsing
- fix a bug in the tokenizer.json parsing where the vocab size might exceed the intended count if added_token.json contains tokens that are already defined
- fix a bug in cmd where create would flatten the directory structure, potentially creating conflicting files
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4917/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4917/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1921
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1921/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1921/comments
|
https://api.github.com/repos/ollama/ollama/issues/1921/events
|
https://github.com/ollama/ollama/pull/1921
| 2,076,058,971
|
PR_kwDOJ0Z1Ps5jxm2m
| 1,921
|
fix gpu_test.go Error (same type) uint64->uint32
|
{
"login": "fpreiss",
"id": 17441607,
"node_id": "MDQ6VXNlcjE3NDQxNjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/17441607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fpreiss",
"html_url": "https://github.com/fpreiss",
"followers_url": "https://api.github.com/users/fpreiss/followers",
"following_url": "https://api.github.com/users/fpreiss/following{/other_user}",
"gists_url": "https://api.github.com/users/fpreiss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fpreiss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fpreiss/subscriptions",
"organizations_url": "https://api.github.com/users/fpreiss/orgs",
"repos_url": "https://api.github.com/users/fpreiss/repos",
"events_url": "https://api.github.com/users/fpreiss/events{/privacy}",
"received_events_url": "https://api.github.com/users/fpreiss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-11T08:41:45
| 2024-01-11T13:22:23
| 2024-01-11T13:22:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1921",
"html_url": "https://github.com/ollama/ollama/pull/1921",
"diff_url": "https://github.com/ollama/ollama/pull/1921.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1921.patch",
"merged_at": "2024-01-11T13:22:23"
}
|
When running the test suite on Linux with a CUDA build, I get the following error without this commit:
```log
--- FAIL: TestBasicGetGPUInfo (0.06s)
gpu_test.go:21:
Error Trace: /build/ollama-cuda/src/ollama/gpu/gpu_test.go:21
Error: Elements should be the same type
Test: TestBasicGetGPUInfo
FAIL
FAIL github.com/jmorganca/ollama/gpu 0.078s
```
This was due to a type mismatch between `GetGPUInfo()` and the corresponding `TestBasicGetGPUInfo()` test. This simple commit fixes it on the test side, and now I get the following test output:
```log
ok github.com/jmorganca/ollama/gpu 0.090s
```
(my first line of go btw.)
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1921/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1363
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1363/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1363/comments
|
https://api.github.com/repos/ollama/ollama/issues/1363/events
|
https://github.com/ollama/ollama/issues/1363
| 2,022,629,683
|
I_kwDOJ0Z1Ps54juEz
| 1,363
|
Meditron stops after the first line of answer
|
{
"login": "orkutmuratyilmaz",
"id": 7395916,
"node_id": "MDQ6VXNlcjczOTU5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7395916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orkutmuratyilmaz",
"html_url": "https://github.com/orkutmuratyilmaz",
"followers_url": "https://api.github.com/users/orkutmuratyilmaz/followers",
"following_url": "https://api.github.com/users/orkutmuratyilmaz/following{/other_user}",
"gists_url": "https://api.github.com/users/orkutmuratyilmaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orkutmuratyilmaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orkutmuratyilmaz/subscriptions",
"organizations_url": "https://api.github.com/users/orkutmuratyilmaz/orgs",
"repos_url": "https://api.github.com/users/orkutmuratyilmaz/repos",
"events_url": "https://api.github.com/users/orkutmuratyilmaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/orkutmuratyilmaz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-12-03T17:14:55
| 2023-12-06T16:08:23
| 2023-12-06T16:08:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello all,
I've tried Meditron with "`ollama run meditron`", and after that I asked "what are the symptoms of Kawasaki disease?".
The answer started with a one-line definition of Kawasaki disease and stopped after that.
I've tried different questions, but the results were only one-liners.
What could be the reason for that?
Best,
Orkut
|
{
"login": "orkutmuratyilmaz",
"id": 7395916,
"node_id": "MDQ6VXNlcjczOTU5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7395916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orkutmuratyilmaz",
"html_url": "https://github.com/orkutmuratyilmaz",
"followers_url": "https://api.github.com/users/orkutmuratyilmaz/followers",
"following_url": "https://api.github.com/users/orkutmuratyilmaz/following{/other_user}",
"gists_url": "https://api.github.com/users/orkutmuratyilmaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orkutmuratyilmaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orkutmuratyilmaz/subscriptions",
"organizations_url": "https://api.github.com/users/orkutmuratyilmaz/orgs",
"repos_url": "https://api.github.com/users/orkutmuratyilmaz/repos",
"events_url": "https://api.github.com/users/orkutmuratyilmaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/orkutmuratyilmaz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1363/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/834
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/834/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/834/comments
|
https://api.github.com/repos/ollama/ollama/issues/834/events
|
https://github.com/ollama/ollama/issues/834
| 1,949,136,320
|
I_kwDOJ0Z1Ps50LXXA
| 834
|
Bring back the EMBED feature in the Modelfile
|
{
"login": "vividfog",
"id": 75913791,
"node_id": "MDQ6VXNlcjc1OTEzNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/75913791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vividfog",
"html_url": "https://github.com/vividfog",
"followers_url": "https://api.github.com/users/vividfog/followers",
"following_url": "https://api.github.com/users/vividfog/following{/other_user}",
"gists_url": "https://api.github.com/users/vividfog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vividfog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vividfog/subscriptions",
"organizations_url": "https://api.github.com/users/vividfog/orgs",
"repos_url": "https://api.github.com/users/vividfog/repos",
"events_url": "https://api.github.com/users/vividfog/events{/privacy}",
"received_events_url": "https://api.github.com/users/vividfog/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6100196012,
"node_id": "LA_kwDOJ0Z1Ps8AAAABa5marA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feedback%20wanted",
"name": "feedback wanted",
"color": "0e8a16",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 18
| 2023-10-18T08:10:36
| 2024-06-28T20:44:36
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I appreciate the effort to keep the codebase simple; Ollama is second to none in its elegance. But removing the feature within a week was quick work, without much debate about whether and how people use it, whether it really isn't valuable, or whether, on second thought, it's a fantastic feature. I am going to miss this feature a lot and was highlighting it to others as an Ollama special treat. It was in daily use.
Related: #759 (feature removal), #501 (bug), #502 (documentation)
I'd like to bring some more viewpoints to this, as a heavy user who's tried everything I've gotten my hands on:
1. **User experience in comparison to alternatives was great.** Ollama comes with an ecosystem of APIs and chatbots. With nothing else to install, Ollama was a one-liner RAG chatbot with multi-line support. Upstream clients needed zero configuration to get these benefits for free.
2. **The alternatives are not good without plenty of developer effort** that regular people can't do. Now users need to stand up a client for this, and each of them is poor in its user experience in its own way. No match for Ollama out of the box. UX doesn't happen in a vacuum; it's in comparison to others. Ollama + any chatbot GUI + a dropdown to select a RAG model was all that was needed, but now that's no longer possible.
3. **The PrivateGPT example is no match even close,** I tried it and I've tried them all, built my own RAG routines at some scale for others. All else being equal, Ollama was actually the best no-bells-and-whistles RAG routine out there, ready to run in minutes with zero extra things to install and very few to learn. "Don't make me install new things" is an important UX perspective to this.
4. **Creating embeddings was a bit of extra work, but that's unavoidable if it's generic.** Again comparing to alternatives, all other methods need some work to make the embeddings too. Ollama's was easy, even if there can be an argument that "one line per embedding isn't elegant". Well it is in its simplicity. The rest is string manipulation.
5. **It was instantly fast at runtime.** Embeddings took a while to create, but at runtime there is no delay; it's just as instant as without embeddings.
6. **Turns out LLMs create totally usable embeddings.** Even if Llama2 or Mistral aren't embedding models on paper, they worked great in practice. I was using it daily with esoteric documents and it was fine. This was an issue in theory only.
7. **Instead of outright deletion, it really needed just some cleanup, but not immediately.** Finding the root cause for what made longer ingestions not work as a single batch. Create better documentation. That's it. _Then it would have been fine_ to park it for a long time. Even without changes it was usable, and there are always issues in a sufficiently large codebase.
I'll write this as a new issue so it can be tracked; maybe there's more feedback. Please consider bringing it back. I'm going to park at the v0.1.3 tag until new killer features come along. Thanks a lot for the great work! Please ask for community opinion with a clear issue headline before deprecating powerful capabilities in a breaking change, and give it a few weeks if not urgent.
Other thoughts and viewpoints welcome.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/834/reactions",
"total_count": 29,
"+1": 19,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 10,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/834/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1349
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1349/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1349/comments
|
https://api.github.com/repos/ollama/ollama/issues/1349/events
|
https://github.com/ollama/ollama/pull/1349
| 2,021,752,091
|
PR_kwDOJ0Z1Ps5g8gB8
| 1,349
|
handle ctrl+z
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-02T00:04:30
| 2023-12-02T00:21:50
| 2023-12-02T00:21:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1349",
"html_url": "https://github.com/ollama/ollama/pull/1349",
"diff_url": "https://github.com/ollama/ollama/pull/1349.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1349.patch",
"merged_at": "2023-12-02T00:21:49"
}
|
resolves #1332
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1349/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3726
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3726/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3726/comments
|
https://api.github.com/repos/ollama/ollama/issues/3726/events
|
https://github.com/ollama/ollama/issues/3726
| 2,249,816,826
|
I_kwDOJ0Z1Ps6GGXr6
| 3,726
|
Error while trying to run/pull models
|
{
"login": "Radeeshp",
"id": 82216452,
"node_id": "MDQ6VXNlcjgyMjE2NDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/82216452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Radeeshp",
"html_url": "https://github.com/Radeeshp",
"followers_url": "https://api.github.com/users/Radeeshp/followers",
"following_url": "https://api.github.com/users/Radeeshp/following{/other_user}",
"gists_url": "https://api.github.com/users/Radeeshp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Radeeshp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Radeeshp/subscriptions",
"organizations_url": "https://api.github.com/users/Radeeshp/orgs",
"repos_url": "https://api.github.com/users/Radeeshp/repos",
"events_url": "https://api.github.com/users/Radeeshp/events{/privacy}",
"received_events_url": "https://api.github.com/users/Radeeshp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-04-18T05:57:49
| 2024-07-11T03:47:03
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have a good internet connection, but I am still unable to run or pull models in Ollama.

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3726/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8436
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8436/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8436/comments
|
https://api.github.com/repos/ollama/ollama/issues/8436/events
|
https://github.com/ollama/ollama/issues/8436
| 2,789,070,510
|
I_kwDOJ0Z1Ps6mPdau
| 8,436
|
kindly make f32 tensor type available in ollama
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2025-01-15T07:49:35
| 2025-01-24T09:33:36
| 2025-01-24T09:33:36
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
There are many models available on Hugging Face in f32/b32, but in Ollama the highest available is f16. If possible, supporting higher-precision tensor types could yield better performance and results, since newer hardware can actually handle them well.
### OS
Linux
### GPU
AMD, Nvidia, Intel
### CPU
Intel, AMD, Apple
### Ollama version
_No response_
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8436/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4262
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4262/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4262/comments
|
https://api.github.com/repos/ollama/ollama/issues/4262/events
|
https://github.com/ollama/ollama/issues/4262
| 2,286,051,799
|
I_kwDOJ0Z1Ps6IQmHX
| 4,262
|
403 using zrok
|
{
"login": "quantumalchemy",
"id": 22033041,
"node_id": "MDQ6VXNlcjIyMDMzMDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/22033041?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quantumalchemy",
"html_url": "https://github.com/quantumalchemy",
"followers_url": "https://api.github.com/users/quantumalchemy/followers",
"following_url": "https://api.github.com/users/quantumalchemy/following{/other_user}",
"gists_url": "https://api.github.com/users/quantumalchemy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quantumalchemy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quantumalchemy/subscriptions",
"organizations_url": "https://api.github.com/users/quantumalchemy/orgs",
"repos_url": "https://api.github.com/users/quantumalchemy/repos",
"events_url": "https://api.github.com/users/quantumalchemy/events{/privacy}",
"received_events_url": "https://api.github.com/users/quantumalchemy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-08T17:01:53
| 2024-06-30T21:36:32
| 2024-06-30T21:36:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Re: https://github.com/ollama/ollama/issues/3269
This was fixed for ngrok, but ngrok is paid and has limits.
Is there any way to get it to work with zrok?
`--host-header` doesn't work with zrok.
zrok is open source.
_Originally posted by @quantumalchemy in https://github.com/ollama/ollama/issues/3269#issuecomment-2101017786_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4262/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4826
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4826/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4826/comments
|
https://api.github.com/repos/ollama/ollama/issues/4826/events
|
https://github.com/ollama/ollama/issues/4826
| 2,334,985,468
|
I_kwDOJ0Z1Ps6LLQz8
| 4,826
|
Model request: GLM-4 9B
|
{
"login": "mywwq",
"id": 133221105,
"node_id": "U_kgDOB_DK8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/133221105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mywwq",
"html_url": "https://github.com/mywwq",
"followers_url": "https://api.github.com/users/mywwq/followers",
"following_url": "https://api.github.com/users/mywwq/following{/other_user}",
"gists_url": "https://api.github.com/users/mywwq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mywwq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mywwq/subscriptions",
"organizations_url": "https://api.github.com/users/mywwq/orgs",
"repos_url": "https://api.github.com/users/mywwq/repos",
"events_url": "https://api.github.com/users/mywwq/events{/privacy}",
"received_events_url": "https://api.github.com/users/mywwq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 22
| 2024-06-05T06:08:18
| 2024-07-11T19:26:26
| 2024-07-09T16:34:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Add GLM-4 9B model
Model | Type | Seq Length | Download
-- | -- | -- | --
GLM-4-9B | Base | 8K | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-9b)
GLM-4-9B-Chat | Chat | 128K | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-9b-chat)
GLM-4-9B-Chat-1M | Chat | 1M | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-9b-chat-1m)
GLM-4V-9B | Chat | 8K | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4v-9b)
-----
When will the GLM-4 9B model be introduced?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4826/reactions",
"total_count": 41,
"+1": 15,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 12,
"rocket": 0,
"eyes": 14
}
|
https://api.github.com/repos/ollama/ollama/issues/4826/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1747
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1747/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1747/comments
|
https://api.github.com/repos/ollama/ollama/issues/1747/events
|
https://github.com/ollama/ollama/pull/1747
| 2,060,824,803
|
PR_kwDOJ0Z1Ps5i-O3L
| 1,747
|
Added Ollama-SwiftUI to integrations
|
{
"login": "kghandour",
"id": 6333447,
"node_id": "MDQ6VXNlcjYzMzM0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6333447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kghandour",
"html_url": "https://github.com/kghandour",
"followers_url": "https://api.github.com/users/kghandour/followers",
"following_url": "https://api.github.com/users/kghandour/following{/other_user}",
"gists_url": "https://api.github.com/users/kghandour/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kghandour/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kghandour/subscriptions",
"organizations_url": "https://api.github.com/users/kghandour/orgs",
"repos_url": "https://api.github.com/users/kghandour/repos",
"events_url": "https://api.github.com/users/kghandour/events{/privacy}",
"received_events_url": "https://api.github.com/users/kghandour/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-30T18:42:25
| 2024-01-02T14:47:50
| 2024-01-02T14:47:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1747",
"html_url": "https://github.com/ollama/ollama/pull/1747",
"diff_url": "https://github.com/ollama/ollama/pull/1747.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1747.patch",
"merged_at": "2024-01-02T14:47:50"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1747/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/546
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/546/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/546/comments
|
https://api.github.com/repos/ollama/ollama/issues/546/events
|
https://github.com/ollama/ollama/issues/546
| 1,899,617,808
|
I_kwDOJ0Z1Ps5xOd4Q
| 546
|
Request: `docker compose` support for Ollama server
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
| null |
[] | null | 17
| 2023-09-17T01:14:06
| 2024-12-23T00:56:10
| 2024-12-23T00:56:10
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be really nice if Ollama supported `docker compose` for the Ollama server.
This would enable one to run:
- `docker compose up`: start the Ollama server
- `docker compose down`: stop the Ollama server
`docker compose` imo has two benefits:
- A bit easier than having to deal with multiprocessing associated with `./ollama serve`
- Would enable Ollama server to be more OS independent, by outsourcing platform support to Docker
For reference, [LocalAI](https://github.com/go-skynet/LocalAI) supports this, and it works flawlessly, without having to deal with `brew install`s and compilation.
Perhaps https://github.com/sickcodes/Docker-OSX can be used as the base image, since Ollama currently just supports macOS-based installations.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/546/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7484
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7484/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7484/comments
|
https://api.github.com/repos/ollama/ollama/issues/7484/events
|
https://github.com/ollama/ollama/issues/7484
| 2,631,546,225
|
I_kwDOJ0Z1Ps6c2jVx
| 7,484
|
Invalid prompt generation when the request message exceeds the context size
|
{
"login": "b4rtaz",
"id": 12797776,
"node_id": "MDQ6VXNlcjEyNzk3Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/12797776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/b4rtaz",
"html_url": "https://github.com/b4rtaz",
"followers_url": "https://api.github.com/users/b4rtaz/followers",
"following_url": "https://api.github.com/users/b4rtaz/following{/other_user}",
"gists_url": "https://api.github.com/users/b4rtaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/b4rtaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/b4rtaz/subscriptions",
"organizations_url": "https://api.github.com/users/b4rtaz/orgs",
"repos_url": "https://api.github.com/users/b4rtaz/repos",
"events_url": "https://api.github.com/users/b4rtaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/b4rtaz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-11-03T23:12:36
| 2024-11-05T22:21:09
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello! You're doing a great job! Thank you so much!
I think I found a bug that occurs when the user message exceeds the `num_ctx` value in the API server.
I started the server in the debug mode: `OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 ollama serve`
The below JS script works correctly with the `x/llama3.2-vision:latest` model.
```js
async function test() {
const r = await fetch('http://127.0.0.1:11434/v1/chat/completions', {
method: 'POST',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json'
},
body: JSON.stringify({
model: 'x/llama3.2-vision:latest',
messages: [
{
role: 'user',
content: [
{
type: 'text',
text: 'describe the image.',
},
{
type: 'image_url',
image_url: {
url: IMAGE_BASE64
}
}
]
}
]
}),
});
const j = await r.json();
console.log(j);
}
```
In the console I can see:
```
time=2024-11-03T23:59:41.397+01:00 level=DEBUG source=routes.go:1453 msg="chat request" images=1 prompt="<|start_header_id|>user<|end_header_id|>\n\ndescribe the image\n\n[img-0]<|image|><|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
```
In this case the generated prompt looks correct. But if I change `text: 'describe the image.',` to `text: 'describe the image.'.repeat(200),` in my script, then I see in the console:
```
time=2024-11-04T00:04:30.828+01:00 level=DEBUG source=routes.go:1453 msg="chat request" images=1 prompt="<|start_header_id|>user<|end_header_id|>\n\n[img-0]<|image|><|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
```
So for some reason the content after `<|start_header_id|>user<|end_header_id|>\n\n` has disappeared. The problem here is that the API returns a response generated without the queried message.
When I increase the `num_ctx` value, it starts working again.
**Expected behavior**: I think the API should return an error stating that the request contains a message that is too long.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.4.0-rc6
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7484/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7064
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7064/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7064/comments
|
https://api.github.com/repos/ollama/ollama/issues/7064/events
|
https://github.com/ollama/ollama/pull/7064
| 2,559,479,104
|
PR_kwDOJ0Z1Ps59RMvH
| 7,064
|
Update README.md, Terminal app "bb7"
|
{
"login": "drunkwcodes",
"id": 36228443,
"node_id": "MDQ6VXNlcjM2MjI4NDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/36228443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drunkwcodes",
"html_url": "https://github.com/drunkwcodes",
"followers_url": "https://api.github.com/users/drunkwcodes/followers",
"following_url": "https://api.github.com/users/drunkwcodes/following{/other_user}",
"gists_url": "https://api.github.com/users/drunkwcodes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drunkwcodes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drunkwcodes/subscriptions",
"organizations_url": "https://api.github.com/users/drunkwcodes/orgs",
"repos_url": "https://api.github.com/users/drunkwcodes/repos",
"events_url": "https://api.github.com/users/drunkwcodes/events{/privacy}",
"received_events_url": "https://api.github.com/users/drunkwcodes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-10-01T14:51:20
| 2024-11-21T08:03:11
| 2024-11-21T08:03:11
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7064",
"html_url": "https://github.com/ollama/ollama/pull/7064",
"diff_url": "https://github.com/ollama/ollama/pull/7064.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7064.patch",
"merged_at": "2024-11-21T08:03:11"
}
|
Introducing "bb7", an advanced chat bot designed for versatile interactions. Equipped with TTS (Text-to-Speech) capabilities, bb7 enables seamless voice conversations with users. It also supports local Retrieval-Augmented Generation (RAG), allowing for efficient document-based queries and responses, even without cloud dependency.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7064/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7051
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7051/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7051/comments
|
https://api.github.com/repos/ollama/ollama/issues/7051/events
|
https://github.com/ollama/ollama/issues/7051
| 2,557,401,866
|
I_kwDOJ0Z1Ps6YbtsK
| 7,051
|
Tool call support in Qwen 2.5 hallucinates with Maybe pattern
|
{
"login": "ChristianWeyer",
"id": 888718,
"node_id": "MDQ6VXNlcjg4ODcxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/888718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChristianWeyer",
"html_url": "https://github.com/ChristianWeyer",
"followers_url": "https://api.github.com/users/ChristianWeyer/followers",
"following_url": "https://api.github.com/users/ChristianWeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/ChristianWeyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChristianWeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChristianWeyer/subscriptions",
"organizations_url": "https://api.github.com/users/ChristianWeyer/orgs",
"repos_url": "https://api.github.com/users/ChristianWeyer/repos",
"events_url": "https://api.github.com/users/ChristianWeyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChristianWeyer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 5
| 2024-09-30T18:42:33
| 2024-10-12T01:18:48
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Following the Maybe pattern from https://python.useinstructor.com/concepts/maybe/, there is an issue with tool calling in a case like this:
```json
{
"messages": [
{
"role": "system",
"content": "Today's date is 2024-09-30. Please consider this when processing the availability information.\nIf you cannot extract the start date, use today.\nThis is the list of employees, with the initials, employee ID, full name, and skills:\n...\n\nDO NOT invent data. DO NOT hallucinate!"
},
{
"role": "user",
"content": "When does our colleague XYZ have two days available for a 2 days appointment?"
}
],
"model": "qwen2.5:7b-instruct-fp16",
"tool_choice": {
"type": "function",
"function": {
"name": "MaybeAvailabilityRequest"
}
},
"tools": [
{
"type": "function",
"function": {
"name": "MaybeAvailabilityRequest",
"description": "Correctly extracted `MaybeAvailabilityRequest` with all the required parameters with correct types",
"parameters": {
"$defs": {
"AvailabilityRequest": {
"properties": {
"personIds": {
"description": "List of person IDs to check availability for",
"items": {
"type": "integer"
},
"title": "Personids",
"type": "array"
},
"startDate": {
"description": "Start date for the availability check",
"title": "Startdate",
"type": "string"
},
"endDate": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "End date for the availability check",
"title": "Enddate"
},
"numberOfConsecutiveDays": {
"description": "Number of consecutive days required",
"title": "Numberofconsecutivedays",
"type": "integer"
}
},
"required": [
"personIds",
"startDate",
"endDate",
"numberOfConsecutiveDays"
],
"title": "AvailabilityRequest",
"type": "object"
}
},
"properties": {
"result": {
"anyOf": [
{
"$ref": "#/$defs/AvailabilityRequest"
},
{
"type": "null"
}
],
"default": null
},
"error": {
"default": false,
"title": "Error",
"type": "boolean"
},
"message": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Message"
}
},
"type": "object",
"required": []
}
}
}
]
}
```
it answers with this:
```json
{
"id": "chatcmpl-485",
"object": "chat.completion",
"created": 1727721983,
"model": "qwen2.5:7b-instruct-fp16",
"system_fingerprint": "fp_ollama",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "call_oe8h5as1",
"type": "function",
"function": {
"name": "MaybeAvailabilityRequest",
"arguments": "{\"error\":false,\"message\":null,\"result\":{\"availableDateRange\":[{\"end_date\":\"2024-10-03\",\"start_date\":\"2024-10-01\"}]}}"
}
}
]
},
"finish_reason": "tool_calls"
}
],
"usage": {
"prompt_tokens": 477,
"completion_tokens": 158,
"total_tokens": 635
}
}
```
This is clearly wrong and does not follow the JSON schema declared in the tool call.
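A quick stdlib-only check (key names copied verbatim from the request and response above) makes the mismatch concrete:

```python
import json

# Arguments string copied from the tool_calls response above
args = json.loads(
    '{"error":false,"message":null,"result":{"availableDateRange":'
    '[{"end_date":"2024-10-03","start_date":"2024-10-01"}]}}'
)

# The top-level keys do match the MaybeAvailabilityRequest properties...
assert set(args) <= {"result", "error", "message"}

# ...but the nested "result" invents a field the AvailabilityRequest
# definition never declares, and omits all four required ones:
required = {"personIds", "startDate", "endDate", "numberOfConsecutiveDays"}
print(sorted(set(args["result"]) - required))  # ['availableDateRange']
print(sorted(required - set(args["result"])))
```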
When I use plain prompting instead of function calling and craft the prompt manually, the model always gets the answer right.
cc @JianxinMa
Thanks!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7051/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1166
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1166/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1166/comments
|
https://api.github.com/repos/ollama/ollama/issues/1166/events
|
https://github.com/ollama/ollama/issues/1166
| 1,998,311,717
|
I_kwDOJ0Z1Ps53G9El
| 1,166
|
Since Modelfiles doesn't work How do we set default PARAMETER settings?
|
{
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/oliverbob/followers",
"following_url": "https://api.github.com/users/oliverbob/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverbob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverbob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverbob/subscriptions",
"organizations_url": "https://api.github.com/users/oliverbob/orgs",
"repos_url": "https://api.github.com/users/oliverbob/repos",
"events_url": "https://api.github.com/users/oliverbob/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverbob/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2023-11-17T05:34:37
| 2023-12-04T21:33:57
| 2023-12-04T21:33:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How can I set global settings for the current model without making a Modelfile? For example, setting parameters for the number of threads, GPUs, etc. for a user-chosen model?
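For example, something like a per-request `options` payload on the REST API (a sketch; the field names follow the Modelfile parameter names, and the model name and values here are just illustrations):

```python
import json

# Instead of baking PARAMETER lines into a Modelfile, send overrides
# per request via the "options" object of /api/generate.
payload = {
    "model": "llama2",          # example model name
    "prompt": "Why is the sky blue?",
    "options": {
        "num_thread": 8,        # CPU threads to use
        "num_gpu": 1,           # layers to offload to GPU
    },
}
# POST this to http://localhost:11434/api/generate
print(json.dumps(payload, indent=2))
```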
Thanks.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1166/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4506
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4506/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4506/comments
|
https://api.github.com/repos/ollama/ollama/issues/4506/events
|
https://github.com/ollama/ollama/issues/4506
| 2,303,653,321
|
I_kwDOJ0Z1Ps6JTvXJ
| 4,506
|
Any way to increase performance? And switch to F32?
|
{
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.github.com/users/AncientMystic/followers",
"following_url": "https://api.github.com/users/AncientMystic/following{/other_user}",
"gists_url": "https://api.github.com/users/AncientMystic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AncientMystic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AncientMystic/subscriptions",
"organizations_url": "https://api.github.com/users/AncientMystic/orgs",
"repos_url": "https://api.github.com/users/AncientMystic/repos",
"events_url": "https://api.github.com/users/AncientMystic/events{/privacy}",
"received_events_url": "https://api.github.com/users/AncientMystic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-05-17T22:35:26
| 2024-07-29T20:03:01
| 2024-05-18T22:47:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am using a Pascal Tesla P4 8 GB GPU and am looking for ways to increase performance.
Are there any tweaks or environment variables I can apply, or anything I can install (such as a particular PyTorch version) that will boost Ollama performance?
I am getting very mixed results: any model bigger than a few GB suffers a large performance loss, with generation roughly 2-6x slower than a model under 2 GB. Models of 4-5 GB are about 2x slower, 6-8 GB models are about 6x slower, and models of roughly 8-11 GB or more are either too slow to be usable or won't load at all.
In one recent test, a response of 521 tokens took 20 minutes on an 8 GB model (roughly 0.4 tokens/s).
Some slowdown is to be expected since I do not have enough VRAM for very large models, but it would be nice to get at least slightly faster results on models that should mostly, if not entirely, fit into VRAM.
Also, is there a setting to try F32 instead of F16 precision? Pascal cards seem to have much higher F32 performance, so I figure it is worth a try.
I have already set OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED to 1 to achieve lower VRAM usage (any more tweaks would be much appreciated).
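For reference, on the Linux systemd install those variables can be pinned with a drop-in override saved as `/etc/systemd/system/ollama.service.d/override.conf`, followed by `systemctl daemon-reload` and a service restart (a sketch; the current name for the load limit is `OLLAMA_MAX_LOADED_MODELS`, and the values are what I'm using, not tuned recommendations):

```ini
[Service]
Environment="OLLAMA_NUM_PARALLEL=1"
Environment="OLLAMA_MAX_LOADED_MODELS=1"
```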
System:
OS: proxmox
CPU: i7-6700
Mem: 64GB DDR4 2133mhz
Main drive: 1TB nvme
VM: ubuntu 22.04.4 LTS
Ollama: 0.1.38
vGPU: GRID-P4-2Q 6GB profile
Mem: 32gb
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4506/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4510
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4510/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4510/comments
|
https://api.github.com/repos/ollama/ollama/issues/4510/events
|
https://github.com/ollama/ollama/issues/4510
| 2,303,836,269
|
I_kwDOJ0Z1Ps6JUcBt
| 4,510
|
Would it be possible for Ollama to support re-rank models?
|
{
"login": "lyfuci",
"id": 12745441,
"node_id": "MDQ6VXNlcjEyNzQ1NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/12745441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lyfuci",
"html_url": "https://github.com/lyfuci",
"followers_url": "https://api.github.com/users/lyfuci/followers",
"following_url": "https://api.github.com/users/lyfuci/following{/other_user}",
"gists_url": "https://api.github.com/users/lyfuci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lyfuci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lyfuci/subscriptions",
"organizations_url": "https://api.github.com/users/lyfuci/orgs",
"repos_url": "https://api.github.com/users/lyfuci/repos",
"events_url": "https://api.github.com/users/lyfuci/events{/privacy}",
"received_events_url": "https://api.github.com/users/lyfuci/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 25
| 2024-05-18T04:05:07
| 2025-01-20T13:42:26
| 2024-09-02T20:57:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am using Ollama for my projects and it's been great. However, when using an AI app platform like Dify to build a RAG app, a rerank step is necessary. Would it be possible for Ollama to support rerank models?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4510/reactions",
"total_count": 34,
"+1": 32,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/4510/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1983
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1983/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1983/comments
|
https://api.github.com/repos/ollama/ollama/issues/1983/events
|
https://github.com/ollama/ollama/pull/1983
| 2,080,508,666
|
PR_kwDOJ0Z1Ps5kAzUB
| 1,983
|
use model defaults for `num_gqa`, `rope_frequency_base` and `rope_frequency_scale`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-13T23:18:45
| 2024-05-09T16:20:43
| 2024-05-09T16:06:14
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1983",
"html_url": "https://github.com/ollama/ollama/pull/1983",
"diff_url": "https://github.com/ollama/ollama/pull/1983.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1983.patch",
"merged_at": "2024-05-09T16:06:14"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1983/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1311
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1311/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1311/comments
|
https://api.github.com/repos/ollama/ollama/issues/1311/events
|
https://github.com/ollama/ollama/issues/1311
| 2,016,446,270
|
I_kwDOJ0Z1Ps54MIc-
| 1,311
|
ollama causes "no space left on device" on common ubuntu installation.
|
{
"login": "Dougie777",
"id": 77511128,
"node_id": "MDQ6VXNlcjc3NTExMTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/77511128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dougie777",
"html_url": "https://github.com/Dougie777",
"followers_url": "https://api.github.com/users/Dougie777/followers",
"following_url": "https://api.github.com/users/Dougie777/following{/other_user}",
"gists_url": "https://api.github.com/users/Dougie777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dougie777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dougie777/subscriptions",
"organizations_url": "https://api.github.com/users/Dougie777/orgs",
"repos_url": "https://api.github.com/users/Dougie777/repos",
"events_url": "https://api.github.com/users/Dougie777/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dougie777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-29T12:08:31
| 2024-01-20T00:04:10
| 2024-01-20T00:04:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Many Ubuntu installations expect bulk data to live under the /home folder, which is a common layout on many Linux distros. However, Ollama writes the massive model files to /usr/share/ollama. That location is fine for binaries and the like, but the model data should not go there.
Is there a way to specify the installation folder or data folder to avoid this problem?
Here is the problem in detail:
$ ollama run neural-chat
pulling manifest
pulling b8dab3241977... 69% ▕████████████████████ ▏(2.9 GB/4.1 GB, 5.9 MB/s) [5m49s:3m23s]
Error: write /usr/share/ollama/.ollama/models/blobs/sha256:b8dab32419772a5edabf4d72fc41d7c815a54524ae8d17644cadaf532422a40f-partial: no space left on device
I uninstalled ollama but here is my file system structure. This is the default ubuntu file system structure.
Filesystem Size Used Avail Use% Mounted on
tmpfs 1.5G 2.4M 1.5G 1% /run
/dev/nvme0n1p1 28G 24G 2.8G 90% /
tmpfs 7.5G 62M 7.5G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 7.5G 0 7.5G 0% /run/qemu
/dev/nvme1n1p1 196M 97M 100M 50% /boot/efi
/dev/nvme0n1p3 411G 41G 350G 11% /home
tmpfs 1.5G 148K 1.5G 1% /run/user/1000
ollama is writing to my / (root) filesystem instead of /home, where most of the disk space is allocated.
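One workaround (assuming the systemd install) is the `OLLAMA_MODELS` environment variable, which redirects model storage. For instance, a drop-in at `/etc/systemd/system/ollama.service.d/override.conf` pointing at /home (the exact path is illustrative), applied with `systemctl daemon-reload` and a restart of the ollama service:

```ini
[Service]
Environment="OLLAMA_MODELS=/home/ollama/models"
```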
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1311/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5276
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5276/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5276/comments
|
https://api.github.com/repos/ollama/ollama/issues/5276/events
|
https://github.com/ollama/ollama/issues/5276
| 2,373,019,421
|
I_kwDOJ0Z1Ps6NcWcd
| 5,276
|
Support for Vision Language Models that can process Videos.
|
{
"login": "manishkumart",
"id": 37763863,
"node_id": "MDQ6VXNlcjM3NzYzODYz",
"avatar_url": "https://avatars.githubusercontent.com/u/37763863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manishkumart",
"html_url": "https://github.com/manishkumart",
"followers_url": "https://api.github.com/users/manishkumart/followers",
"following_url": "https://api.github.com/users/manishkumart/following{/other_user}",
"gists_url": "https://api.github.com/users/manishkumart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manishkumart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manishkumart/subscriptions",
"organizations_url": "https://api.github.com/users/manishkumart/orgs",
"repos_url": "https://api.github.com/users/manishkumart/repos",
"events_url": "https://api.github.com/users/manishkumart/events{/privacy}",
"received_events_url": "https://api.github.com/users/manishkumart/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-06-25T15:42:21
| 2024-07-30T19:50:43
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is it possible to support loading VLMs like VideoLLama and Chat-UniVi, models that can process videos?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5276/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/ollama/ollama/issues/5276/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2024
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2024/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2024/comments
|
https://api.github.com/repos/ollama/ollama/issues/2024/events
|
https://github.com/ollama/ollama/issues/2024
| 2,084,955,906
|
I_kwDOJ0Z1Ps58RecC
| 2,024
|
falcon model not working.
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-01-16T21:18:06
| 2024-05-17T21:34:16
| 2024-05-17T21:34:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I've been working with https://github.com/jmorganca/ollama/issues/1691 and found that it consistently dies with falcon.
So I tried falcon on its own. It died.
So I tried removing falcon and reinstalling it.
Still died.
I can no longer get falcon to work.
I'm on Ollama version 0.1.20
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2024/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/1678
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1678/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1678/comments
|
https://api.github.com/repos/ollama/ollama/issues/1678/events
|
https://github.com/ollama/ollama/issues/1678
| 2,054,434,784
|
I_kwDOJ0Z1Ps56dC_g
| 1,678
|
Error: timed out waiting for llama runner to start
|
{
"login": "LegendNava",
"id": 74506040,
"node_id": "MDQ6VXNlcjc0NTA2MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/74506040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LegendNava",
"html_url": "https://github.com/LegendNava",
"followers_url": "https://api.github.com/users/LegendNava/followers",
"following_url": "https://api.github.com/users/LegendNava/following{/other_user}",
"gists_url": "https://api.github.com/users/LegendNava/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LegendNava/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LegendNava/subscriptions",
"organizations_url": "https://api.github.com/users/LegendNava/orgs",
"repos_url": "https://api.github.com/users/LegendNava/repos",
"events_url": "https://api.github.com/users/LegendNava/events{/privacy}",
"received_events_url": "https://api.github.com/users/LegendNava/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 18
| 2023-12-22T19:29:59
| 2024-03-12T17:58:28
| 2024-03-12T17:58:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was trying out dolphin-mixtral. It downloaded successfully, but:

Does anything seem off? What should I do in this situation?
I'm on Ubuntu 20.24 with a 6th Gen Intel i3.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1678/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2570
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2570/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2570/comments
|
https://api.github.com/repos/ollama/ollama/issues/2570/events
|
https://github.com/ollama/ollama/issues/2570
| 2,140,789,360
|
I_kwDOJ0Z1Ps5_mdpw
| 2,570
|
Potential Regression with Model switching
|
{
"login": "libbaz",
"id": 10919499,
"node_id": "MDQ6VXNlcjEwOTE5NDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/10919499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/libbaz",
"html_url": "https://github.com/libbaz",
"followers_url": "https://api.github.com/users/libbaz/followers",
"following_url": "https://api.github.com/users/libbaz/following{/other_user}",
"gists_url": "https://api.github.com/users/libbaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/libbaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/libbaz/subscriptions",
"organizations_url": "https://api.github.com/users/libbaz/orgs",
"repos_url": "https://api.github.com/users/libbaz/repos",
"events_url": "https://api.github.com/users/libbaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/libbaz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-18T07:27:08
| 2024-02-18T07:28:57
| 2024-02-18T07:28:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**Issue:**
I just pulled the latest ollama Docker image (Ollama v0.1.25) and have noticed that API `/chat` requests no longer switch the model template between models created from the same base model. This wasn't an issue in the past.
**Steps to reproduce:**
- create Foo-1 from model "Foo"
- create Foo-2 from model "Foo"
- create Bar-1 from model "Bar"
- make a chat request with Foo-1 = response uses Foo-1
- make a chat request with Foo-2 = response uses Foo-1
- make a chat request with Bar-1 = (model is switched to Bar-1) response uses Bar-1
- make a chat request with Foo-2 = (model is switched to Foo-2) response uses Foo-2

**Expected:**
- make a chat request with Foo-1 = response uses Foo-1
- make a chat request with Foo-2 = (model is switched to Foo-2) response uses Foo-2
- make a chat request with Bar-1 = (model is switched to Bar-1) response uses Bar-1
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2570/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4699
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4699/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4699/comments
|
https://api.github.com/repos/ollama/ollama/issues/4699/events
|
https://github.com/ollama/ollama/issues/4699
| 2,322,884,407
|
I_kwDOJ0Z1Ps6KdGc3
| 4,699
|
Computing Context Embeddings, Instead of averagning token embeddings
|
{
"login": "Demirrr",
"id": 13405667,
"node_id": "MDQ6VXNlcjEzNDA1NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/13405667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Demirrr",
"html_url": "https://github.com/Demirrr",
"followers_url": "https://api.github.com/users/Demirrr/followers",
"following_url": "https://api.github.com/users/Demirrr/following{/other_user}",
"gists_url": "https://api.github.com/users/Demirrr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Demirrr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Demirrr/subscriptions",
"organizations_url": "https://api.github.com/users/Demirrr/orgs",
"repos_url": "https://api.github.com/users/Demirrr/repos",
"events_url": "https://api.github.com/users/Demirrr/events{/privacy}",
"received_events_url": "https://api.github.com/users/Demirrr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-29T09:51:44
| 2024-05-29T09:51:44
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was wondering whether we can return the context embeddings used before the next token prediction instead of averaging the token embeddings as currently done.
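The distinction being requested can be sketched in plain Python with made-up numbers (these are illustrative hidden states, not Ollama's actual embedding code):

```python
# Illustrative per-token hidden states for a 4-token prompt (hidden size 3).
# The values are invented; they stand in for a model's final-layer states.
token_states = [
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
    [1.0, 1.1, 1.2],
]

# Current behaviour described in the issue: average the token embeddings
# (mean-pool each hidden dimension across all tokens).
mean_embedding = [sum(col) / len(col) for col in zip(*token_states)]

# Requested behaviour: return the context state used before the
# next-token prediction, i.e. the last token's hidden state.
context_embedding = token_states[-1]

print([round(x, 2) for x in mean_embedding])  # [0.55, 0.65, 0.75]
print(context_embedding)                      # [1.0, 1.1, 1.2]
```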
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4699/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2402
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2402/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2402/comments
|
https://api.github.com/repos/ollama/ollama/issues/2402/events
|
https://github.com/ollama/ollama/issues/2402
| 2,124,162,929
|
I_kwDOJ0Z1Ps5-nCdx
| 2,402
|
Error dial tcp: lookup no such host
|
{
"login": "casey-martin",
"id": 13857230,
"node_id": "MDQ6VXNlcjEzODU3MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13857230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/casey-martin",
"html_url": "https://github.com/casey-martin",
"followers_url": "https://api.github.com/users/casey-martin/followers",
"following_url": "https://api.github.com/users/casey-martin/following{/other_user}",
"gists_url": "https://api.github.com/users/casey-martin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/casey-martin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casey-martin/subscriptions",
"organizations_url": "https://api.github.com/users/casey-martin/orgs",
"repos_url": "https://api.github.com/users/casey-martin/repos",
"events_url": "https://api.github.com/users/casey-martin/events{/privacy}",
"received_events_url": "https://api.github.com/users/casey-martin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-02-08T00:49:01
| 2024-05-31T07:16:54
| 2024-02-20T21:40:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am encountering a `dial tcp lookup` error when executing any `ollama pull` or `ollama run` commands through Docker on Ubuntu 22.04. I searched through the issues and found some similar errors; however, those were related to the users' proxies, which I am not using. I am also not running any firewalls. The commands I executed are as follows:
```bash
$ sudo docker pull ollama/ollama
Using default tag: latest
latest: Pulling from ollama/ollama
Digest: sha256:36ce80dc7609fe79711d261f6614a611f7ce200dcd2849367e49812fd4181e67
Status: Image is up to date for ollama/ollama:latest
docker.io/ollama/ollama:latest
$ sudo docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
687b609d95bf ollama/ollama "/bin/ollama serve" About an hour ago Up About an hour 0.0.0.0:11434->11434/tcp, :::11434->11434/tcp ollama
$ sudo docker exec -it ollama ollama run llama2
Error: Head "https://registry.ollama.ai/v2/library/llama2/blobs/sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246": dial tcp: lookup registry.ollama.ai on 192.168.0.1:53: no such host
```
Do you have any suggestions for resolving this error?
|
{
"login": "casey-martin",
"id": 13857230,
"node_id": "MDQ6VXNlcjEzODU3MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13857230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/casey-martin",
"html_url": "https://github.com/casey-martin",
"followers_url": "https://api.github.com/users/casey-martin/followers",
"following_url": "https://api.github.com/users/casey-martin/following{/other_user}",
"gists_url": "https://api.github.com/users/casey-martin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/casey-martin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casey-martin/subscriptions",
"organizations_url": "https://api.github.com/users/casey-martin/orgs",
"repos_url": "https://api.github.com/users/casey-martin/repos",
"events_url": "https://api.github.com/users/casey-martin/events{/privacy}",
"received_events_url": "https://api.github.com/users/casey-martin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2402/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1427
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1427/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1427/comments
|
https://api.github.com/repos/ollama/ollama/issues/1427/events
|
https://github.com/ollama/ollama/pull/1427
| 2,031,749,716
|
PR_kwDOJ0Z1Ps5hec27
| 1,427
|
post-response templating
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-08T00:56:14
| 2023-12-22T22:07:06
| 2023-12-22T22:07:05
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1427",
"html_url": "https://github.com/ollama/ollama/pull/1427",
"diff_url": "https://github.com/ollama/ollama/pull/1427.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1427.patch",
"merged_at": "2023-12-22T22:07:05"
}
|
- add post-response templating to /generate
- add post-response templating to /chat
- add templating tests
A common format for LLM templating may include post-response templating. Our current template format partially supported this via `{{ if not .First }}`, but that is confusing to read. This change allows post-response templating to be applied.
Here is an example of a format that is now supported:
```
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
{{ .Response }}<|im_end|>
```
Current templates are not affected.
Follow-up: docs
Resolves: #1423
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1427/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7697
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7697/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7697/comments
|
https://api.github.com/repos/ollama/ollama/issues/7697/events
|
https://github.com/ollama/ollama/issues/7697
| 2,663,637,106
|
I_kwDOJ0Z1Ps6ew-By
| 7,697
|
ollama is not working, Error: could not connect to ollama app, is it running?
|
{
"login": "gokulcoder7",
"id": 167660982,
"node_id": "U_kgDOCf5Ntg",
"avatar_url": "https://avatars.githubusercontent.com/u/167660982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gokulcoder7",
"html_url": "https://github.com/gokulcoder7",
"followers_url": "https://api.github.com/users/gokulcoder7/followers",
"following_url": "https://api.github.com/users/gokulcoder7/following{/other_user}",
"gists_url": "https://api.github.com/users/gokulcoder7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gokulcoder7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gokulcoder7/subscriptions",
"organizations_url": "https://api.github.com/users/gokulcoder7/orgs",
"repos_url": "https://api.github.com/users/gokulcoder7/repos",
"events_url": "https://api.github.com/users/gokulcoder7/events{/privacy}",
"received_events_url": "https://api.github.com/users/gokulcoder7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 26
| 2024-11-16T02:22:22
| 2025-01-12T11:50:14
| 2024-12-02T15:29:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
C:\Windows\System32>ollama list
Error: could not connect to ollama app, is it running?

C:\Windows\System32>ollama --version
Warning: could not connect to a running Ollama instance
Warning: client version is 0.4.2
```
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7697/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2390
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2390/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2390/comments
|
https://api.github.com/repos/ollama/ollama/issues/2390/events
|
https://github.com/ollama/ollama/issues/2390
| 2,123,355,704
|
I_kwDOJ0Z1Ps5-j9Y4
| 2,390
|
List of domains ollama needs access to
|
{
"login": "arno4000",
"id": 50365065,
"node_id": "MDQ6VXNlcjUwMzY1MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50365065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arno4000",
"html_url": "https://github.com/arno4000",
"followers_url": "https://api.github.com/users/arno4000/followers",
"following_url": "https://api.github.com/users/arno4000/following{/other_user}",
"gists_url": "https://api.github.com/users/arno4000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arno4000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arno4000/subscriptions",
"organizations_url": "https://api.github.com/users/arno4000/orgs",
"repos_url": "https://api.github.com/users/arno4000/repos",
"events_url": "https://api.github.com/users/arno4000/events{/privacy}",
"received_events_url": "https://api.github.com/users/arno4000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-02-07T15:58:37
| 2025-01-23T12:29:03
| 2024-03-11T19:31:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is there a list of domains which need to be allowed in the forward proxy for ollama to function properly? ollama.ai is allowed, and I see in the proxy logs that ollama tries to connect to `https://registry.ollama.ai`. But then ollama tries to access `https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com`.
Where does ollama get this Cloudflare domain? Is it always the same, or can it change randomly? I need to know which domains I have to allow in my server's forward proxy.
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2390/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2390/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7761
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7761/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7761/comments
|
https://api.github.com/repos/ollama/ollama/issues/7761/events
|
https://github.com/ollama/ollama/issues/7761
| 2,676,131,157
|
I_kwDOJ0Z1Ps6fgoVV
| 7,761
|
High Inference Time and Limited GPU Utilization with Ollama Docker
|
{
"login": "nicho2",
"id": 11471811,
"node_id": "MDQ6VXNlcjExNDcxODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/11471811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicho2",
"html_url": "https://github.com/nicho2",
"followers_url": "https://api.github.com/users/nicho2/followers",
"following_url": "https://api.github.com/users/nicho2/following{/other_user}",
"gists_url": "https://api.github.com/users/nicho2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicho2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicho2/subscriptions",
"organizations_url": "https://api.github.com/users/nicho2/orgs",
"repos_url": "https://api.github.com/users/nicho2/repos",
"events_url": "https://api.github.com/users/nicho2/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicho2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-11-20T14:46:32
| 2024-11-21T07:01:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
## Description:
I am using Ollama in a Docker setup with GPU support, configured to use all available GPUs on my system. However, when using the NemoTron model with a simple prompt and utilizing the function calling feature, the inference time is around 50 seconds to get a response, which is too high for my use case.
## Docker Configuration:
Here is my docker-compose.yml file for Ollama:
```yaml
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    hostname: ollama
    ports:
      - "11434:11434"
    volumes:
      - /home/system/dockers/volumes/ollama:/root/.ollama
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped
    networks:
      - genai_network

networks:
  genai_network:
```
## Log Details:
### Server Initialization:
```
Listening on [::]:11434 (version 0.4.2)
Dynamic LLM libraries runners="[cpu_avx2 cuda_v11 cuda_v12 cpu cpu_avx]"
Looking for compatible GPUs
Inference compute id=GPU-660ca8b7-181a-ede9-f6fe-8ccd5f9dbb89 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA RTX 6000 Ada Generation" total="47.5 GiB" available="47.1 GiB"
Inference compute id=GPU-d62f5e11-4192-0e70-0732-55b558edcb7a library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA RTX 6000 Ada Generation" total="47.5 GiB" available="46.4 GiB"
Inference compute id=GPU-45972459-815c-f304-9fdf-b952276c9b13 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA RTX 6000 Ada Generation" total="47.5 GiB" available="47.0 GiB"
```
### Model Loading:
```
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA RTX 6000 Ada Generation, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.68 MiB
```
## Observed Issue:
During inference with the NemoTron model, the response time is around 50 seconds.
The logs show that only one GPU seems to be utilized (found 1 CUDA devices), despite multiple GPUs being detected during service initialization.
## Questions:
### GPU Load Balancing:
Does Ollama support load balancing across multiple GPUs? If yes, why do the logs indicate that only one GPU is used (found 1 CUDA devices) when the model is loaded?
### Performance Optimization:
What steps are recommended to reduce inference time?
Should I adjust configuration settings in Docker or Ollama?
Could variables like GGML_CUDA_FORCE_CUBLAS or GGML_CUDA_FORCE_MMQ improve performance?
## Technical Context:
- Ollama version: 0.4.2
- Hardware: 3 x NVIDIA RTX 6000 Ada Generation GPUs (47.5 GiB VRAM each)
- NVIDIA driver: 12.4
- Model used: NemoTron
- Usage scenario: simple prompt with function calling.
## Expectation:
- Confirmation on multi-GPU support in Ollama.
- Suggestions to reduce inference time.
- Documentation or examples of optimized configuration for heavy workloads with multiple GPUs.
## Thank You for Your Help!

### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.2
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7761/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6296
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6296/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6296/comments
|
https://api.github.com/repos/ollama/ollama/issues/6296/events
|
https://github.com/ollama/ollama/issues/6296
| 2,458,866,096
|
I_kwDOJ0Z1Ps6Sj1Gw
| 6,296
|
Better to add athene70b f16 and q8
|
{
"login": "Llamadouble999q",
"id": 176237961,
"node_id": "U_kgDOCoEtiQ",
"avatar_url": "https://avatars.githubusercontent.com/u/176237961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Llamadouble999q",
"html_url": "https://github.com/Llamadouble999q",
"followers_url": "https://api.github.com/users/Llamadouble999q/followers",
"following_url": "https://api.github.com/users/Llamadouble999q/following{/other_user}",
"gists_url": "https://api.github.com/users/Llamadouble999q/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Llamadouble999q/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Llamadouble999q/subscriptions",
"organizations_url": "https://api.github.com/users/Llamadouble999q/orgs",
"repos_url": "https://api.github.com/users/Llamadouble999q/repos",
"events_url": "https://api.github.com/users/Llamadouble999q/events{/privacy}",
"received_events_url": "https://api.github.com/users/Llamadouble999q/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-08-10T02:57:46
| 2024-09-02T23:10:45
| 2024-09-02T23:10:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Why ollama stopped uploading athene?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6296/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4372
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4372/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4372/comments
|
https://api.github.com/repos/ollama/ollama/issues/4372/events
|
https://github.com/ollama/ollama/issues/4372
| 2,291,259,798
|
I_kwDOJ0Z1Ps6IkdmW
| 4,372
|
When can I make the api support functions parameters like openai, using langchain implementation will make the request slow, which is not what I want
|
{
"login": "zhangweiwei0326",
"id": 5975616,
"node_id": "MDQ6VXNlcjU5NzU2MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5975616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangweiwei0326",
"html_url": "https://github.com/zhangweiwei0326",
"followers_url": "https://api.github.com/users/zhangweiwei0326/followers",
"following_url": "https://api.github.com/users/zhangweiwei0326/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangweiwei0326/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangweiwei0326/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangweiwei0326/subscriptions",
"organizations_url": "https://api.github.com/users/zhangweiwei0326/orgs",
"repos_url": "https://api.github.com/users/zhangweiwei0326/repos",
"events_url": "https://api.github.com/users/zhangweiwei0326/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangweiwei0326/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-12T09:54:05
| 2024-05-13T06:09:24
| 2024-05-13T06:09:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
from langchain_experimental.llms.ollama_functions import OllamaFunctions

model = OllamaFunctions(base_url="http://192.168.1.117:11434", model="qwen:4b", temperature=0.0, format="json")

# Bind this function
model_with_tools = model.bind_tools(
    tools=[
        {
            "name": "getCurrentWeather",
            "description": "Get the local weather conditions",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g.: 添加",
                    },
                },
                "required": ["location"],
            },
        }
    ],
    function_call={"name": "getCurrentWeather"},
)
output = model_with_tools.invoke("What is the weather like in Beijing?")
print(output)
```
The code above outputs:
```
content='' additional_kwargs={'function_call': {'name': 'getCurrentWeather', 'arguments': '{"location": "\\u5317\\u4eac"}'}} id='run-4867ae50-51c7-4d3a-b093-a082dbd1b25a-0'
```
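As a side note on the output above: the `arguments` field is ordinary JSON with unicode-escaped characters, so it round-trips cleanly (a minimal check, independent of LangChain):

```python
import json

# The arguments string exactly as it appears in the output above.
raw_args = '{"location": "\\u5317\\u4eac"}'

# json.loads decodes the \uXXXX escapes back to the original characters.
args = json.loads(raw_args)
print(args["location"])  # → 北京
```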
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "zhangweiwei0326",
"id": 5975616,
"node_id": "MDQ6VXNlcjU5NzU2MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5975616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangweiwei0326",
"html_url": "https://github.com/zhangweiwei0326",
"followers_url": "https://api.github.com/users/zhangweiwei0326/followers",
"following_url": "https://api.github.com/users/zhangweiwei0326/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangweiwei0326/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangweiwei0326/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangweiwei0326/subscriptions",
"organizations_url": "https://api.github.com/users/zhangweiwei0326/orgs",
"repos_url": "https://api.github.com/users/zhangweiwei0326/repos",
"events_url": "https://api.github.com/users/zhangweiwei0326/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangweiwei0326/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4372/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4372/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/995
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/995/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/995/comments
|
https://api.github.com/repos/ollama/ollama/issues/995/events
|
https://github.com/ollama/ollama/pull/995
| 1,977,240,956
|
PR_kwDOJ0Z1Ps5el2kT
| 995
|
Added ollama-rs to community integrations
|
{
"login": "pepperoni21",
"id": 29759371,
"node_id": "MDQ6VXNlcjI5NzU5Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/29759371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepperoni21",
"html_url": "https://github.com/pepperoni21",
"followers_url": "https://api.github.com/users/pepperoni21/followers",
"following_url": "https://api.github.com/users/pepperoni21/following{/other_user}",
"gists_url": "https://api.github.com/users/pepperoni21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pepperoni21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pepperoni21/subscriptions",
"organizations_url": "https://api.github.com/users/pepperoni21/orgs",
"repos_url": "https://api.github.com/users/pepperoni21/repos",
"events_url": "https://api.github.com/users/pepperoni21/events{/privacy}",
"received_events_url": "https://api.github.com/users/pepperoni21/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-04T08:51:06
| 2023-11-04T21:51:29
| 2023-11-04T21:51:29
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/995",
"html_url": "https://github.com/ollama/ollama/pull/995",
"diff_url": "https://github.com/ollama/ollama/pull/995.diff",
"patch_url": "https://github.com/ollama/ollama/pull/995.patch",
"merged_at": "2023-11-04T21:51:29"
}
|
Hey, I made Rust bindings for Ollama: https://github.com/pepperoni21/ollama-rs
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/995/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3364
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3364/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3364/comments
|
https://api.github.com/repos/ollama/ollama/issues/3364/events
|
https://github.com/ollama/ollama/issues/3364
| 2,209,552,040
|
I_kwDOJ0Z1Ps6Dsxao
| 3,364
|
add starling-lm beta
|
{
"login": "Lev1ty",
"id": 15148828,
"node_id": "MDQ6VXNlcjE1MTQ4ODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/15148828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lev1ty",
"html_url": "https://github.com/Lev1ty",
"followers_url": "https://api.github.com/users/Lev1ty/followers",
"following_url": "https://api.github.com/users/Lev1ty/following{/other_user}",
"gists_url": "https://api.github.com/users/Lev1ty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lev1ty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lev1ty/subscriptions",
"organizations_url": "https://api.github.com/users/Lev1ty/orgs",
"repos_url": "https://api.github.com/users/Lev1ty/repos",
"events_url": "https://api.github.com/users/Lev1ty/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lev1ty/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-03-26T23:58:29
| 2024-04-10T19:49:24
| 2024-04-10T19:49:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
https://huggingface.co/Nexusflow/Starling-LM-7B-beta
Strong performance on LMSYS leaderboard

|
{
"login": "Lev1ty",
"id": 15148828,
"node_id": "MDQ6VXNlcjE1MTQ4ODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/15148828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lev1ty",
"html_url": "https://github.com/Lev1ty",
"followers_url": "https://api.github.com/users/Lev1ty/followers",
"following_url": "https://api.github.com/users/Lev1ty/following{/other_user}",
"gists_url": "https://api.github.com/users/Lev1ty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lev1ty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lev1ty/subscriptions",
"organizations_url": "https://api.github.com/users/Lev1ty/orgs",
"repos_url": "https://api.github.com/users/Lev1ty/repos",
"events_url": "https://api.github.com/users/Lev1ty/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lev1ty/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3364/reactions",
"total_count": 18,
"+1": 18,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3364/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6854
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6854/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6854/comments
|
https://api.github.com/repos/ollama/ollama/issues/6854/events
|
https://github.com/ollama/ollama/pull/6854
| 2,533,079,340
|
PR_kwDOJ0Z1Ps573ay7
| 6,854
|
server: Add OLLAMA_NO_MMAP to disable mmap globally
|
{
"login": "yubingjiaocn",
"id": 9165347,
"node_id": "MDQ6VXNlcjkxNjUzNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9165347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yubingjiaocn",
"html_url": "https://github.com/yubingjiaocn",
"followers_url": "https://api.github.com/users/yubingjiaocn/followers",
"following_url": "https://api.github.com/users/yubingjiaocn/following{/other_user}",
"gists_url": "https://api.github.com/users/yubingjiaocn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yubingjiaocn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yubingjiaocn/subscriptions",
"organizations_url": "https://api.github.com/users/yubingjiaocn/orgs",
"repos_url": "https://api.github.com/users/yubingjiaocn/repos",
"events_url": "https://api.github.com/users/yubingjiaocn/events{/privacy}",
"received_events_url": "https://api.github.com/users/yubingjiaocn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-09-18T08:33:28
| 2025-01-03T06:15:57
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6854",
"html_url": "https://github.com/ollama/ollama/pull/6854",
"diff_url": "https://github.com/ollama/ollama/pull/6854.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6854.patch",
"merged_at": null
}
|
Close #4895
This PR adds an `OLLAMA_NO_MMAP` environment variable to `ollama serve`. When it is set to `1`, the `--no-mmap` param is always passed to the llama runner.
This PR introduces no breaking change: if the variable is unset, mmap remains enabled except under the pre-defined conditions.
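With the change applied, disabling mmap for every model load would be a one-line setting (a sketch — the `OLLAMA_NO_MMAP` variable only exists in builds that include this PR):

```shell
# Assumes a build that includes this PR's OLLAMA_NO_MMAP variable.
# Setting it to 1 forces --no-mmap onto every llama runner invocation.
OLLAMA_NO_MMAP=1 ollama serve
```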
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6854/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6854/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8688
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8688/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8688/comments
|
https://api.github.com/repos/ollama/ollama/issues/8688/events
|
https://github.com/ollama/ollama/pull/8688
| 2,820,160,395
|
PR_kwDOJ0Z1Ps6JduvO
| 8,688
|
Add library in Zig.
|
{
"login": "dravenk",
"id": 14295318,
"node_id": "MDQ6VXNlcjE0Mjk1MzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/14295318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dravenk",
"html_url": "https://github.com/dravenk",
"followers_url": "https://api.github.com/users/dravenk/followers",
"following_url": "https://api.github.com/users/dravenk/following{/other_user}",
"gists_url": "https://api.github.com/users/dravenk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dravenk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dravenk/subscriptions",
"organizations_url": "https://api.github.com/users/dravenk/orgs",
"repos_url": "https://api.github.com/users/dravenk/repos",
"events_url": "https://api.github.com/users/dravenk/events{/privacy}",
"received_events_url": "https://api.github.com/users/dravenk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-30T08:05:43
| 2025-01-30T08:05:43
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8688",
"html_url": "https://github.com/ollama/ollama/pull/8688",
"diff_url": "https://github.com/ollama/ollama/pull/8688.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8688.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8688/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2066
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2066/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2066/comments
|
https://api.github.com/repos/ollama/ollama/issues/2066/events
|
https://github.com/ollama/ollama/issues/2066
| 2,089,566,574
|
I_kwDOJ0Z1Ps58jEFu
| 2,066
|
Switching from CUDA to CPU runner causes segmentation fault
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-01-19T04:34:56
| 2024-01-19T20:22:06
| 2024-01-19T20:22:05
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This is currently only an issue on `main`
```
2024/01/19 04:46:40 routes.go:76: INFO changing loaded model
2024/01/19 04:46:40 gpu.go:136: INFO CUDA Compute Capability detected: 8.9
2024/01/19 04:46:40 gpu.go:136: INFO CUDA Compute Capability detected: 8.9
2024/01/19 04:46:40 cpu_common.go:11: INFO CPU has AVX2
loading library /tmp/ollama2500718665/cpu_avx2/libext_server.so
2024/01/19 04:46:40 dyn_ext_server.go:90: INFO Loading Dynamic llm server: /tmp/ollama2500718665/cpu_avx2/libext_server.so
2024/01/19 04:46:40 dyn_ext_server.go:139: INFO Initializing llama server
SIGSEGV: segmentation violation
PC=0x7f811abadac8 m=5 sigcode=1
signal arrived during cgo execution
goroutine 14 [syscall]:
runtime.cgocall(0x9b4550, 0xc000a4e808)
/usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc000a4e7e0 sp=0xc000a4e7a8 pc=0x409b0b
github.com/jmorganca/ollama/llm._Cfunc_dyn_llama_server_init({0x7f80bc000f60, 0x7f805a501b80, 0x7f805a4f3a80, 0x7f805a4f7960, 0x7f805a505650, 0x7f805a4ffba0, 0x7f805a4f7930, 0x7f805a4f3b00, 0x7f805a505e00, 0x7f805a505200, ...}, ...)
_cgo_gotypes.go:280 +0x45 fp=0xc000a4e808 sp=0xc000a4e7e0 pc=0x7c2a45
github.com/jmorganca/ollama/llm.newDynExtServer.func7(0xae6f80?, 0x6e?)
/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:142 +0xef fp=0xc000a4e8f8 sp=0xc000a4e808 pc=0x7c3eaf
github.com/jmorganca/ollama/llm.newDynExtServer({0xc000134090, 0x2f}, {0xc0009f4150, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:142 +0xa32 fp=0xc000a4eb88 sp=0xc000a4e8f8 pc=0x7c3bf2
github.com/jmorganca/ollama/llm.newLlmServer({{_, _, _}, {_, _}, {_, _}}, {_, _}, {0x0, ...}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:147 +0x36a fp=0xc000a4ed48 sp=0xc000a4eb88 pc=0x7c04ea
github.com/jmorganca/ollama/llm.New({0x419c8f?, 0x1000100000100?}, {0xc0009f4150, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:122 +0x6f9 fp=0xc000a4efb8 sp=0xc000a4ed48 pc=0x7bff19
github.com/jmorganca/ollama/server.load(0xc00017e900?, 0xc00017e900, {{0x0, 0x800, 0x200, 0x1, 0x0, 0x0, 0x0, 0x1, ...}, ...}, ...)
/go/src/github.com/jmorganca/ollama/server/routes.go:83 +0x3a5 fp=0xc000a4f138 sp=0xc000a4efb8 pc=0x9908a5
github.com/jmorganca/ollama/server.ChatHandler(0xc00007c100)
/go/src/github.com/jmorganca/ollama/server/routes.go:1071 +0x828 fp=0xc000a4f748 sp=0xc000a4f138 pc=0x99b1e8
github.com/gin-gonic/gin.(*Context).Next(...)
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2066/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6883
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6883/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6883/comments
|
https://api.github.com/repos/ollama/ollama/issues/6883/events
|
https://github.com/ollama/ollama/issues/6883
| 2,537,368,900
|
I_kwDOJ0Z1Ps6XPS1E
| 6,883
|
Problem Executing 'ollama create' Multiple Times with Different GGUF Files
|
{
"login": "michaelc2005",
"id": 50670873,
"node_id": "MDQ6VXNlcjUwNjcwODcz",
"avatar_url": "https://avatars.githubusercontent.com/u/50670873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelc2005",
"html_url": "https://github.com/michaelc2005",
"followers_url": "https://api.github.com/users/michaelc2005/followers",
"following_url": "https://api.github.com/users/michaelc2005/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelc2005/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelc2005/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelc2005/subscriptions",
"organizations_url": "https://api.github.com/users/michaelc2005/orgs",
"repos_url": "https://api.github.com/users/michaelc2005/repos",
"events_url": "https://api.github.com/users/michaelc2005/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelc2005/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-19T21:06:04
| 2024-12-02T23:00:54
| 2024-12-02T23:00:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
(I have done some searching and as of yet not found any mention of this issue, but I may have missed it.)
When creating models from GGUF files downloaded from Hugging Face, I observed that two different models, tested with an identical prompt (copied and pasted), produced similar but not word-for-word responses. Then, switching between models while watching the System Monitor and amdgpu_top, I noticed that system memory usage remained unchanged and the new model loaded almost instantly. This swift loading initially sparked excitement, until closer scrutiny revealed the underlying issue.
I suspected something was wrong. Running 'ollama list' showed that both models had identical IDs. A deeper investigation, loading each model and using '/show modelfile', indicated that the same blob file was being used despite the distinct GGUF files in their respective modelfiles. A peculiar observation: while the model files were nearly identical (aside from sloppy formatting), including the 'seed' parameter, the responses differed slightly. The only significant difference between the files was the 'FROM' parameter.
I acknowledge that my investigation may be incomplete or flawed due to a lack of diligence and evidence. Moreover, I apologize if any terminology used is incorrect or confusing. That said, I am merely an AI Hobbyist. My experience dates back to 1979 when I learned of the Eliza 'Therapist' chatbot, and I have been tinkering with AI on and off ever since. Although I possess programming skills in various languages, including Python, my expertise is limited. And, my reluctance to document processes has also hindered me a few times over the years. Sorry.
The two GGUF files were:
1. Llama-3.1-Storm-8B.Q8_0.gguf
2. Mistral-Nemo-2407-12.2B-Instruct-Q4_K_M.gguf
Other important info:
Laptop: Lenovo Flex 5-14ARE05 Laptop (AMD Ryzen 5 4500U with integrated Radeon Graphics)
OS: Ubuntu 24.04.1 LTS fully updated as of this morning (9/19/2024)
Ollama version is 0.3.11
My current workaround is to reboot my laptop before creating each new model with Ollama, but a more reliable solution would be most appreciated.
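A sketch of how to check whether two created models really share a blob (hedged: `ollama show --modelfile` is the CLI counterpart of the interactive `/show modelfile` used above, and the model names here are hypothetical placeholders for whatever names were used with `ollama create`):

```shell
# Compare the FROM lines (blob paths) of the two created models.
ollama show storm-8b --modelfile | grep -i '^FROM'
ollama show mistral-nemo-12b --modelfile | grep -i '^FROM'

# Independently hash the source GGUF files; differing digests mean the
# two models should not end up pointing at the same blob.
sha256sum Llama-3.1-Storm-8B.Q8_0.gguf Mistral-Nemo-2407-12.2B-Instruct-Q4_K_M.gguf
```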
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.11
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6883/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2751
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2751/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2751/comments
|
https://api.github.com/repos/ollama/ollama/issues/2751/events
|
https://github.com/ollama/ollama/issues/2751
| 2,152,955,339
|
I_kwDOJ0Z1Ps6AU33L
| 2,751
|
Error running ollama serve on Windows 10
|
{
"login": "Alias4D",
"id": 27604791,
"node_id": "MDQ6VXNlcjI3NjA0Nzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/27604791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alias4D",
"html_url": "https://github.com/Alias4D",
"followers_url": "https://api.github.com/users/Alias4D/followers",
"following_url": "https://api.github.com/users/Alias4D/following{/other_user}",
"gists_url": "https://api.github.com/users/Alias4D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alias4D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alias4D/subscriptions",
"organizations_url": "https://api.github.com/users/Alias4D/orgs",
"repos_url": "https://api.github.com/users/Alias4D/repos",
"events_url": "https://api.github.com/users/Alias4D/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alias4D/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-02-25T21:46:51
| 2024-02-26T14:25:27
| 2024-02-26T14:25:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**Error when running `ollama serve`**
```
time=2024-02-26T00:41:24.616+03:00 level=INFO source=images.go:706 msg="total blobs: 0"
time=2024-02-26T00:41:24.627+03:00 level=INFO source=images.go:713 msg="total unused blobs removed: 0"
panic: bad origin: origins must contain '*' or include http://,https://,chrome-extension://,safari-extension://,moz-extension://,ms-browser-extension://
goroutine 1 [running]:
github.com/gin-contrib/cors.newCors({0x0, {0xc0000a4000, 0xd, 0x14}, 0x0, {0xc0003fa150, 0x7, 0x7}, {0xc0003e8360, 0x3, ...}, ...})
	C:/Users/jeff/go/pkg/mod/github.com/gin-contrib/cors@v1.4.0/config.go:42 +0x2b4
github.com/gin-contrib/cors.New({0x0, {0xc0000a4000, 0xd, 0x14}, 0x0, {0xc0003fa150, 0x7, 0x7}, {0xc0003e8360, 0x3, ...}, ...})
	C:/Users/jeff/go/pkg/mod/github.com/gin-contrib/cors@v1.4.0/cors.go:164 +0x58
github.com/jmorganca/ollama/server.(*Server).GenerateRoutes(0xc0003a4020)
	C:/Users/jeff/git/ollama/server/routes.go:935 +0x585
github.com/jmorganca/ollama/server.Serve({0x7ff71a4a5cf0, 0xc000067720})
	C:/Users/jeff/git/ollama/server/routes.go:1012 +0x233
github.com/jmorganca/ollama/cmd.RunServer(0xc00017cb00?, {0x7ff71a9318a0?, 0x4?, 0x7ff7191b0f6b?})
	C:/Users/jeff/git/ollama/cmd/cmd.go:706 +0x1a5
github.com/spf13/cobra.(*Command).execute(0xc000514908, {0x7ff71a9318a0, 0x0, 0x0})
	C:/Users/jeff/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x882
github.com/spf13/cobra.(*Command).ExecuteC(0xc0001cbb08)
	C:/Users/jeff/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5
github.com/spf13/cobra.(*Command).Execute(...)
	C:/Users/jeff/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
	C:/Users/jeff/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
	C:/Users/jeff/git/ollama/main.go:11 +0x4d
```

|
{
"login": "Alias4D",
"id": 27604791,
"node_id": "MDQ6VXNlcjI3NjA0Nzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/27604791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alias4D",
"html_url": "https://github.com/Alias4D",
"followers_url": "https://api.github.com/users/Alias4D/followers",
"following_url": "https://api.github.com/users/Alias4D/following{/other_user}",
"gists_url": "https://api.github.com/users/Alias4D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alias4D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alias4D/subscriptions",
"organizations_url": "https://api.github.com/users/Alias4D/orgs",
"repos_url": "https://api.github.com/users/Alias4D/repos",
"events_url": "https://api.github.com/users/Alias4D/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alias4D/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2751/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/806
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/806/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/806/comments
|
https://api.github.com/repos/ollama/ollama/issues/806/events
|
https://github.com/ollama/ollama/issues/806
| 1,945,578,017
|
I_kwDOJ0Z1Ps5z9yoh
| 806
|
Add System prompt in WizardLM template
|
{
"login": "louisabraham",
"id": 13174805,
"node_id": "MDQ6VXNlcjEzMTc0ODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/13174805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louisabraham",
"html_url": "https://github.com/louisabraham",
"followers_url": "https://api.github.com/users/louisabraham/followers",
"following_url": "https://api.github.com/users/louisabraham/following{/other_user}",
"gists_url": "https://api.github.com/users/louisabraham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/louisabraham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louisabraham/subscriptions",
"organizations_url": "https://api.github.com/users/louisabraham/orgs",
"repos_url": "https://api.github.com/users/louisabraham/repos",
"events_url": "https://api.github.com/users/louisabraham/events{/privacy}",
"received_events_url": "https://api.github.com/users/louisabraham/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2023-10-16T15:53:36
| 2023-12-04T20:19:04
| 2023-12-04T20:19:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I think the following works quite well
```
{{ .System }}
USER: {{ .Prompt }}
ASSISTANT:
```
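For context, a minimal sketch of how a template like this might be wired into a Modelfile. This is an illustrative assumption, not from the original comment: the base model tag `wizardlm` and the `FROM` line are hypothetical, and only the `TEMPLATE` body comes from the suggestion above.

```
# Hypothetical Modelfile using the suggested template
FROM wizardlm
TEMPLATE """{{ .System }}
USER: {{ .Prompt }}
ASSISTANT:"""
```

With a file like this, `ollama create mymodel -f Modelfile` would bake the system prompt and USER/ASSISTANT turns into the model's prompt format.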
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/806/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/429
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/429/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/429/comments
|
https://api.github.com/repos/ollama/ollama/issues/429/events
|
https://github.com/ollama/ollama/issues/429
| 1,868,409,157
|
I_kwDOJ0Z1Ps5vXalF
| 429
|
Why does Ollama need sudo?
|
{
"login": "vegabook",
"id": 3780883,
"node_id": "MDQ6VXNlcjM3ODA4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3780883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegabook",
"html_url": "https://github.com/vegabook",
"followers_url": "https://api.github.com/users/vegabook/followers",
"following_url": "https://api.github.com/users/vegabook/following{/other_user}",
"gists_url": "https://api.github.com/users/vegabook/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegabook/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegabook/subscriptions",
"organizations_url": "https://api.github.com/users/vegabook/orgs",
"repos_url": "https://api.github.com/users/vegabook/repos",
"events_url": "https://api.github.com/users/vegabook/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegabook/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-08-27T08:27:29
| 2023-08-27T11:41:19
| 2023-08-27T11:39:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I run nix on my mac to isolate all software.
`nix-shell -p ollama` works great since ollama is [available on the unstable channel](https://search.nixos.org/packages?channel=unstable&from=0&size=50&sort=relevance&type=packages&query=ollama).
It works perfectly if I sudo both the server and the client:
<img width="1392" alt="image" src="https://github.com/jmorganca/ollama/assets/3780883/0ea353d9-5303-4cd6-a684-2ead9c1baca2">
But if either the client or the server is _not_ run as superuser, it either errors out or doesn't work.
<img width="656" alt="image" src="https://github.com/jmorganca/ollama/assets/3780883/adf4fb21-a7d6-4c5a-8b23-99a890619b62">
I note that because I have to be root, the ~/.ollama directory is also owned by root.
Is there a reason we can't run the whole stack in userspace? Having to `sudo` inhibits some automation and isolation options. Using the installer instead clutters up process space with yet another background task, and also forces a toolbar icon.
|
{
"login": "vegabook",
"id": 3780883,
"node_id": "MDQ6VXNlcjM3ODA4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3780883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegabook",
"html_url": "https://github.com/vegabook",
"followers_url": "https://api.github.com/users/vegabook/followers",
"following_url": "https://api.github.com/users/vegabook/following{/other_user}",
"gists_url": "https://api.github.com/users/vegabook/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegabook/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegabook/subscriptions",
"organizations_url": "https://api.github.com/users/vegabook/orgs",
"repos_url": "https://api.github.com/users/vegabook/repos",
"events_url": "https://api.github.com/users/vegabook/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegabook/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/429/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/429/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8633
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8633/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8633/comments
|
https://api.github.com/repos/ollama/ollama/issues/8633/events
|
https://github.com/ollama/ollama/pull/8633
| 2,815,643,279
|
PR_kwDOJ0Z1Ps6JOZes
| 8,633
|
my commit
|
{
"login": "aditya-agrawalSFDC",
"id": 122862436,
"node_id": "U_kgDOB1K7ZA",
"avatar_url": "https://avatars.githubusercontent.com/u/122862436?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aditya-agrawalSFDC",
"html_url": "https://github.com/aditya-agrawalSFDC",
"followers_url": "https://api.github.com/users/aditya-agrawalSFDC/followers",
"following_url": "https://api.github.com/users/aditya-agrawalSFDC/following{/other_user}",
"gists_url": "https://api.github.com/users/aditya-agrawalSFDC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aditya-agrawalSFDC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aditya-agrawalSFDC/subscriptions",
"organizations_url": "https://api.github.com/users/aditya-agrawalSFDC/orgs",
"repos_url": "https://api.github.com/users/aditya-agrawalSFDC/repos",
"events_url": "https://api.github.com/users/aditya-agrawalSFDC/events{/privacy}",
"received_events_url": "https://api.github.com/users/aditya-agrawalSFDC/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2025-01-28T13:15:07
| 2025-01-28T13:17:19
| 2025-01-28T13:17:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8633",
"html_url": "https://github.com/ollama/ollama/pull/8633",
"diff_url": "https://github.com/ollama/ollama/pull/8633.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8633.patch",
"merged_at": null
}
| null |
{
"login": "aditya-agrawalSFDC",
"id": 122862436,
"node_id": "U_kgDOB1K7ZA",
"avatar_url": "https://avatars.githubusercontent.com/u/122862436?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aditya-agrawalSFDC",
"html_url": "https://github.com/aditya-agrawalSFDC",
"followers_url": "https://api.github.com/users/aditya-agrawalSFDC/followers",
"following_url": "https://api.github.com/users/aditya-agrawalSFDC/following{/other_user}",
"gists_url": "https://api.github.com/users/aditya-agrawalSFDC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aditya-agrawalSFDC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aditya-agrawalSFDC/subscriptions",
"organizations_url": "https://api.github.com/users/aditya-agrawalSFDC/orgs",
"repos_url": "https://api.github.com/users/aditya-agrawalSFDC/repos",
"events_url": "https://api.github.com/users/aditya-agrawalSFDC/events{/privacy}",
"received_events_url": "https://api.github.com/users/aditya-agrawalSFDC/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8633/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1636
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1636/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1636/comments
|
https://api.github.com/repos/ollama/ollama/issues/1636/events
|
https://github.com/ollama/ollama/issues/1636
| 2,050,938,443
|
I_kwDOJ0Z1Ps56PtZL
| 1,636
|
Error : llama runner process has terminated , on running mistral "ollama run mistral"
|
{
"login": "yashchittora",
"id": 112685991,
"node_id": "U_kgDOBrdzpw",
"avatar_url": "https://avatars.githubusercontent.com/u/112685991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yashchittora",
"html_url": "https://github.com/yashchittora",
"followers_url": "https://api.github.com/users/yashchittora/followers",
"following_url": "https://api.github.com/users/yashchittora/following{/other_user}",
"gists_url": "https://api.github.com/users/yashchittora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yashchittora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yashchittora/subscriptions",
"organizations_url": "https://api.github.com/users/yashchittora/orgs",
"repos_url": "https://api.github.com/users/yashchittora/repos",
"events_url": "https://api.github.com/users/yashchittora/events{/privacy}",
"received_events_url": "https://api.github.com/users/yashchittora/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2023-12-20T16:54:46
| 2024-07-23T19:37:12
| 2024-01-08T21:42:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have a MacBook Air M1. The mistral model used to run flawlessly, but after the latest update of both Ollama and the mistral model, it refuses to run.
Any explanation or troubleshooting steps?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1636/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/745
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/745/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/745/comments
|
https://api.github.com/repos/ollama/ollama/issues/745/events
|
https://github.com/ollama/ollama/issues/745
| 1,934,099,225
|
I_kwDOJ0Z1Ps5zSAMZ
| 745
|
why different answers from same model?
|
{
"login": "Enhitech",
"id": 36785833,
"node_id": "MDQ6VXNlcjM2Nzg1ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/36785833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Enhitech",
"html_url": "https://github.com/Enhitech",
"followers_url": "https://api.github.com/users/Enhitech/followers",
"following_url": "https://api.github.com/users/Enhitech/following{/other_user}",
"gists_url": "https://api.github.com/users/Enhitech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Enhitech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Enhitech/subscriptions",
"organizations_url": "https://api.github.com/users/Enhitech/orgs",
"repos_url": "https://api.github.com/users/Enhitech/repos",
"events_url": "https://api.github.com/users/Enhitech/events{/privacy}",
"received_events_url": "https://api.github.com/users/Enhitech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-10T01:11:20
| 2023-10-11T00:21:04
| 2023-10-11T00:21:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, guys,
I run a llama2 model and then access it in three ways: 1. via the REST API; 2. via the command `ollama run modelname 'prompt'`; 3. via the interactive terminal.
I got different answers. 1 and 2 are similar, but 3 is much better than 1 and 2.
Why? How could I get the same answer as 3 via 1 or 2?
Thanks a lot!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/745/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2834
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2834/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2834/comments
|
https://api.github.com/repos/ollama/ollama/issues/2834/events
|
https://github.com/ollama/ollama/issues/2834
| 2,161,505,243
|
I_kwDOJ0Z1Ps6A1fPb
| 2,834
|
[feature request]Cmd: New Topic
|
{
"login": "lededev",
"id": 30518126,
"node_id": "MDQ6VXNlcjMwNTE4MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/30518126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lededev",
"html_url": "https://github.com/lededev",
"followers_url": "https://api.github.com/users/lededev/followers",
"following_url": "https://api.github.com/users/lededev/following{/other_user}",
"gists_url": "https://api.github.com/users/lededev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lededev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lededev/subscriptions",
"organizations_url": "https://api.github.com/users/lededev/orgs",
"repos_url": "https://api.github.com/users/lededev/repos",
"events_url": "https://api.github.com/users/lededev/events{/privacy}",
"received_events_url": "https://api.github.com/users/lededev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-29T15:18:48
| 2024-03-01T01:16:02
| 2024-03-01T01:16:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
On the command line, pressing `/?` shows no New Topic or New Session feature. Please add one, instead of requiring `/bye` and rerunning.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2834/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/632
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/632/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/632/comments
|
https://api.github.com/repos/ollama/ollama/issues/632/events
|
https://github.com/ollama/ollama/pull/632
| 1,917,549,741
|
PR_kwDOJ0Z1Ps5bcjLo
| 632
|
Document response stream chunk delimiter.
|
{
"login": "JayNakrani",
"id": 6269279,
"node_id": "MDQ6VXNlcjYyNjkyNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6269279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JayNakrani",
"html_url": "https://github.com/JayNakrani",
"followers_url": "https://api.github.com/users/JayNakrani/followers",
"following_url": "https://api.github.com/users/JayNakrani/following{/other_user}",
"gists_url": "https://api.github.com/users/JayNakrani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JayNakrani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JayNakrani/subscriptions",
"organizations_url": "https://api.github.com/users/JayNakrani/orgs",
"repos_url": "https://api.github.com/users/JayNakrani/repos",
"events_url": "https://api.github.com/users/JayNakrani/events{/privacy}",
"received_events_url": "https://api.github.com/users/JayNakrani/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-09-28T13:24:16
| 2023-09-30T04:46:03
| 2023-09-30T04:45:52
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/632",
"html_url": "https://github.com/ollama/ollama/pull/632",
"diff_url": "https://github.com/ollama/ollama/pull/632.diff",
"patch_url": "https://github.com/ollama/ollama/pull/632.patch",
"merged_at": "2023-09-30T04:45:52"
}
|
Discussion on discord at https://discord.com/channels/1128867683291627614/1128867684130508875/1156838261919076352
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/632/timeline
| null | null | true
|