Dataset schema (GitHub issues and pull requests from ollama/ollama; one row per record):

| Column | Dtype | Stats |
|---|---|---|
| url | string | lengths 51-54 |
| repository_url | string | 1 value |
| labels_url | string | lengths 65-68 |
| comments_url | string | lengths 60-63 |
| events_url | string | lengths 58-61 |
| html_url | string | lengths 39-44 |
| id | int64 | 1.78B-2.82B |
| node_id | string | lengths 18-19 |
| number | int64 | 1-8.69k |
| title | string | lengths 1-382 |
| user | dict | |
| labels | list | lengths 0-5 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0-2 |
| milestone | null | |
| comments | int64 | 0-323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 values |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 2-118k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 60-63 |
| performed_via_github_app | null | |
| state_reason | string | 4 values |
| is_pull_request | bool | 2 classes |

The rows below follow this schema, one issue or pull request per record.
---

**#4177: pull orca2:7b-fp16 Error: EOF**

- url: https://api.github.com/repos/ollama/ollama/issues/4177
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/4177/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/4177/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/4177/events
- html_url: https://github.com/ollama/ollama/issues/4177
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/4177/timeline
- id: 2279703592; node_id: I_kwDOJ0Z1Ps6H4YQo; number: 4177
- user: MarkWard0110 (id 90335263)
- labels: bug
- state: closed; state_reason: completed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 2
- created_at: 2024-05-05T19:57:54; updated_at: 2024-05-06T18:53:25; closed_at: 2024-05-06T18:33:30
- author_association: CONTRIBUTOR
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: null; pull_request: null; is_pull_request: false
- closed_by: BruceMacD (id 5853428)
- reactions: total_count 0

body:

### What is the issue?
`ollama pull orca2:7b-fp16` errors with `Error: EOF` when it is pulling manifest.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33
---

**#1295: Add verbose request logs to server.** (pull request)

- url: https://api.github.com/repos/ollama/ollama/issues/1295
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/1295/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/1295/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/1295/events
- html_url: https://github.com/ollama/ollama/pull/1295
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/1295/timeline
- id: 2013425784; node_id: PR_kwDOJ0Z1Ps5ggBuv; number: 1295
- user: rootedbox (id 3997890)
- labels: none
- state: closed; state_reason: null; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 1
- created_at: 2023-11-28T01:28:16; updated_at: 2024-05-07T23:46:51; closed_at: 2024-05-07T23:46:51
- author_association: CONTRIBUTOR
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: false; pull_request: https://api.github.com/repos/ollama/ollama/pulls/1295 (merged_at: null); is_pull_request: true
- closed_by: jmorganca (id 251292)
- reactions: total_count 0

body:

Add verbose request logs to server. Completes https://github.com/jmorganca/ollama/issues/1118

example output

```
2023/11/27 12:30:18 routes.go:736: Request POST - /api/generate; QueryParams: map[]; URLParams: []; Body: {"model":"orca-mini","prompt":"word up","system":"","template":"","context":[31822,13,8458,31922,3244,31871,13,3838,397,363,7421,8825,342,5243,10389,5164,828,31843,9530,362,988,362,365,473,31843,13,13,8458,31922,9779,31871,13,5521,397,365,13,13,8458,31922,13166,31871,13,312,705,363,7421,8825,29328,289,2803,365,288,2470,8931,291,1673,1132,31843,1035,473,312,955,365,1703,31902,13,8458,31922,3244,31871,13,3838,397,363,7421,8825,342,5243,10389,5164,828,31843,9530,362,988,362,365,473,31843,13,13,8458,31922,9779,31871,13,5521,397,365,13,13,8458,31922,13166,31871,13,312,705,363,7421,8825,29328,289,2803,365,288,2470,8931,291,1673,1132,31843,1035,473,312,955,365,1703,31902,13,8458,31922,3244,31871,13,3838,397,363,7421,8825,342,5243,10389,5164,828,31843,9530,362,988,362,365,473,31843,13,13,8458,31922,9779,31871,13,31824,3106,322,260,259,6285,313,13,13,8458,31922,13166,31871,13,312,25122,31844,504,312,705,432,1796,674,365,397,12065,289,31843,9410,365,3281,1673,541,3846,405,1132,562,266,5149,365,764,955,351,31902,13,8458,31922,3244,31871,13,3838,397,363,7421,8825,342,5243,10389,5164,828,31843,9530,362,988,362,365,473,31843,13,13,8458,31922,9779,31871,13,26755,13,13,8458,31922,13166,31871,13,312,31876,31836,10157,31844,504,362,363,7421,8825,31844,312,705,432,29328,289,1313,8931,342,6175,5000,31843,1725,4199,1717,322,289,2803,351,2470,8931,291,1673,1132,31843,1053,635,2492,2128,312,473,955,365,351,31902],"format":"","options":null}; ClientIP: 127.0.0.1; Status: 200; UserAgent: ollama/0.0.0 (arm64 darwin) Go/go1.21.4; Duration: 5.217641542s
```
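The log line in that example is the output of middleware wrapped around every route. The PR's actual code is not shown here, so the following is only a minimal standard-library sketch of the approach it describes: wrap the handler, capture the status code, and log method, path, query params, body, client IP, status, user agent, and duration. All names, the handler path, and the port are illustrative assumptions.

```go
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
	"time"
)

// statusRecorder captures the status code written by the wrapped handler.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// logRequests logs the same fields the PR's example output shows.
// (Hypothetical sketch; the real change lives in routes.go and may differ.)
func logRequests(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		start := time.Now()

		// Read the body for logging, then restore it for the real handler.
		body, _ := io.ReadAll(req.Body)
		req.Body = io.NopCloser(bytes.NewReader(body))

		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(rec, req)

		log.Printf("Request %s - %s; QueryParams: %v; Body: %s; ClientIP: %s; Status: %d; UserAgent: %s; Duration: %s",
			req.Method, req.URL.Path, req.URL.Query(), body,
			req.RemoteAddr, rec.status, req.UserAgent(), time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/generate", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":11434", logRequests(mux)))
}
```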
---

**#5268: Add Windows on ARM64 build instructions** (pull request)

- url: https://api.github.com/repos/ollama/ollama/issues/5268
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/5268/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/5268/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/5268/events
- html_url: https://github.com/ollama/ollama/pull/5268
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/5268/timeline
- id: 2371666486; node_id: PR_kwDOJ0Z1Ps5zcZy6; number: 5268
- user: hmartinez82 (id 1100440)
- labels: none
- state: closed; state_reason: null; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 1
- created_at: 2024-06-25T05:10:51; updated_at: 2024-11-21T17:49:40; closed_at: 2024-11-21T17:44:28
- author_association: CONTRIBUTOR
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: false; pull_request: https://api.github.com/repos/ollama/ollama/pulls/5268 (merged_at: null); is_pull_request: true
- closed_by: jmorganca (id 251292)
- reactions: total_count 1 (heart: 1)
- body: null
---

**#4859: glm-4-9b-chat**

- url: https://api.github.com/repos/ollama/ollama/issues/4859
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/4859/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/4859/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/4859/events
- html_url: https://github.com/ollama/ollama/issues/4859
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/4859/timeline
- id: 2338466659; node_id: I_kwDOJ0Z1Ps6LYitj; number: 4859
- user: enryteam (id 20081090)
- labels: model request
- state: closed; state_reason: completed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 1
- created_at: 2024-06-06T14:54:47; updated_at: 2024-06-06T17:34:23; closed_at: 2024-06-06T17:34:23
- author_association: NONE
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: null; pull_request: null; is_pull_request: false
- closed_by: mchiang0610 (id 3325447)
- reactions: total_count 2 (+1: 2)
- body: https://modelscope.cn/models/ZhipuAI/glm-4-9b-chat thanks !
---

**#150: Need word wrap**

- url: https://api.github.com/repos/ollama/ollama/issues/150
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/150/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/150/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/150/events
- html_url: https://github.com/ollama/ollama/issues/150
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/150/timeline
- id: 1814849853; node_id: I_kwDOJ0Z1Ps5sLGk9; number: 150
- user: nathanleclaire (id 1476820)
- labels: feature request
- state: closed; state_reason: completed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 3
- created_at: 2023-07-20T21:56:36; updated_at: 2023-09-26T22:57:12; closed_at: 2023-09-26T22:57:12
- author_association: NONE
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: null; pull_request: null; is_pull_request: false
- closed_by: mxyng (id 2372640)
- reactions: total_count 0

body:

<img width="97" alt="image" src="https://github.com/jmorganca/ollama/assets/1476820/fa92e643-a994-46ab-b83c-df3e28fbf758">
It pains me
---

**#6865: qwen2.5 context length**

- url: https://api.github.com/repos/ollama/ollama/issues/6865
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/6865/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/6865/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/6865/events
- html_url: https://github.com/ollama/ollama/issues/6865
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/6865/timeline
- id: 2535149334; node_id: I_kwDOJ0Z1Ps6XG08W; number: 6865
- user: zlwu (id 214708)
- labels: bug
- state: open; state_reason: null; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 1
- created_at: 2024-09-19T02:41:04; updated_at: 2024-09-19T23:33:54; closed_at: null
- author_association: NONE
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: null; pull_request: null; is_pull_request: false
- closed_by: null
- reactions: total_count 0

body:

### What is the issue?
<img width="674" alt="image" src="https://github.com/user-attachments/assets/03949cc7-07fd-45c4-a09a-4a971e0a3586">
According to the model card, the context length should be **128k**?
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.3.10
---

**#6421: Add gitlab.com/tozd/go/fun Go package** (pull request)

- url: https://api.github.com/repos/ollama/ollama/issues/6421
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/6421/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/6421/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/6421/events
- html_url: https://github.com/ollama/ollama/pull/6421
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/6421/timeline
- id: 2473105480; node_id: PR_kwDOJ0Z1Ps54uONY; number: 6421
- user: mitar (id 585279)
- labels: none
- state: closed; state_reason: null; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 3
- created_at: 2024-08-19T11:15:06; updated_at: 2024-09-04T14:57:37; closed_at: 2024-09-04T14:52:46
- author_association: CONTRIBUTOR
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: false; pull_request: https://api.github.com/repos/ollama/ollama/pulls/6421 (merged_at: 2024-09-04T14:52:46); is_pull_request: true
- closed_by: jmorganca (id 251292)
- reactions: total_count 0

body:

`gitlab.com/tozd/go/fun` is a Go package which provides high-level abstraction to define functions with code (the usual way), data (providing examples of inputs and expected outputs which are then used with an AI model), or natural language description. It is the simplest but powerful way to use large language models (LLMs) in Go.
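The package's own API isn't shown in this record, so the following is only a rough sketch of the data-driven idea the description mentions (deriving a "function" from example input/output pairs plus an AI model), written directly against Ollama's /api/generate endpoint rather than the library. The model name and all identifiers here are assumptions.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// fewShot builds a prompt from example input/output pairs: the "define a
// function with data" idea, independent of any particular library.
func fewShot(examples [][2]string, input string) string {
	var b bytes.Buffer
	for _, ex := range examples {
		fmt.Fprintf(&b, "Input: %s\nOutput: %s\n\n", ex[0], ex[1])
	}
	fmt.Fprintf(&b, "Input: %s\nOutput:", input)
	return b.String()
}

func main() {
	// Two examples are enough to imply "uppercase the input".
	prompt := fewShot([][2]string{
		{"hello", "HELLO"},
		{"world", "WORLD"},
	}, "ollama")

	payload, _ := json.Marshal(map[string]any{
		"model":  "llama3", // assumed model name; any locally pulled model works
		"prompt": prompt,
		"stream": false,
	})
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response) // ideally: OLLAMA
}
```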
---

**#3346: move community integrations to their own doc** (pull request)

- url: https://api.github.com/repos/ollama/ollama/issues/3346
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/3346/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/3346/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/3346/events
- html_url: https://github.com/ollama/ollama/pull/3346
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/3346/timeline
- id: 2206513425; node_id: PR_kwDOJ0Z1Ps5qsuWv; number: 3346
- user: BruceMacD (id 5853428)
- labels: none
- state: closed; state_reason: null; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 0
- created_at: 2024-03-25T19:21:03; updated_at: 2024-04-01T15:14:19; closed_at: 2024-04-01T15:14:19
- author_association: CONTRIBUTOR
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: false; pull_request: https://api.github.com/repos/ollama/ollama/pulls/3346 (merged_at: null); is_pull_request: true
- closed_by: BruceMacD (id 5853428)
- reactions: total_count 1 (+1: 1)

body:

The community integration section at the end of the README is getting quite long, moving it to its own doc to keep things tidy.
---

**#5432: level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process no longer running: -1 "**

- url: https://api.github.com/repos/ollama/ollama/issues/5432
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/5432/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/5432/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/5432/events
- html_url: https://github.com/ollama/ollama/issues/5432
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/5432/timeline
- id: 2386047676; node_id: I_kwDOJ0Z1Ps6OODK8; number: 5432
- user: popav4 (id 7868172)
- labels: bug
- state: closed; state_reason: completed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 2
- created_at: 2024-07-02T11:49:27; updated_at: 2024-07-02T20:18:50; closed_at: 2024-07-02T20:18:24
- author_association: NONE
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: null; pull_request: null; is_pull_request: false
- closed_by: dhiltgen (id 4033016)
- reactions: total_count 0

body:

### What is the issue?
Macbook Air M1
Run with Docker:
`docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`
`docker exec -it ollama ollama run codestral:22b`
Error:
> level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process no longer running: -1 "

[docker.log](https://github.com/user-attachments/files/16067480/docker.log)
### OS
Docker
### GPU
Other
### CPU
Apple
### Ollama version
0.1.48
---

**#7747: Support Pixtral Large**

- url: https://api.github.com/repos/ollama/ollama/issues/7747
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/7747/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/7747/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/7747/events
- html_url: https://github.com/ollama/ollama/issues/7747
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/7747/timeline
- id: 2673723799; node_id: I_kwDOJ0Z1Ps6fXcmX; number: 7747
- user: YuntianZhao (id 32049544)
- labels: model request
- state: closed; state_reason: completed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 2
- created_at: 2024-11-19T22:32:19; updated_at: 2024-11-21T06:58:07; closed_at: 2024-11-21T06:58:06
- author_association: NONE
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: null; pull_request: null; is_pull_request: false
- closed_by: mchiang0610 (id 3325447)
- reactions: total_count 0

body:

Mistral AI just released Pixtral Large, a 124B multimodal model built on top of Mistral Large 2. See https://huggingface.co/mistralai/Pixtral-Large-Instruct-2411
---

**#7734: add a Feature : clone and duplicate model from scratch when creating new model from Modelfile to laod to gpu memory**

- url: https://api.github.com/repos/ollama/ollama/issues/7734
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/7734/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/7734/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/7734/events
- html_url: https://github.com/ollama/ollama/issues/7734
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/7734/timeline
- id: 2670766147; node_id: I_kwDOJ0Z1Ps6fMKhD; number: 7734
- user: looijijohn (id 180949480)
- labels: feature request
- state: closed; state_reason: completed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 2
- created_at: 2024-11-19T05:02:18; updated_at: 2024-12-02T15:32:44; closed_at: 2024-12-02T15:32:44
- author_association: NONE
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: null; pull_request: null; is_pull_request: false
- closed_by: rick-github (id 14946854)
- reactions: total_count 0

body:

When we create a new_model from a BaseModel, ollama does not load the model into memory again; it reuses the already-loaded BaseModel as the new model, so the NewModel is not loaded as a separate model. We want to use the two models separately and load both into GPU memory; we have enough memory.
---

**#5721: README: Added AI Studio to the list of UIs** (pull request)

- url: https://api.github.com/repos/ollama/ollama/issues/5721
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/5721/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/5721/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/5721/events
- html_url: https://github.com/ollama/ollama/pull/5721
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/5721/timeline
- id: 2410643198; node_id: PR_kwDOJ0Z1Ps51fNUE; number: 5721
- user: SommerEngineering (id 5158645)
- labels: none
- state: closed; state_reason: null; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 1
- created_at: 2024-07-16T09:17:40; updated_at: 2024-07-16T21:24:27; closed_at: 2024-07-16T21:24:27
- author_association: CONTRIBUTOR
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: false; pull_request: https://api.github.com/repos/ollama/ollama/pulls/5721 (merged_at: 2024-07-16T21:24:27); is_pull_request: true
- closed_by: mchiang0610 (id 3325447)
- reactions: total_count 0

body:

I added [AI Studio](https://github.com/MindWorkAI/AI-Studio) to the list of UIs.
---

**#5246: llm: speed up gguf decoding by a lot** (pull request)

- url: https://api.github.com/repos/ollama/ollama/issues/5246
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/5246/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/5246/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/5246/events
- html_url: https://github.com/ollama/ollama/pull/5246
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/5246/timeline
- id: 2368953745; node_id: PR_kwDOJ0Z1Ps5zTGc4; number: 5246
- user: bmizerany (id 46)
- labels: none
- state: closed; state_reason: null; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 9
- created_at: 2024-06-24T00:07:14; updated_at: 2024-06-25T04:49:18; closed_at: 2024-06-25T04:47:52
- author_association: CONTRIBUTOR
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: false; pull_request: https://api.github.com/repos/ollama/ollama/pulls/5246 (merged_at: 2024-06-25T04:47:52); is_pull_request: true
- closed_by: bmizerany (id 46)
- reactions: total_count 1 (heart: 1)

body:

Previously, some costly things were causing the loading of GGUF files and their metadata and tensor information to be VERY slow:

* Too many allocations when decoding strings
* Hitting disk for each read of each key and value, resulting in a not-okay amount of syscalls/disk I/O.

The show API is now down to 33ms from 800ms+ for llama3 on a macbook pro m3.

This commit also prevents collecting large arrays of values when decoding GGUFs (if desired). When such keys are encountered, their values are null, and are encoded as such in JSON.
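For illustration only, here is a minimal sketch of the two general fixes the description names, not the PR's actual code: wrap the file in a buffered reader so metadata reads stop costing one syscall each, and reuse a scratch slice when decoding length-prefixed strings so there is no per-key buffer allocation. The type and method names are made up; the only format detail assumed is GGUF's little-endian uint64 length prefix for metadata strings.

```go
package gguf

import (
	"bufio"
	"encoding/binary"
	"io"
	"os"
)

// decoder reads GGUF metadata through a large in-memory buffer, so each
// key/value read is a memory copy rather than its own disk syscall.
type decoder struct {
	r       *bufio.Reader
	scratch []byte // reused across readString calls to avoid per-key allocations
}

func newDecoder(f *os.File) *decoder {
	return &decoder{r: bufio.NewReaderSize(f, 1<<20)} // 1 MiB read buffer
}

// readString decodes a length-prefixed string (uint64 little-endian length,
// then that many bytes), reusing the scratch slice for the intermediate copy.
func (d *decoder) readString() (string, error) {
	var n uint64
	if err := binary.Read(d.r, binary.LittleEndian, &n); err != nil {
		return "", err
	}
	if uint64(cap(d.scratch)) < n {
		d.scratch = make([]byte, n)
	}
	buf := d.scratch[:n]
	if _, err := io.ReadFull(d.r, buf); err != nil {
		return "", err
	}
	return string(buf), nil // single allocation: the returned string itself
}
```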
---

**#4528: OLLAMA_MODELS no longer works**

- url: https://api.github.com/repos/ollama/ollama/issues/4528
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/4528/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/4528/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/4528/events
- html_url: https://github.com/ollama/ollama/issues/4528
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/4528/timeline
- id: 2304889457; node_id: I_kwDOJ0Z1Ps6JYdJx; number: 4528
- user: asmrtfm (id 154548075)
- labels: bug
- state: closed; state_reason: completed; locked: false
- assignee: dhiltgen (id 4033016); assignees: [dhiltgen]; milestone: null
- comments: 1
- created_at: 2024-05-20T01:19:23; updated_at: 2024-05-26T12:03:49; closed_at: 2024-05-26T10:42:52
- author_association: NONE
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: null; pull_request: null; is_pull_request: false
- closed_by: asmrtfm (id 154548075)
- reactions: total_count 0

body:

> setting the OLLAMA_MODELS environment variable (was) no longer working.

[ edit: A reboot resolved this, so closed.

* additional details: models were in ~/.ollama (ownership: 1000:1000); environment variable was accidentally commented out in ~/.bashrc before a reboot - so, unsurprisingly, no dice; uncommented line in .bashrc; re-login'd; models still not detected; killed ollama; `. ~/.bashrc`'d for good measure; re-ran `ollama serve & ollama list;` no dice; killed ollama; then:

> I tried making /usr/share/ollama/.ollama a symlink to ~/.ollama; and then: `chown -R ollama ~/.ollama/models/...` ; `ollama serve & ollama list;` same amount of dice; also noticed: at one (non-canonical) point, ollama failed to start from cmd line and yielded an error msg that, to me, seemed to suggest it tries to (re)generate the default directory upon launch - with something equivalent to `mkdir` instead of something equivalent to `mkdir -p`; ...eventually deleted symlink and rebooted; seems to be working; closed this.

### OS
Linux
### GPU
Nvidia
### CPU
Intel
---

**#214: allow for concurrent pulls of the same files** (pull request)

- url: https://api.github.com/repos/ollama/ollama/issues/214
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/214/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/214/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/214/events
- html_url: https://github.com/ollama/ollama/pull/214
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/214/timeline
- id: 1821158562; node_id: PR_kwDOJ0Z1Ps5WYJ16; number: 214
- user: BruceMacD (id 5853428)
- labels: none
- state: closed; state_reason: null; locked: false
- assignee: null; assignees: []; milestone: null
- comments: 0
- created_at: 2023-07-25T21:10:24; updated_at: 2023-08-09T15:35:25; closed_at: 2023-08-09T15:35:24
- author_association: CONTRIBUTOR
- sub_issues_summary: total: 0, completed: 0, percent_completed: 0
- active_lock_reason: null; performed_via_github_app: null
- draft: false; pull_request: https://api.github.com/repos/ollama/ollama/pulls/214 (merged_at: 2023-08-09T15:35:24); is_pull_request: true
- closed_by: BruceMacD (id 5853428)
- reactions: total_count 0

body:

resolves #200
https://api.github.com/repos/ollama/ollama/issues/4222
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4222/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4222/comments
https://api.github.com/repos/ollama/ollama/issues/4222/events
https://github.com/ollama/ollama/issues/4222
2,282,526,031
I_kwDOJ0Z1Ps6IDJVP
4,222
server not responding
{ "login": "thomassrour", "id": 79809227, "node_id": "MDQ6VXNlcjc5ODA5MjI3", "avatar_url": "https://avatars.githubusercontent.com/u/79809227?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomassrour", "html_url": "https://github.com/thomassrour", "followers_url": "https://api.github.com/users/thomassrour/followers", "following_url": "https://api.github.com/users/thomassrour/following{/other_user}", "gists_url": "https://api.github.com/users/thomassrour/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomassrour/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomassrour/subscriptions", "organizations_url": "https://api.github.com/users/thomassrour/orgs", "repos_url": "https://api.github.com/users/thomassrour/repos", "events_url": "https://api.github.com/users/thomassrour/events{/privacy}", "received_events_url": "https://api.github.com/users/thomassrour/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
6
2024-05-07T07:39:08
2024-05-31T21:35:46
2024-05-31T21:35:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello, I have trouble reaching my ollama container. I have tried using the images for 0.1.32 and 0.1.33, as some users reported bugs in 0.1.33, but it doesn't work with either. Here is the output of docker logs when trying mixtral (I have also tried llama3, same result): time=2024-05-07T07:33:21.130Z level=INFO source=images.go:817 msg="total blobs: 10" time=2024-05-07T07:33:21.134Z level=INFO source=images.go:824 msg="total unused blobs removed: 0" time=2024-05-07T07:33:21.135Z level=INFO source=routes.go:1143 msg="Listening on [::]:11434 (version 0.1.32)" time=2024-05-07T07:33:21.136Z level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama3873501864/runners time=2024-05-07T07:33:21.136Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz time=2024-05-07T07:33:21.136Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz time=2024-05-07T07:33:21.136Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz time=2024-05-07T07:33:21.136Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublas.so.11.gz time=2024-05-07T07:33:21.136Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublasLt.so.11.gz time=2024-05-07T07:33:21.136Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcudart.so.11.0.gz time=2024-05-07T07:33:21.136Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz time=2024-05-07T07:33:21.136Z level=DEBUG source=payload.go:160 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/deps.txt.gz time=2024-05-07T07:33:21.136Z level=DEBUG source=payload.go:160 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/ollama_llama_server.gz time=2024-05-07T07:33:25.457Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cpu time=2024-05-07T07:33:25.457Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cpu_avx time=2024-05-07T07:33:25.457Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cpu_avx2 time=2024-05-07T07:33:25.457Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cuda_v11 time=2024-05-07T07:33:25.457Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/rocm_v60002 time=2024-05-07T07:33:25.457Z level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cuda_v11 rocm_v60002 cpu cpu_avx cpu_avx2]" time=2024-05-07T07:33:25.457Z level=DEBUG source=payload.go:42 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY" time=2024-05-07T07:33:25.457Z level=INFO source=gpu.go:121 msg="Detecting GPU type" time=2024-05-07T07:33:25.457Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*" time=2024-05-07T07:33:25.457Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/tmp/ollama3873501864/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* 
/usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /usr/local/nvidia/lib/libcudart.so** /usr/local/nvidia/lib64/libcudart.so**]" time=2024-05-07T07:33:25.462Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3873501864/runners/cuda_v11/libcudart.so.11.0]" wiring cudart library functions in /tmp/ollama3873501864/runners/cuda_v11/libcudart.so.11.0 dlsym: cudaSetDevice dlsym: cudaDeviceSynchronize dlsym: cudaDeviceReset dlsym: cudaMemGetInfo dlsym: cudaGetDeviceCount dlsym: cudaDeviceGetAttribute dlsym: cudaDriverGetVersion CUDA driver version: 12-0 time=2024-05-07T07:33:25.495Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart" time=2024-05-07T07:33:25.495Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" [0] CUDA totalMem 85895020544 [0] CUDA freeMem 79269199872 time=2024-05-07T07:33:25.661Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.0" releasing cudart library time=2024-05-07T07:33:30.257Z level=DEBUG source=gguf.go:57 msg="model = &llm.gguf{containerGGUF:(*llm.containerGGUF)(0xc000426c80), kv:llm.KV{}, tensors:[]*llm.Tensor(nil), parameters:0x0}" time=2024-05-07T07:33:30.522Z level=DEBUG source=gguf.go:193 msg="general.architecture = llama" time=2024-05-07T07:33:30.528Z level=INFO source=gpu.go:121 msg="Detecting GPU type" time=2024-05-07T07:33:30.528Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*" time=2024-05-07T07:33:30.528Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/tmp/ollama3873501864/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /usr/local/nvidia/lib/libcudart.so** /usr/local/nvidia/lib64/libcudart.so**]" time=2024-05-07T07:33:30.529Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3873501864/runners/cuda_v11/libcudart.so.11.0]" wiring cudart library functions in /tmp/ollama3873501864/runners/cuda_v11/libcudart.so.11.0 dlsym: cudaSetDevice dlsym: cudaDeviceSynchronize dlsym: cudaDeviceReset dlsym: cudaMemGetInfo dlsym: cudaGetDeviceCount dlsym: cudaDeviceGetAttribute dlsym: cudaDriverGetVersion CUDA driver version: 12-0 time=2024-05-07T07:33:30.530Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart" time=2024-05-07T07:33:30.530Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" [0] CUDA totalMem 85895020544 [0] CUDA freeMem 79269199872 time=2024-05-07T07:33:30.674Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.0" releasing cudart library time=2024-05-07T07:33:30.719Z level=INFO source=gpu.go:121 msg="Detecting GPU type" time=2024-05-07T07:33:30.719Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*" time=2024-05-07T07:33:30.719Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: 
[/tmp/ollama3873501864/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /usr/local/nvidia/lib/libcudart.so** /usr/local/nvidia/lib64/libcudart.so**]" time=2024-05-07T07:33:30.720Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3873501864/runners/cuda_v11/libcudart.so.11.0]" wiring cudart library functions in /tmp/ollama3873501864/runners/cuda_v11/libcudart.so.11.0 dlsym: cudaSetDevice dlsym: cudaDeviceSynchronize dlsym: cudaDeviceReset dlsym: cudaMemGetInfo dlsym: cudaGetDeviceCount dlsym: cudaDeviceGetAttribute dlsym: cudaDriverGetVersion CUDA driver version: 12-0 time=2024-05-07T07:33:30.721Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart" time=2024-05-07T07:33:30.721Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" [0] CUDA totalMem 85895020544 [0] CUDA freeMem 79269199872 time=2024-05-07T07:33:30.869Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.0" releasing cudart library time=2024-05-07T07:33:30.916Z level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33 required="26042.6 MiB" used="26042.6 MiB" available="75597.0 MiB" kv="256.0 MiB" fulloffload="184.0 MiB" partialoffload="935.0 MiB" time=2024-05-07T07:33:30.916Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cpu time=2024-05-07T07:33:30.916Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cpu_avx time=2024-05-07T07:33:30.916Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cpu_avx2 time=2024-05-07T07:33:30.916Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cuda_v11 time=2024-05-07T07:33:30.916Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/rocm_v60002 time=2024-05-07T07:33:30.916Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cpu time=2024-05-07T07:33:30.916Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cpu_avx time=2024-05-07T07:33:30.916Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cpu_avx2 time=2024-05-07T07:33:30.916Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/cuda_v11 time=2024-05-07T07:33:30.916Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3873501864/runners/rocm_v60002 time=2024-05-07T07:33:30.916Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-05-07T07:33:30.918Z level=DEBUG source=server.go:259 msg="LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/tmp/ollama3873501864/runners/cuda_v11" time=2024-05-07T07:33:30.918Z level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama3873501864/runners/cuda_v11/ollama_llama_server --model 
/root/.ollama/models/blobs/sha256-e9e56e8bb5f0fcd4860675e6837a8f6a94e659f5fa7dce6a1076279336320f2b --ctx-size 2048 --batch-size 512 --embedding --log-format json --n-gpu-layers 33 --verbose --port 33787" time=2024-05-07T07:33:30.919Z level=INFO source=server.go:389 msg="waiting for llama runner to start responding" {"function":"server_params_parse","level":"WARN","line":2494,"msg":"server.cpp is not built with verbose logging.","tid":"140005422133248","timestamp":1715067210} time=2024-05-07T07:33:30.970Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:33787/health\": dial tcp 127.0.0.1:33787: connect: connection refused" {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"140005422133248","timestamp":1715067210} {"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":4,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"140005422133248","timestamp":1715067210,"total_threads":8} llama_model_loader: loaded meta data with 26 key-value pairs and 995 tensors from /root/.ollama/models/blobs/sha256-e9e56e8bb5f0fcd4860675e6837a8f6a94e659f5fa7dce6a1076279336320f2b (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = mistralai llama_model_loader: - kv 2: llama.context_length u32 = 32768 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 llama_model_loader: - kv 4: llama.block_count u32 = 32 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 9: llama.expert_count u32 = 8 llama_model_loader: - kv 10: llama.expert_used_count u32 = 2 llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 12: llama.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 13: general.file_type u32 = 2 llama_model_loader: - kv 14: tokenizer.ggml.model str = llama llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e... llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 24: tokenizer.chat_template str = {{ bos_token }}{% for message in mess... 
llama_model_loader: - kv 25: general.quantization_version u32 = 2 llama_model_loader: - type f32: 65 tensors llama_model_loader: - type f16: 32 tensors llama_model_loader: - type q4_0: 833 tensors llama_model_loader: - type q8_0: 64 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special tokens definition check successful ( 259/32000 ). llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: n_ctx_train = 32768 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 8 llm_load_print_meta: n_expert_used = 2 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 1000000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_yarn_orig_ctx = 32768 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: model type = 8x7B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model params = 46.70 B llm_load_print_meta: model size = 24.62 GiB (4.53 BPW) llm_load_print_meta: general.name = mistralai llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: LF token = 13 '<0x0A>' ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes ggml_cuda_init: CUDA_USE_TENSOR_CORES: no ggml_cuda_init: found 1 CUDA devices: Device 0: GRID A100D-80C, compute capability 8.0, VMM: no time=2024-05-07T07:33:31.220Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding" llm_load_tensors: ggml ctx size = 0.96 MiB llm_load_tensors: offloading 32 repeating layers to GPU llm_load_tensors: offloading non-repeating layers to GPU llm_load_tensors: offloaded 33/33 layers to GPU llm_load_tensors: CUDA_Host buffer size = 70.31 MiB llm_load_tensors: CUDA0 buffer size = 25145.55 MiB ..................................................time=2024-05-07T07:35:23.693Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:33787/health\": dial tcp 127.0.0.1:33787: i/o timeout" time=2024-05-07T07:35:23.894Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding" ` Thanks for your help ### OS Docker ### GPU Nvidia ### CPU _No response_ ### Ollama version 0.1.32, 0.1.33
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4222/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4929
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4929/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4929/comments
https://api.github.com/repos/ollama/ollama/issues/4929/events
https://github.com/ollama/ollama/issues/4929
2,341,613,818
I_kwDOJ0Z1Ps6LkjD6
4,929
Never-ending loading whether using the OpenAI API or Ollama Python
{ "login": "Wannabeasmartguy", "id": 107250451, "node_id": "U_kgDOBmSDEw", "avatar_url": "https://avatars.githubusercontent.com/u/107250451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wannabeasmartguy", "html_url": "https://github.com/Wannabeasmartguy", "followers_url": "https://api.github.com/users/Wannabeasmartguy/followers", "following_url": "https://api.github.com/users/Wannabeasmartguy/following{/other_user}", "gists_url": "https://api.github.com/users/Wannabeasmartguy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wannabeasmartguy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wannabeasmartguy/subscriptions", "organizations_url": "https://api.github.com/users/Wannabeasmartguy/orgs", "repos_url": "https://api.github.com/users/Wannabeasmartguy/repos", "events_url": "https://api.github.com/users/Wannabeasmartguy/events{/privacy}", "received_events_url": "https://api.github.com/users/Wannabeasmartguy/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-06-08T11:30:37
2024-08-12T07:40:29
2024-08-12T07:40:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, I'm having a problem: whether I'm using the OpenAI API or ollama-python, model inference gets stuck in never-ending loading. ![image](https://github.com/ollama/ollama/assets/107250451/4b97cb64-e8ed-4c71-a1b0-b3d2d04c397c) When I check the logs, I see no record of a POST request. ![image](https://github.com/ollama/ollama/assets/107250451/07f13b43-98be-4a08-be89-0a86b0bdb2d5) I tried to fix it by reinstalling and rebooting, but that didn't work. What puzzles me most is that on my other laptop, with a similar working environment (Windows), both run fine. I didn't find a similar problem in the existing issues, so any suggestions on how to solve this would be greatly appreciated. ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.42
{ "login": "Wannabeasmartguy", "id": 107250451, "node_id": "U_kgDOBmSDEw", "avatar_url": "https://avatars.githubusercontent.com/u/107250451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wannabeasmartguy", "html_url": "https://github.com/Wannabeasmartguy", "followers_url": "https://api.github.com/users/Wannabeasmartguy/followers", "following_url": "https://api.github.com/users/Wannabeasmartguy/following{/other_user}", "gists_url": "https://api.github.com/users/Wannabeasmartguy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wannabeasmartguy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wannabeasmartguy/subscriptions", "organizations_url": "https://api.github.com/users/Wannabeasmartguy/orgs", "repos_url": "https://api.github.com/users/Wannabeasmartguy/repos", "events_url": "https://api.github.com/users/Wannabeasmartguy/events{/privacy}", "received_events_url": "https://api.github.com/users/Wannabeasmartguy/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4929/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/744
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/744/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/744/comments
https://api.github.com/repos/ollama/ollama/issues/744/events
https://github.com/ollama/ollama/issues/744
1,933,682,760
I_kwDOJ0Z1Ps5zQahI
744
"Delete word" buggy in TUI
{ "login": "mjvmroz", "id": 4539332, "node_id": "MDQ6VXNlcjQ1MzkzMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/4539332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mjvmroz", "html_url": "https://github.com/mjvmroz", "followers_url": "https://api.github.com/users/mjvmroz/followers", "following_url": "https://api.github.com/users/mjvmroz/following{/other_user}", "gists_url": "https://api.github.com/users/mjvmroz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mjvmroz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mjvmroz/subscriptions", "organizations_url": "https://api.github.com/users/mjvmroz/orgs", "repos_url": "https://api.github.com/users/mjvmroz/repos", "events_url": "https://api.github.com/users/mjvmroz/events{/privacy}", "received_events_url": "https://api.github.com/users/mjvmroz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2023-10-09T19:42:42
2023-10-25T23:53:59
2023-10-25T23:53:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Using a delete word hotkey (e.g. ctrl-w) when the cursor is within the first word of a prompt causes the entire prompt to be deleted. Steps to reproduce: 1. Type several words at an ollama LLM prompt 2. Move the cursor to the first word (immediately following ">>>") 3. Use a "delete word" hotkey (e.g. ctrl-w) 4. The entire prompt is deleted Expected behavior: Only the first word of the prompt is deleted
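For illustration, a minimal sketch of the expected clamping behavior, assuming a rune buffer and a cursor index (function and variable names here are hypothetical, not ollama's actual readline implementation): the backward scan must stop at index 0, so a "delete word" inside the first word removes at most that word.

```
package main

import "fmt"

// deleteWordBack removes the word immediately before the cursor and returns
// the new buffer and cursor position. Both loops clamp at index 0, so the
// call can never erase more than the first word of the prompt.
func deleteWordBack(buf []rune, cursor int) ([]rune, int) {
	i := cursor
	// Skip any whitespace directly before the cursor.
	for i > 0 && buf[i-1] == ' ' {
		i--
	}
	// Consume the word itself, stopping at the start of the buffer.
	for i > 0 && buf[i-1] != ' ' {
		i--
	}
	return append(buf[:i], buf[cursor:]...), i
}

func main() {
	buf, cur := deleteWordBack([]rune("hello world"), 5) // cursor right after "hello"
	fmt.Printf("%q %d\n", string(buf), cur)              // " world" 0, not an empty buffer
}
```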
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/744/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7203
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7203/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7203/comments
https://api.github.com/repos/ollama/ollama/issues/7203/events
https://github.com/ollama/ollama/pull/7203
2,587,209,895
PR_kwDOJ0Z1Ps5-mcHl
7,203
Move macos v11 support flags to build script
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-10-14T22:45:30
2024-10-16T19:49:49
2024-10-16T19:49:46
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7203", "html_url": "https://github.com/ollama/ollama/pull/7203", "diff_url": "https://github.com/ollama/ollama/pull/7203.diff", "patch_url": "https://github.com/ollama/ollama/pull/7203.patch", "merged_at": "2024-10-16T19:49:46" }
Having v11 support hard-coded into the cgo settings causes warnings for newer Xcode versions. This should help keep the build clean for users building from source with the latest tools, while still allowing us to target the older OS via our CI processes.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7203/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7392
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7392/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7392/comments
https://api.github.com/repos/ollama/ollama/issues/7392/events
https://github.com/ollama/ollama/issues/7392
2,617,321,426
I_kwDOJ0Z1Ps6cASfS
7,392
Fails to build on macOS with "fatal error: {'string','cstdint'} file not found"
{ "login": "efd6", "id": 90160302, "node_id": "MDQ6VXNlcjkwMTYwMzAy", "avatar_url": "https://avatars.githubusercontent.com/u/90160302?v=4", "gravatar_id": "", "url": "https://api.github.com/users/efd6", "html_url": "https://github.com/efd6", "followers_url": "https://api.github.com/users/efd6/followers", "following_url": "https://api.github.com/users/efd6/following{/other_user}", "gists_url": "https://api.github.com/users/efd6/gists{/gist_id}", "starred_url": "https://api.github.com/users/efd6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/efd6/subscriptions", "organizations_url": "https://api.github.com/users/efd6/orgs", "repos_url": "https://api.github.com/users/efd6/repos", "events_url": "https://api.github.com/users/efd6/events{/privacy}", "received_events_url": "https://api.github.com/users/efd6/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677279472, "node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A", "url": "https://api.github.com/repos/ollama/ollama/labels/macos", "name": "macos", "color": "E2DBC0", "default": false, "description": "" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" }, { "id": 7700262114, "node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g", "url": "https://api.github.com/repos/ollama/ollama/labels/build", "name": "build", "color": "006b75", "default": false, "description": "Issues relating to building ollama from source" } ]
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
11
2024-10-28T05:11:29
2024-12-08T21:20:34
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I followed the instructions for building on mac [here](https://github.com/ollama/ollama/blob/main/docs/development.md#macos), but this failed at the `go generate` step. Running `go generate ./...` fails with a set of header files not found errors. ``` $ go generate ./... + set -o pipefail + compress_pids= + echo 'Starting darwin generate script' Starting darwin generate script ++ dirname ./gen_darwin.sh <snip> [ 26%] Linking CXX static library libggml.a [ 26%] Built target ggml [ 33%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o [ 40%] Building CXX object src/CMakeFiles/llama.dir/llama-vocab.cpp.o [ 46%] Building CXX object src/CMakeFiles/llama.dir/llama-sampling.cpp.o [ 53%] Building CXX object src/CMakeFiles/llama.dir/unicode-data.cpp.o [ 53%] Building CXX object src/CMakeFiles/llama.dir/llama-grammar.cpp.o [ 53%] Building CXX object src/CMakeFiles/llama.dir/unicode.cpp.o In file included from .../src/github.com/ollama/ollama/llm/llama.cpp/src/unicode.cpp:5: .../src/github.com/ollama/ollama/llm/llama.cpp/src/unicode.h:3:10: fatal error: 'cstdint' file not found 3 | #include <cstdint> | ^~~~~~~~~ 1 error generated. make[3]: *** [src/CMakeFiles/llama.dir/unicode.cpp.o] Error 1 make[3]: *** Waiting for unfinished jobs.... In file included from .../src/github.com/ollama/ollama/llm/llama.cpp/src/unicode-data.cpp:3: .../src/github.com/ollama/ollama/llm/llama.cpp/src/unicode-data.h:3:10: fatal error: 'cstdint' file not found 3 | #include <cstdint> | ^~~~~~~~~ 1 error generated. make[3]: *** [src/CMakeFiles/llama.dir/unicode-data.cpp.o] Error 1 In file included from .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama-grammar.cpp:1: In file included from .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama-grammar.h:3: .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama-impl.h:5:10: fatal error: 'string' file not found 5 | #include <string> | ^~~~~~~~ In file included from .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama-vocab.cpp:1: In file included from .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama-vocab.h:3: .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama-impl.h:5:10: fatal error: 'string' file not found 5 | #include <string> | ^~~~~~~~ In file included from .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama-sampling.cpp:1: In file included from .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama-sampling.h:5: In file included from .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama-grammar.h:3: .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama-impl.h:5:10: fatal error: 'string' file not found 5 | #include <string> | ^~~~~~~~ 1 error generated. make[3]: *** [src/CMakeFiles/llama.dir/llama-grammar.cpp.o] Error 1 1 error generated. 1 error generated. make[3]: *** [src/CMakeFiles/llama.dir/llama-vocab.cpp.o] Error 1 In file included from .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama.cpp:1: .../src/github.com/ollama/ollama/llm/llama.cpp/src/llama-impl.h:5:10: fatal error: 'string' file not found 5 | #include <string> | ^~~~~~~~ make[3]: *** [src/CMakeFiles/llama.dir/llama-sampling.cpp.o] Error 1 1 error generated. 
make[3]: *** [src/CMakeFiles/llama.dir/llama.cpp.o] Error 1 make[2]: *** [src/CMakeFiles/llama.dir/all] Error 2 make[1]: *** [ext_server/CMakeFiles/ollama_llama_server.dir/rule] Error 2 make: *** [ollama_llama_server] Error 2 llm/generate/generate_darwin.go:3: running "bash": exit status 2 ``` Noted dependencies: ``` $ go version go version go1.22.8 darwin/amd64 $ cmake --version cmake version 3.30.5 CMake suite maintained and supported by Kitware (kitware.com/cmake). $ gcc --version Apple clang version 16.0.0 (clang-1600.0.26.3) Target: x86_64-apple-darwin23.6.0 Thread model: posix InstalledDir: /Library/Developer/CommandLineTools/usr/bin ``` I expected to see a completed generation (as seen on linux). Alternatively, if other deps are required, I expected to see them listed in the [developer build instructions](https://github.com/ollama/ollama/blob/main/docs/development.md#macos). ### OS macOS ### GPU Apple ### CPU AMD ### Ollama version abd5dfd06a8e1309394b07c111be5f8412fca600
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7392/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7392/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8613
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8613/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8613/comments
https://api.github.com/repos/ollama/ollama/issues/8613/events
https://github.com/ollama/ollama/issues/8613
2,813,831,969
I_kwDOJ0Z1Ps6nt6sh
8,613
[v0.5.4] Download timeouts cause download cache corruption. Any download that needs to be retried by re-running ollama ends up corrupted at 100% download (file sha256-sha256hash-partial-0 not found).
{ "login": "esperanza-esperanza", "id": 196695882, "node_id": "U_kgDOC7lXSg", "avatar_url": "https://avatars.githubusercontent.com/u/196695882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/esperanza-esperanza", "html_url": "https://github.com/esperanza-esperanza", "followers_url": "https://api.github.com/users/esperanza-esperanza/followers", "following_url": "https://api.github.com/users/esperanza-esperanza/following{/other_user}", "gists_url": "https://api.github.com/users/esperanza-esperanza/gists{/gist_id}", "starred_url": "https://api.github.com/users/esperanza-esperanza/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/esperanza-esperanza/subscriptions", "organizations_url": "https://api.github.com/users/esperanza-esperanza/orgs", "repos_url": "https://api.github.com/users/esperanza-esperanza/repos", "events_url": "https://api.github.com/users/esperanza-esperanza/events{/privacy}", "received_events_url": "https://api.github.com/users/esperanza-esperanza/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
2
2025-01-27T19:04:50
2025-01-27T19:40:50
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Running ollama through Alpaca. I'm aware this is a separate project and will mirror the bug report there. ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version 0.5.4
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8613/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5596
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5596/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5596/comments
https://api.github.com/repos/ollama/ollama/issues/5596/events
https://github.com/ollama/ollama/issues/5596
2,400,470,468
I_kwDOJ0Z1Ps6PFEXE
5,596
Version 0.2.1 can't run glm4
{ "login": "qiulaidongfeng", "id": 96758349, "node_id": "U_kgDOBcRqTQ", "avatar_url": "https://avatars.githubusercontent.com/u/96758349?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qiulaidongfeng", "html_url": "https://github.com/qiulaidongfeng", "followers_url": "https://api.github.com/users/qiulaidongfeng/followers", "following_url": "https://api.github.com/users/qiulaidongfeng/following{/other_user}", "gists_url": "https://api.github.com/users/qiulaidongfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/qiulaidongfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qiulaidongfeng/subscriptions", "organizations_url": "https://api.github.com/users/qiulaidongfeng/orgs", "repos_url": "https://api.github.com/users/qiulaidongfeng/repos", "events_url": "https://api.github.com/users/qiulaidongfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/qiulaidongfeng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
4
2024-07-10T11:12:41
2024-07-11T03:58:29
2024-07-11T03:58:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Running `ollama run glm4` fails with: Error: this model is not supported by your version of Ollama. You may need to upgrade ### OS Windows ### GPU AMD ### CPU AMD ### Ollama version ollama version is 0.2.1
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5596/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8665
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8665/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8665/comments
https://api.github.com/repos/ollama/ollama/issues/8665/events
https://github.com/ollama/ollama/pull/8665
2,818,502,333
PR_kwDOJ0Z1Ps6JYJaA
8,665
Fix /api/create status code
{ "login": "canpacis", "id": 37307107, "node_id": "MDQ6VXNlcjM3MzA3MTA3", "avatar_url": "https://avatars.githubusercontent.com/u/37307107?v=4", "gravatar_id": "", "url": "https://api.github.com/users/canpacis", "html_url": "https://github.com/canpacis", "followers_url": "https://api.github.com/users/canpacis/followers", "following_url": "https://api.github.com/users/canpacis/following{/other_user}", "gists_url": "https://api.github.com/users/canpacis/gists{/gist_id}", "starred_url": "https://api.github.com/users/canpacis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/canpacis/subscriptions", "organizations_url": "https://api.github.com/users/canpacis/orgs", "repos_url": "https://api.github.com/users/canpacis/repos", "events_url": "https://api.github.com/users/canpacis/events{/privacy}", "received_events_url": "https://api.github.com/users/canpacis/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
1
2025-01-29T15:12:55
2025-01-29T23:07:19
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8665", "html_url": "https://github.com/ollama/ollama/pull/8665", "diff_url": "https://github.com/ollama/ollama/pull/8665.diff", "patch_url": "https://github.com/ollama/ollama/pull/8665.patch", "merged_at": null }
The server just bails out without proper HTTP error codes inside the goroutine in the /api/create route. I added a simple abort function to write the proper status code and send the `gin.H` map. I also changed the error name and message to be grammatically correct, but that's a nitpick; I wouldn't wanna be rude 🙃
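For illustration, a minimal sketch of the kind of helper this describes, assuming a plain gin handler; the handler shape and field names are assumptions, not the actual patch.

```
package main

import (
	"errors"
	"net/http"

	"github.com/gin-gonic/gin"
)

// abort writes a proper HTTP status code plus a JSON error body instead of
// returning from the goroutine with no response written.
func abort(c *gin.Context, status int, err error) {
	c.AbortWithStatusJSON(status, gin.H{"error": err.Error()})
}

func createHandler(c *gin.Context) {
	var req struct {
		Model string `json:"model"`
	}
	if err := c.ShouldBindJSON(&req); err != nil {
		abort(c, http.StatusBadRequest, err) // 400 with a body, not a silent return
		return
	}
	if req.Model == "" {
		abort(c, http.StatusBadRequest, errors.New("model is required"))
		return
	}
	c.JSON(http.StatusOK, gin.H{"status": "success"})
}

func main() {
	r := gin.Default()
	r.POST("/api/create", createHandler)
	r.Run(":11434")
}
```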
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8665/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5709
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5709/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5709/comments
https://api.github.com/repos/ollama/ollama/issues/5709/events
https://github.com/ollama/ollama/pull/5709
2,409,729,362
PR_kwDOJ0Z1Ps51cMCo
5,709
Add Metrics to `api/embed` response
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-07-15T22:10:46
2024-07-30T20:12:23
2024-07-30T20:12:21
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5709", "html_url": "https://github.com/ollama/ollama/pull/5709", "diff_url": "https://github.com/ollama/ollama/pull/5709.diff", "patch_url": "https://github.com/ollama/ollama/pull/5709.patch", "merged_at": "2024-07-30T20:12:21" }
"timings" is returned per request_completion in server.cpp, which must be aggregated to return metrics for a batch of completions. supporting: prompt_eval_count (total number of tokens evaluated), load duration, total duration
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5709/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2158
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2158/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2158/comments
https://api.github.com/repos/ollama/ollama/issues/2158/events
https://github.com/ollama/ollama/issues/2158
2,096,237,679
I_kwDOJ0Z1Ps588gxv
2,158
Seed option is not working on API
{ "login": "Juliano-uCondo", "id": 153868863, "node_id": "U_kgDOCSvaPw", "avatar_url": "https://avatars.githubusercontent.com/u/153868863?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Juliano-uCondo", "html_url": "https://github.com/Juliano-uCondo", "followers_url": "https://api.github.com/users/Juliano-uCondo/followers", "following_url": "https://api.github.com/users/Juliano-uCondo/following{/other_user}", "gists_url": "https://api.github.com/users/Juliano-uCondo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Juliano-uCondo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Juliano-uCondo/subscriptions", "organizations_url": "https://api.github.com/users/Juliano-uCondo/orgs", "repos_url": "https://api.github.com/users/Juliano-uCondo/repos", "events_url": "https://api.github.com/users/Juliano-uCondo/events{/privacy}", "received_events_url": "https://api.github.com/users/Juliano-uCondo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-01-23T14:40:32
2024-01-23T18:25:52
2024-01-23T18:25:52
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Even with the seed option configured, the API returns a different result for each request. I'm using version 0.1.20.
```
{
  "model": "mistral",
  "stream": false,
  "options": { "seed": 0 },
  "prompt": "Why is the sky blue?"
}
```
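A note on the resolution: per the ollama FAQ, a fixed seed needs to be paired with `"temperature": 0` for repeatable output. A minimal Go sketch of such a request, assuming a local server on the default port:

```
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Seed alone is not enough; temperature 0 removes the remaining sampling
	// randomness so identical requests return identical responses.
	body := []byte(`{
		"model": "mistral",
		"prompt": "Why is the sky blue?",
		"stream": false,
		"options": {"seed": 42, "temperature": 0}
	}`)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```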
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2158/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1419
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1419/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1419/comments
https://api.github.com/repos/ollama/ollama/issues/1419/events
https://github.com/ollama/ollama/pull/1419
2,031,418,323
PR_kwDOJ0Z1Ps5hdVaY
1,419
Simple chat example for typescript
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.github.com/users/technovangelist/followers", "following_url": "https://api.github.com/users/technovangelist/following{/other_user}", "gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}", "starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions", "organizations_url": "https://api.github.com/users/technovangelist/orgs", "repos_url": "https://api.github.com/users/technovangelist/repos", "events_url": "https://api.github.com/users/technovangelist/events{/privacy}", "received_events_url": "https://api.github.com/users/technovangelist/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-12-07T19:49:25
2023-12-07T22:42:24
2023-12-07T22:42:24
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1419", "html_url": "https://github.com/ollama/ollama/pull/1419", "diff_url": "https://github.com/ollama/ollama/pull/1419.diff", "patch_url": "https://github.com/ollama/ollama/pull/1419.patch", "merged_at": "2023-12-07T22:42:24" }
A simple example of the chat endpoint
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.github.com/users/technovangelist/followers", "following_url": "https://api.github.com/users/technovangelist/following{/other_user}", "gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}", "starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions", "organizations_url": "https://api.github.com/users/technovangelist/orgs", "repos_url": "https://api.github.com/users/technovangelist/repos", "events_url": "https://api.github.com/users/technovangelist/events{/privacy}", "received_events_url": "https://api.github.com/users/technovangelist/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1419/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6735
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6735/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6735/comments
https://api.github.com/repos/ollama/ollama/issues/6735/events
https://github.com/ollama/ollama/pull/6735
2,517,822,037
PR_kwDOJ0Z1Ps57DuU5
6,735
runner.go: Prompt caching
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users/jessegross/followers", "following_url": "https://api.github.com/users/jessegross/following{/other_user}", "gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}", "starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jessegross/subscriptions", "organizations_url": "https://api.github.com/users/jessegross/orgs", "repos_url": "https://api.github.com/users/jessegross/repos", "events_url": "https://api.github.com/users/jessegross/events{/privacy}", "received_events_url": "https://api.github.com/users/jessegross/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-09-10T21:06:37
2024-09-11T03:45:02
2024-09-11T03:45:00
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6735", "html_url": "https://github.com/ollama/ollama/pull/6735", "diff_url": "https://github.com/ollama/ollama/pull/6735.diff", "patch_url": "https://github.com/ollama/ollama/pull/6735.patch", "merged_at": "2024-09-11T03:45:00" }
Currently, KV cache entries from a sequence are discarded at the end of each processing run. In a typical chat conversation, this results in each message taking longer and longer to process as the entire history of the conversation needs to be replayed. Prompt caching retains the KV entries as long as possible so that we only need to process the newest message in the conversation, at least until there are too many simultaneous conversations and something needs to be evicted. (A toy illustration of this prefix reuse follows this record.)
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users/jessegross/followers", "following_url": "https://api.github.com/users/jessegross/following{/other_user}", "gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}", "starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jessegross/subscriptions", "organizations_url": "https://api.github.com/users/jessegross/orgs", "repos_url": "https://api.github.com/users/jessegross/repos", "events_url": "https://api.github.com/users/jessegross/events{/privacy}", "received_events_url": "https://api.github.com/users/jessegross/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6735/timeline
null
null
true
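The retention behavior described in that PR body can be illustrated with a toy prefix cache. This is an illustrative Python sketch only (the token strings and the `process` helper are made up), not the actual runner.go implementation:

```python
def common_prefix_len(a: list[str], b: list[str]) -> int:
    """Length of the shared prefix between the cached tokens and a new prompt."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

cache: list[str] = []  # stands in for retained KV cache entries

def process(prompt_tokens: list[str]) -> int:
    """Return how many tokens actually need to be evaluated this run."""
    global cache
    reused = common_prefix_len(cache, prompt_tokens)
    cache = prompt_tokens  # retain the whole prompt for the next turn
    return len(prompt_tokens) - reused

turn1 = ["<user>", "hi", "<assistant>", "hello"]
print(process(turn1))                        # 4: nothing cached yet
print(process(turn1 + ["<user>", "bye"]))    # 2: only the new suffix
```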
https://api.github.com/repos/ollama/ollama/issues/2263
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2263/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2263/comments
https://api.github.com/repos/ollama/ollama/issues/2263/events
https://github.com/ollama/ollama/pull/2263
2,106,748,706
PR_kwDOJ0Z1Ps5lZLEE
2,263
Bump llama.cpp to b1999
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-01-30T00:57:33
2024-01-31T16:39:44
2024-01-31T16:39:41
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2263", "html_url": "https://github.com/ollama/ollama/pull/2263", "diff_url": "https://github.com/ollama/ollama/pull/2263.diff", "patch_url": "https://github.com/ollama/ollama/pull/2263.patch", "merged_at": "2024-01-31T16:39:41" }
This requires an upstream change to support graceful termination, carried as a patch. Tracking branches for the 2 patches: - 01-cache.diff - https://github.com/dhiltgen/llama.cpp/tree/kv_cache - 02-shutdown.diff - https://github.com/dhiltgen/llama.cpp/tree/server_shutdown I'm going to mark it draft until I can run more testing (so far the happy path on Windows, Mac and Linux looks good)
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2263/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2263/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2239
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2239/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2239/comments
https://api.github.com/repos/ollama/ollama/issues/2239/events
https://github.com/ollama/ollama/issues/2239
2,104,043,986
I_kwDOJ0Z1Ps59aSnS
2,239
stablelm2 is missing in the homepage list
{ "login": "AntDX316", "id": 34279421, "node_id": "MDQ6VXNlcjM0Mjc5NDIx", "avatar_url": "https://avatars.githubusercontent.com/u/34279421?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AntDX316", "html_url": "https://github.com/AntDX316", "followers_url": "https://api.github.com/users/AntDX316/followers", "following_url": "https://api.github.com/users/AntDX316/following{/other_user}", "gists_url": "https://api.github.com/users/AntDX316/gists{/gist_id}", "starred_url": "https://api.github.com/users/AntDX316/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AntDX316/subscriptions", "organizations_url": "https://api.github.com/users/AntDX316/orgs", "repos_url": "https://api.github.com/users/AntDX316/repos", "events_url": "https://api.github.com/users/AntDX316/events{/privacy}", "received_events_url": "https://api.github.com/users/AntDX316/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
4
2024-01-28T08:46:34
2024-03-12T18:34:22
2024-03-12T18:34:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
stablelm2 is missing in the homepage model list; who knows what else is missing.
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2239/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1145
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1145/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1145/comments
https://api.github.com/repos/ollama/ollama/issues/1145/events
https://github.com/ollama/ollama/issues/1145
1,995,757,401
I_kwDOJ0Z1Ps529NdZ
1,145
Food for thought use cases: Github Actions :octocat:
{ "login": "marcellodesales", "id": 131457, "node_id": "MDQ6VXNlcjEzMTQ1Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/131457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcellodesales", "html_url": "https://github.com/marcellodesales", "followers_url": "https://api.github.com/users/marcellodesales/followers", "following_url": "https://api.github.com/users/marcellodesales/following{/other_user}", "gists_url": "https://api.github.com/users/marcellodesales/gists{/gist_id}", "starred_url": "https://api.github.com/users/marcellodesales/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcellodesales/subscriptions", "organizations_url": "https://api.github.com/users/marcellodesales/orgs", "repos_url": "https://api.github.com/users/marcellodesales/repos", "events_url": "https://api.github.com/users/marcellodesales/events{/privacy}", "received_events_url": "https://api.github.com/users/marcellodesales/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-11-15T23:34:21
2024-02-20T01:10:05
2024-02-20T01:10:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I've been working on the implementation of DevSecOps platforms, and I think I came up with a GitHub Action that can execute the models... Obviously: * You must have GitHub Actions runners powered by GPUs * You can implement pretty much anything with the model, given you have the file system and the containers to implement the business logic * I have embedded the server and the pull of the models in caches to try to speed up the process * The intent is to use any model for source code, software engineering, cloud, etc... Just food for thought... > **NOTES**: I still run into the problems from #676 and #1072; for this reason, I build a data container with the models (docker image digests) and push them to a docker registry so that I can bypass the 403 with a cached version of the models... # 🧠 Select a Model ![Screenshot 2023-11-15 at 3 31 15 PM](https://github.com/jmorganca/ollama/assets/131457/3194554d-4ea6-48eb-a678-6a76665bcab3) ![Screenshot 2023-11-15 at 1 10 43 PM](https://github.com/jmorganca/ollama/assets/131457/7e0d28e5-9eb5-42bc-89c8-7220d9fbe944) # 🏃‍♂️ Running ![Screenshot 2023-11-15 at 3 30 47 PM](https://github.com/jmorganca/ollama/assets/131457/1f0b6dc8-251c-4628-af98-1eb9fd57cd6d) # 🔢 Results ![Screenshot 2023-11-15 at 3 31 31 PM](https://github.com/jmorganca/ollama/assets/131457/c48a0711-0d5b-4e5a-9916-535628741a0b)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1145/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4354
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4354/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4354/comments
https://api.github.com/repos/ollama/ollama/issues/4354/events
https://github.com/ollama/ollama/issues/4354
2,290,822,137
I_kwDOJ0Z1Ps6Iiyv5
4,354
Models often don't load on versions after 0.1.132
{ "login": "ProjectMoon", "id": 183856, "node_id": "MDQ6VXNlcjE4Mzg1Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/183856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ProjectMoon", "html_url": "https://github.com/ProjectMoon", "followers_url": "https://api.github.com/users/ProjectMoon/followers", "following_url": "https://api.github.com/users/ProjectMoon/following{/other_user}", "gists_url": "https://api.github.com/users/ProjectMoon/gists{/gist_id}", "starred_url": "https://api.github.com/users/ProjectMoon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ProjectMoon/subscriptions", "organizations_url": "https://api.github.com/users/ProjectMoon/orgs", "repos_url": "https://api.github.com/users/ProjectMoon/repos", "events_url": "https://api.github.com/users/ProjectMoon/events{/privacy}", "received_events_url": "https://api.github.com/users/ProjectMoon/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6849881759, "node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw", "url": "https://api.github.com/repos/ollama/ollama/labels/memory", "name": "memory", "color": "5017EA", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
9
2024-05-11T10:17:30
2024-10-16T18:39:33
2024-10-16T18:39:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Many models, in particular codegemma 1.1 7b q8_0, fail to load for various reasons on versions after 0.1.132. They work fine on 132. I don't have the logs on hand at the moment, but can add them later. The errors relate to running out of memory and being unable to reset the GPU VRAM. This is using ROCm (the ollama distribution of it, from the tar.gz) on an AMD RX 6800 XT. Is there a centralized issue for this already? ### OS _No response_ ### GPU AMD ### CPU AMD ### Ollama version 0.1.133-0.1.136
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4354/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/4354/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1244
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1244/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1244/comments
https://api.github.com/repos/ollama/ollama/issues/1244/events
https://github.com/ollama/ollama/pull/1244
2,006,873,674
PR_kwDOJ0Z1Ps5gKOHw
1,244
do not fail on unsupported template variables
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-11-22T18:11:04
2023-12-06T21:23:05
2023-12-06T21:23:04
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1244", "html_url": "https://github.com/ollama/ollama/pull/1244", "diff_url": "https://github.com/ollama/ollama/pull/1244.diff", "patch_url": "https://github.com/ollama/ollama/pull/1244.patch", "merged_at": "2023-12-06T21:23:04" }
- do not fail on unsupported parameters in the model template; resolves #1242
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1244/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5660
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5660/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5660/comments
https://api.github.com/repos/ollama/ollama/issues/5660/events
https://github.com/ollama/ollama/issues/5660
2,406,550,200
I_kwDOJ0Z1Ps6PcQq4
5,660
Ollama 0.2.2 cannot read the system prompt when invoking the API using Python.
{ "login": "letdo1945", "id": 64049222, "node_id": "MDQ6VXNlcjY0MDQ5MjIy", "avatar_url": "https://avatars.githubusercontent.com/u/64049222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/letdo1945", "html_url": "https://github.com/letdo1945", "followers_url": "https://api.github.com/users/letdo1945/followers", "following_url": "https://api.github.com/users/letdo1945/following{/other_user}", "gists_url": "https://api.github.com/users/letdo1945/gists{/gist_id}", "starred_url": "https://api.github.com/users/letdo1945/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/letdo1945/subscriptions", "organizations_url": "https://api.github.com/users/letdo1945/orgs", "repos_url": "https://api.github.com/users/letdo1945/repos", "events_url": "https://api.github.com/users/letdo1945/events{/privacy}", "received_events_url": "https://api.github.com/users/letdo1945/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-07-13T00:54:34
2024-07-13T05:25:11
2024-07-13T05:25:11
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? model: qwen2 & glm4. After the Ollama update, when I invoke Ollama through Python, the model is unable to read the system prompt. ``` import ollama def LLM_Process(model, sys_prom, usr_prom): messages = [ {'role': 'user', 'content': usr_prom}, {'role': 'system', 'content': sys_prom} ] resp = ollama.chat(model, messages) try: out = resp['message']['content'] return out except AttributeError: # the message may be too long or contain disallowed content print("Skipping.") return None ``` (a sketch with the conventional message ordering follows this record) ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.2.2
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5660/timeline
null
completed
false
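For comparison with the snippet in that report, a sketch of the conventional message ordering, assuming the official `ollama` Python package is installed; placing the system message before the user message may avoid it being dropped by stricter chat templates:

```python
import ollama  # assumes the official ollama Python package

def llm_process(model: str, sys_prompt: str, usr_prompt: str):
    # System message first, user message second: the conventional ordering.
    messages = [
        {"role": "system", "content": sys_prompt},
        {"role": "user", "content": usr_prompt},
    ]
    resp = ollama.chat(model, messages=messages)
    return resp["message"]["content"]

print(llm_process("qwen2", "Answer in one word.", "What color is the sky?"))
```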
https://api.github.com/repos/ollama/ollama/issues/3348
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3348/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3348/comments
https://api.github.com/repos/ollama/ollama/issues/3348/events
https://github.com/ollama/ollama/pull/3348
2,206,679,955
PR_kwDOJ0Z1Ps5qtThd
3,348
Bump llama.cpp to b2527
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-03-25T20:48:19
2024-03-25T21:15:56
2024-03-25T21:15:53
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3348", "html_url": "https://github.com/ollama/ollama/pull/3348", "diff_url": "https://github.com/ollama/ollama/pull/3348.diff", "patch_url": "https://github.com/ollama/ollama/pull/3348.patch", "merged_at": "2024-03-25T21:15:53" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3348/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1675
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1675/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1675/comments
https://api.github.com/repos/ollama/ollama/issues/1675/events
https://github.com/ollama/ollama/pull/1675
2,054,195,066
PR_kwDOJ0Z1Ps5iqpZv
1,675
Quiet down llama.cpp logging by default
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-12-22T16:48:08
2023-12-22T16:57:21
2023-12-22T16:57:18
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1675", "html_url": "https://github.com/ollama/ollama/pull/1675", "diff_url": "https://github.com/ollama/ollama/pull/1675.diff", "patch_url": "https://github.com/ollama/ollama/pull/1675.patch", "merged_at": "2023-12-22T16:57:18" }
By default builds will now produce non-debug and non-verbose binaries. To enable verbose logs in llama.cpp and debug symbols in the native code, set `CGO_CFLAGS=-g`
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1675/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3096
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3096/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3096/comments
https://api.github.com/repos/ollama/ollama/issues/3096/events
https://github.com/ollama/ollama/issues/3096
2,183,322,186
I_kwDOJ0Z1Ps6CItpK
3,096
Is it possible to download the models from browser?
{ "login": "OguzcanOzdemir", "id": 24637523, "node_id": "MDQ6VXNlcjI0NjM3NTIz", "avatar_url": "https://avatars.githubusercontent.com/u/24637523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OguzcanOzdemir", "html_url": "https://github.com/OguzcanOzdemir", "followers_url": "https://api.github.com/users/OguzcanOzdemir/followers", "following_url": "https://api.github.com/users/OguzcanOzdemir/following{/other_user}", "gists_url": "https://api.github.com/users/OguzcanOzdemir/gists{/gist_id}", "starred_url": "https://api.github.com/users/OguzcanOzdemir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OguzcanOzdemir/subscriptions", "organizations_url": "https://api.github.com/users/OguzcanOzdemir/orgs", "repos_url": "https://api.github.com/users/OguzcanOzdemir/repos", "events_url": "https://api.github.com/users/OguzcanOzdemir/events{/privacy}", "received_events_url": "https://api.github.com/users/OguzcanOzdemir/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
5
2024-03-13T07:44:28
2024-04-08T16:37:08
2024-04-08T16:37:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, I need to download the models from a browser. Is it possible?
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3096/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3096/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1784
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1784/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1784/comments
https://api.github.com/repos/ollama/ollama/issues/1784/events
https://github.com/ollama/ollama/issues/1784
2,065,916,415
I_kwDOJ0Z1Ps57I2H_
1,784
Simpler UI / CLI for predicting model performance on user's device?
{ "login": "TahaScripts", "id": 98236583, "node_id": "U_kgDOBdr4pw", "avatar_url": "https://avatars.githubusercontent.com/u/98236583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TahaScripts", "html_url": "https://github.com/TahaScripts", "followers_url": "https://api.github.com/users/TahaScripts/followers", "following_url": "https://api.github.com/users/TahaScripts/following{/other_user}", "gists_url": "https://api.github.com/users/TahaScripts/gists{/gist_id}", "starred_url": "https://api.github.com/users/TahaScripts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TahaScripts/subscriptions", "organizations_url": "https://api.github.com/users/TahaScripts/orgs", "repos_url": "https://api.github.com/users/TahaScripts/repos", "events_url": "https://api.github.com/users/TahaScripts/events{/privacy}", "received_events_url": "https://api.github.com/users/TahaScripts/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-01-04T16:01:31
2024-01-04T17:47:14
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I was wondering if the current Ollama system grabs any system information regarding CPU and RAM capacities. Especially since a majority of users are on Mac, there's a finite number of hardware specs for Ollama to recognize. Ollama could then automatically recommend which models will run best on the user's Mac. I think this would be easier for first-time users and would promote Ollama's accessibility for non-technical (average macOS) users. If this doesn't exist, I would be more than happy to make a branch with my suggestion. If the underlying CLI/code to recognize device GPU/CPU limitations already exists, I'd love to take a stab at incorporating it into the UI. I'd appreciate any direction for which part of the codebase to look at. (A rough sketch of this idea follows this record.)
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1784/timeline
null
null
false
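As a rough starting point for the idea in that issue, a hedged sketch using the third-party psutil package; the RAM thresholds are illustrative rules of thumb, not official Ollama guidance:

```python
import psutil  # third-party; used only to read total RAM

def recommend_model_size() -> str:
    """Map total system RAM to a rough model-size class."""
    ram_gib = psutil.virtual_memory().total / 2**30
    if ram_gib >= 64:
        return "70b-class models"
    if ram_gib >= 16:
        return "13b-class models"
    if ram_gib >= 8:
        return "7b-class models"
    return "small quantized models (3b or less)"

ram_gib = psutil.virtual_memory().total / 2**30
print(f"~{ram_gib:.0f} GiB RAM detected -> try {recommend_model_size()}")
```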
https://api.github.com/repos/ollama/ollama/issues/634
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/634/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/634/comments
https://api.github.com/repos/ollama/ollama/issues/634/events
https://github.com/ollama/ollama/pull/634
1,918,029,920
PR_kwDOJ0Z1Ps5beMgW
634
use int64 consistently
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-09-28T18:07:46
2023-09-28T21:17:49
2023-09-28T21:17:47
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/634", "html_url": "https://github.com/ollama/ollama/pull/634", "diff_url": "https://github.com/ollama/ollama/pull/634.diff", "patch_url": "https://github.com/ollama/ollama/pull/634.patch", "merged_at": "2023-09-28T21:17:47" }
this reduces type conversion
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/634/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7099
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7099/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7099/comments
https://api.github.com/repos/ollama/ollama/issues/7099/events
https://github.com/ollama/ollama/issues/7099
2,565,531,603
I_kwDOJ0Z1Ps6Y6ufT
7,099
Integrate in Chrome, Chrome Extension
{ "login": "kishanios123", "id": 60137209, "node_id": "MDQ6VXNlcjYwMTM3MjA5", "avatar_url": "https://avatars.githubusercontent.com/u/60137209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kishanios123", "html_url": "https://github.com/kishanios123", "followers_url": "https://api.github.com/users/kishanios123/followers", "following_url": "https://api.github.com/users/kishanios123/following{/other_user}", "gists_url": "https://api.github.com/users/kishanios123/gists{/gist_id}", "starred_url": "https://api.github.com/users/kishanios123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kishanios123/subscriptions", "organizations_url": "https://api.github.com/users/kishanios123/orgs", "repos_url": "https://api.github.com/users/kishanios123/repos", "events_url": "https://api.github.com/users/kishanios123/events{/privacy}", "received_events_url": "https://api.github.com/users/kishanios123/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-10-04T06:09:31
2024-12-02T14:34:54
2024-12-02T14:34:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi Ollama team, I’d like to suggest a feature to integrate Ollama with a Chrome extension that enables auto-replies directly within email platforms (Gmail, Outlook) and other text fields (social media, messaging apps, etc.). Main Benefit: Users could generate replies without leaving the current tab or copy-pasting content back and forth. This would streamline workflows, allowing users to instantly create context-aware responses, improving productivity without interrupting their browser-based tasks. Workflow: Install the Ollama Chrome extension. Click a “Generate Auto-Reply” button within any text box (e.g., Gmail or social media). Ollama generates a response based on the context of the conversation. This would be especially useful for customer support, sales, or anyone handling repetitive communications. Thanks for considering this!
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7099/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7099/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/6978
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6978/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6978/comments
https://api.github.com/repos/ollama/ollama/issues/6978/events
https://github.com/ollama/ollama/issues/6978
2,550,242,137
I_kwDOJ0Z1Ps6YAZtZ
6,978
rerank model
{ "login": "HARISHSENTHIL", "id": 99972344, "node_id": "U_kgDOBfV0-A", "avatar_url": "https://avatars.githubusercontent.com/u/99972344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HARISHSENTHIL", "html_url": "https://github.com/HARISHSENTHIL", "followers_url": "https://api.github.com/users/HARISHSENTHIL/followers", "following_url": "https://api.github.com/users/HARISHSENTHIL/following{/other_user}", "gists_url": "https://api.github.com/users/HARISHSENTHIL/gists{/gist_id}", "starred_url": "https://api.github.com/users/HARISHSENTHIL/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HARISHSENTHIL/subscriptions", "organizations_url": "https://api.github.com/users/HARISHSENTHIL/orgs", "repos_url": "https://api.github.com/users/HARISHSENTHIL/repos", "events_url": "https://api.github.com/users/HARISHSENTHIL/events{/privacy}", "received_events_url": "https://api.github.com/users/HARISHSENTHIL/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
3
2024-09-26T10:57:42
2024-12-02T23:02:07
2024-12-02T23:02:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
How can I add the HF BAAI/bge-reranker-v2-m3 rerank model to Ollama? While trying this approach I am getting an architecture error. Can anyone help resolve this issue?
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6978/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3731
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3731/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3731/comments
https://api.github.com/repos/ollama/ollama/issues/3731/events
https://github.com/ollama/ollama/issues/3731
2,250,334,080
I_kwDOJ0Z1Ps6GIV-A
3,731
Startup error after upgrading to the latest version
{ "login": "hyanqing1", "id": 26663452, "node_id": "MDQ6VXNlcjI2NjYzNDUy", "avatar_url": "https://avatars.githubusercontent.com/u/26663452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hyanqing1", "html_url": "https://github.com/hyanqing1", "followers_url": "https://api.github.com/users/hyanqing1/followers", "following_url": "https://api.github.com/users/hyanqing1/following{/other_user}", "gists_url": "https://api.github.com/users/hyanqing1/gists{/gist_id}", "starred_url": "https://api.github.com/users/hyanqing1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hyanqing1/subscriptions", "organizations_url": "https://api.github.com/users/hyanqing1/orgs", "repos_url": "https://api.github.com/users/hyanqing1/repos", "events_url": "https://api.github.com/users/hyanqing1/events{/privacy}", "received_events_url": "https://api.github.com/users/hyanqing1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2024-04-18T10:31:38
2024-04-18T10:32:54
2024-04-18T10:32:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I upgraded to the latest version 0.1.32 and it now errors on startup. The error is: Error: llama runner process no longer running: 3221225785 I then reinstalled version 0.1.31 and it started normally. My system is Windows 10. ### OS Windows ### GPU Intel ### CPU Intel ### Ollama version 0.1.32
{ "login": "hyanqing1", "id": 26663452, "node_id": "MDQ6VXNlcjI2NjYzNDUy", "avatar_url": "https://avatars.githubusercontent.com/u/26663452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hyanqing1", "html_url": "https://github.com/hyanqing1", "followers_url": "https://api.github.com/users/hyanqing1/followers", "following_url": "https://api.github.com/users/hyanqing1/following{/other_user}", "gists_url": "https://api.github.com/users/hyanqing1/gists{/gist_id}", "starred_url": "https://api.github.com/users/hyanqing1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hyanqing1/subscriptions", "organizations_url": "https://api.github.com/users/hyanqing1/orgs", "repos_url": "https://api.github.com/users/hyanqing1/repos", "events_url": "https://api.github.com/users/hyanqing1/events{/privacy}", "received_events_url": "https://api.github.com/users/hyanqing1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3731/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2142
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2142/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2142/comments
https://api.github.com/repos/ollama/ollama/issues/2142/events
https://github.com/ollama/ollama/pull/2142
2,094,667,056
PR_kwDOJ0Z1Ps5kwv9m
2,142
Debug logging on init failure
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-01-22T20:09:52
2024-01-22T20:29:26
2024-01-22T20:29:23
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2142", "html_url": "https://github.com/ollama/ollama/pull/2142", "diff_url": "https://github.com/ollama/ollama/pull/2142.diff", "patch_url": "https://github.com/ollama/ollama/pull/2142.patch", "merged_at": "2024-01-22T20:29:23" }
One class of error we're seeing on ROCm looks like this in the log... ``` 2024/01/21 22:00:15 dyn_ext_server.go:90: INFO Loading Dynamic llm server: /tmp/ollama1546965028/rocm_v5/libext_server.so 2024/01/21 22:00:15 dyn_ext_server.go:139: INFO Initializing llama server free(): invalid pointer ``` I'm not sure yet what the root cause is, but hopefully this debug log will yield some more insight.
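For anyone trying to gather this extra detail themselves, the debug output can be enabled with an environment variable before reproducing the failure; a minimal sketch (the model name is illustrative):

```console
# Run the server with verbose logging and capture it, then trigger
# the failing load from another terminal.
OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log
ollama run llama2 "hello"
```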
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2142/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2142/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3762
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3762/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3762/comments
https://api.github.com/repos/ollama/ollama/issues/3762/events
https://github.com/ollama/ollama/pull/3762
2,253,761,317
PR_kwDOJ0Z1Ps5tNpDc
3,762
chore(deps): Update dependencies
{ "login": "reneleonhardt", "id": 65483435, "node_id": "MDQ6VXNlcjY1NDgzNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/65483435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reneleonhardt", "html_url": "https://github.com/reneleonhardt", "followers_url": "https://api.github.com/users/reneleonhardt/followers", "following_url": "https://api.github.com/users/reneleonhardt/following{/other_user}", "gists_url": "https://api.github.com/users/reneleonhardt/gists{/gist_id}", "starred_url": "https://api.github.com/users/reneleonhardt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reneleonhardt/subscriptions", "organizations_url": "https://api.github.com/users/reneleonhardt/orgs", "repos_url": "https://api.github.com/users/reneleonhardt/repos", "events_url": "https://api.github.com/users/reneleonhardt/events{/privacy}", "received_events_url": "https://api.github.com/users/reneleonhardt/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-04-19T19:18:34
2024-11-24T22:42:58
2024-11-24T22:42:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3762", "html_url": "https://github.com/ollama/ollama/pull/3762", "diff_url": "https://github.com/ollama/ollama/pull/3762.diff", "patch_url": "https://github.com/ollama/ollama/pull/3762.patch", "merged_at": null }
Please note that most updates are minor, except macOS: 12 (2022) to 13 (2023). In any case, even on macOS 12 a much newer (default) Xcode 14.2 would be available, so why was the release recently downgraded by 2 major versions to Xcode 13.4? I could only find a pull request, but no issue... 🤔 https://github.com/actions/runner-images/blob/main/images/macos/macos-12-Readme.md#xcode And CUDA 12.4 would seem like low-hanging fruit (11.3 is from 2021)... or does the server version not matter for performance? 😅
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3762/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8593
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8593/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8593/comments
https://api.github.com/repos/ollama/ollama/issues/8593/events
https://github.com/ollama/ollama/issues/8593
2,811,574,264
I_kwDOJ0Z1Ps6nlTf4
8,593
ollama fails to detect old models after update
{ "login": "nevakrien", "id": 101988414, "node_id": "U_kgDOBhQ4Pg", "avatar_url": "https://avatars.githubusercontent.com/u/101988414?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nevakrien", "html_url": "https://github.com/nevakrien", "followers_url": "https://api.github.com/users/nevakrien/followers", "following_url": "https://api.github.com/users/nevakrien/following{/other_user}", "gists_url": "https://api.github.com/users/nevakrien/gists{/gist_id}", "starred_url": "https://api.github.com/users/nevakrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nevakrien/subscriptions", "organizations_url": "https://api.github.com/users/nevakrien/orgs", "repos_url": "https://api.github.com/users/nevakrien/repos", "events_url": "https://api.github.com/users/nevakrien/events{/privacy}", "received_events_url": "https://api.github.com/users/nevakrien/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2025-01-26T13:53:46
2025-01-26T14:39:27
2025-01-26T14:39:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? My setup uses a symlink for the directory holding Ollama models, and I think I have over a terabyte of model weights, so if there is a way to avoid downloading everything again I would be very happy. ### OS Linux ### GPU _No response_ ### CPU _No response_ ### Ollama version 0.5.7
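If the weights are already on disk, pointing the server at the existing store usually avoids re-downloading anything; a hedged sketch assuming the models live under `/mnt/big/ollama-models` (the path is illustrative):

```console
# Start the server against the existing model directory instead of
# the default ~/.ollama/models, then confirm the models reappear.
OLLAMA_MODELS=/mnt/big/ollama-models ollama serve
ollama list
```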
{ "login": "nevakrien", "id": 101988414, "node_id": "U_kgDOBhQ4Pg", "avatar_url": "https://avatars.githubusercontent.com/u/101988414?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nevakrien", "html_url": "https://github.com/nevakrien", "followers_url": "https://api.github.com/users/nevakrien/followers", "following_url": "https://api.github.com/users/nevakrien/following{/other_user}", "gists_url": "https://api.github.com/users/nevakrien/gists{/gist_id}", "starred_url": "https://api.github.com/users/nevakrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nevakrien/subscriptions", "organizations_url": "https://api.github.com/users/nevakrien/orgs", "repos_url": "https://api.github.com/users/nevakrien/repos", "events_url": "https://api.github.com/users/nevakrien/events{/privacy}", "received_events_url": "https://api.github.com/users/nevakrien/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8593/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8051
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8051/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8051/comments
https://api.github.com/repos/ollama/ollama/issues/8051/events
https://github.com/ollama/ollama/pull/8051
2,733,602,704
PR_kwDOJ0Z1Ps6E5MBB
8,051
feat: add option to specify runner name and path in env
{ "login": "thewh1teagle", "id": 61390950, "node_id": "MDQ6VXNlcjYxMzkwOTUw", "avatar_url": "https://avatars.githubusercontent.com/u/61390950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thewh1teagle", "html_url": "https://github.com/thewh1teagle", "followers_url": "https://api.github.com/users/thewh1teagle/followers", "following_url": "https://api.github.com/users/thewh1teagle/following{/other_user}", "gists_url": "https://api.github.com/users/thewh1teagle/gists{/gist_id}", "starred_url": "https://api.github.com/users/thewh1teagle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thewh1teagle/subscriptions", "organizations_url": "https://api.github.com/users/thewh1teagle/orgs", "repos_url": "https://api.github.com/users/thewh1teagle/repos", "events_url": "https://api.github.com/users/thewh1teagle/events{/privacy}", "received_events_url": "https://api.github.com/users/thewh1teagle/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
0
2024-12-11T17:45:06
2024-12-11T18:00:46
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8051", "html_url": "https://github.com/ollama/ollama/pull/8051", "diff_url": "https://github.com/ollama/ollama/pull/8051.diff", "patch_url": "https://github.com/ollama/ollama/pull/8051.patch", "merged_at": null }
Add option to specify custom runner path. This will be useful as a temporary solution for [using vulkan](https://github.com/ollama/ollama/pull/5059) until the related PR is merged. macOS: ```console git clone https://github.com/thewh1teagle/ollama -b feat/custom-runner-path cd ollama echo "Building darwin arm64" GOOS=darwin ARCH=arm64 GOARCH=arm64 make -j 8 dist OLLAMA_DEBUG=true OLLAMA_RUNNER_NAME="cpu" OLLAMA_RUNNER_PATH="/path/to/custom/runner" ./dist/darwin-arm64/bin/ollama serve ``` Windows: ```console # https://www.msys2.org/ C:\msys64\msys2_shell.cmd -here -no-start -defterm -clang64 pacman -S --needed $MINGW_PACKAGE_PREFIX-{go,clang,vulkan-devel,github-cli} make git git clone https://github.com/thewh1teagle/ollama -b feat/custom-runner-path cd ollama export GOROOT=/c/msys64/clang64/lib/go export CGO_ENABLED="1" export CC="clang" export CXX="clang++" make -j 8 go build . OLLAMA_DEBUG=true OLLAMA_RUNNER_NAME="cpu" OLLAMA_RUNNER_PATH="/path/to/runner.exe" ./ollama.exe serve ```
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8051/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8000
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8000/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8000/comments
https://api.github.com/repos/ollama/ollama/issues/8000/events
https://github.com/ollama/ollama/issues/8000
2,725,579,409
I_kwDOJ0Z1Ps6idQqR
8,000
Structured JSON does not handle arrays at the top level properly
{ "login": "scd31", "id": 57571338, "node_id": "MDQ6VXNlcjU3NTcxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/57571338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scd31", "html_url": "https://github.com/scd31", "followers_url": "https://api.github.com/users/scd31/followers", "following_url": "https://api.github.com/users/scd31/following{/other_user}", "gists_url": "https://api.github.com/users/scd31/gists{/gist_id}", "starred_url": "https://api.github.com/users/scd31/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scd31/subscriptions", "organizations_url": "https://api.github.com/users/scd31/orgs", "repos_url": "https://api.github.com/users/scd31/repos", "events_url": "https://api.github.com/users/scd31/events{/privacy}", "received_events_url": "https://api.github.com/users/scd31/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/users/ParthSareen/followers", "following_url": "https://api.github.com/users/ParthSareen/following{/other_user}", "gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions", "organizations_url": "https://api.github.com/users/ParthSareen/orgs", "repos_url": "https://api.github.com/users/ParthSareen/repos", "events_url": "https://api.github.com/users/ParthSareen/events{/privacy}", "received_events_url": "https://api.github.com/users/ParthSareen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/users/ParthSareen/followers", "following_url": "https://api.github.com/users/ParthSareen/following{/other_user}", "gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions", "organizations_url": "https://api.github.com/users/ParthSareen/orgs", "repos_url": "https://api.github.com/users/ParthSareen/repos", "events_url": "https://api.github.com/users/ParthSareen/events{/privacy}", "received_events_url": "https://api.github.com/users/ParthSareen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2024-12-08T22:55:25
2024-12-20T22:15:32
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? It looks like structured JSON is not respected when an array is specified at the top level. Example 1: Request: ```json { "model": "llama3.1", "messages": [ { "role": "system", "content": "Given a phrase, give a list of categories" }, { "role": "user", "content": "I watched a movie with friends and there was a cat in it" } ], "stream": false, "format": { "type": "array", "items": { "type": "string" } } } ``` Response: ```json { "model": "llama3.1", "created_at": "2024-12-08T22:46:35.146738527Z", "message": { "role": "assistant", "content": "[1] " }, "done_reason": "stop", "done": true, "total_duration": 1520253202, "load_duration": 29133573, "prompt_eval_count": 37, "prompt_eval_duration": 538000000, "eval_count": 5, "eval_duration": 951000000 } ``` Example 2: Request: ```json { "model": "llama3.1", "messages": [ { "role": "system", "content": "Given a phrase, give a list of categories. Respond in valid JSON" }, { "role": "user", "content": "I watched a movie with friends and there was a cat in it" } ], "stream": false, "format": { "type": "array", "items": { "type": "object", "items": {"name": {"type": "string"}, "description": {"type": "string"}}, "required": ["name", "description"] } } } ``` Response: ```json { "model": "llama3.1", "created_at": "2024-12-08T22:52:42.640180789Z", "message": { "role": "assistant", "content": "[\n {\n \"category\": \"Entertainment\"\n },\n {\n \"category\": \"Social\"\n },\n {\n \"category\": \"Animals\"\n }\n]" }, "done_reason": "stop", "done": true, "total_duration": 81840944323, "load_duration": 29780952, "prompt_eval_count": 42, "prompt_eval_duration": 72020000000, "eval_count": 38, "eval_duration": 9789000000 } ``` I realize these examples are kind of nonsensical because I'm asking the LLM to do stuff that doesn't really line up with the schema. But surely it should be impossible for it to disobey the grammar, right? It seems to do fine if the top property is an object instead of an array. ### OS Linux ### GPU _No response_ ### CPU AMD ### Ollama version 0.5.0
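Until top-level arrays are handled correctly, a common workaround is to wrap the array in an object property, since object-rooted schemas appear to be constrained properly; a minimal sketch against the local API (the `categories` property name is illustrative):

```console
# Root the schema at an object and nest the array under a property;
# the model is then forced to emit {"categories": [...]}.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    {"role": "user", "content": "I watched a movie with friends and there was a cat in it. List categories."}
  ],
  "stream": false,
  "format": {
    "type": "object",
    "properties": {
      "categories": {"type": "array", "items": {"type": "string"}}
    },
    "required": ["categories"]
  }
}'
```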
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8000/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8000/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7768
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7768/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7768/comments
https://api.github.com/repos/ollama/ollama/issues/7768/events
https://github.com/ollama/ollama/issues/7768
2,677,036,814
I_kwDOJ0Z1Ps6fkFcO
7,768
Model not loaded on all GPUs for load balancing
{ "login": "brauliobo", "id": 41740, "node_id": "MDQ6VXNlcjQxNzQw", "avatar_url": "https://avatars.githubusercontent.com/u/41740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brauliobo", "html_url": "https://github.com/brauliobo", "followers_url": "https://api.github.com/users/brauliobo/followers", "following_url": "https://api.github.com/users/brauliobo/following{/other_user}", "gists_url": "https://api.github.com/users/brauliobo/gists{/gist_id}", "starred_url": "https://api.github.com/users/brauliobo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brauliobo/subscriptions", "organizations_url": "https://api.github.com/users/brauliobo/orgs", "repos_url": "https://api.github.com/users/brauliobo/repos", "events_url": "https://api.github.com/users/brauliobo/events{/privacy}", "received_events_url": "https://api.github.com/users/brauliobo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-11-20T20:04:40
2024-11-20T20:46:24
2024-11-20T20:32:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I expect that on a multi-GPU system, with the docker container started with `--gpus all`, the model would be loaded on all GPUs to balance the request load between them. Output of `docker logs ollama`: ``` ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes ``` Output of `docker exec -it ollama nvidia-smi`: ``` Wed Nov 20 20:00:03 2024 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 565.57.01 Driver Version: 565.57.01 CUDA Version: 12.7 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 3060 On | 00000000:01:00.0 On | N/A | | 54% 67C P2 99W / 100W | 7203MiB / 12288MiB | 100% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ | 1 NVIDIA GeForce RTX 3060 On | 00000000:05:00.0 Off | N/A | | 41% 63C P2 99W / 100W | 8057MiB / 12288MiB | 100% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | 1 N/A N/A 52 C ...unners/cuda_v12/ollama_llama_server 3344MiB | +-----------------------------------------------------------------------------------------+ ``` ### OS Linux, Docker ### GPU Nvidia ### CPU AMD ### Ollama version 0.4.2 from docker
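For splitting a single loaded model across both cards (rather than running one copy per GPU, which Ollama's scheduler does not do), there is an environment variable that asks the scheduler to spread layers across all visible GPUs; a hedged sketch of the container invocation (image tag and volume name are illustrative):

```console
# OLLAMA_SCHED_SPREAD=1 spreads a model's layers across all GPUs even
# when the model would fit on one; without it, a model that fits on a
# single GPU is packed onto that GPU alone.
docker run -d --gpus all -e OLLAMA_SCHED_SPREAD=1 \
  -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```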
{ "login": "brauliobo", "id": 41740, "node_id": "MDQ6VXNlcjQxNzQw", "avatar_url": "https://avatars.githubusercontent.com/u/41740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brauliobo", "html_url": "https://github.com/brauliobo", "followers_url": "https://api.github.com/users/brauliobo/followers", "following_url": "https://api.github.com/users/brauliobo/following{/other_user}", "gists_url": "https://api.github.com/users/brauliobo/gists{/gist_id}", "starred_url": "https://api.github.com/users/brauliobo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brauliobo/subscriptions", "organizations_url": "https://api.github.com/users/brauliobo/orgs", "repos_url": "https://api.github.com/users/brauliobo/repos", "events_url": "https://api.github.com/users/brauliobo/events{/privacy}", "received_events_url": "https://api.github.com/users/brauliobo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7768/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3896
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3896/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3896/comments
https://api.github.com/repos/ollama/ollama/issues/3896/events
https://github.com/ollama/ollama/issues/3896
2,262,375,736
I_kwDOJ0Z1Ps6G2R04
3,896
Command-R fails when using format=json
{ "login": "derenrich", "id": 79513, "node_id": "MDQ6VXNlcjc5NTEz", "avatar_url": "https://avatars.githubusercontent.com/u/79513?v=4", "gravatar_id": "", "url": "https://api.github.com/users/derenrich", "html_url": "https://github.com/derenrich", "followers_url": "https://api.github.com/users/derenrich/followers", "following_url": "https://api.github.com/users/derenrich/following{/other_user}", "gists_url": "https://api.github.com/users/derenrich/gists{/gist_id}", "starred_url": "https://api.github.com/users/derenrich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/derenrich/subscriptions", "organizations_url": "https://api.github.com/users/derenrich/orgs", "repos_url": "https://api.github.com/users/derenrich/repos", "events_url": "https://api.github.com/users/derenrich/events{/privacy}", "received_events_url": "https://api.github.com/users/derenrich/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-04-25T00:00:58
2024-04-25T00:04:03
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? For some reason command-r is failing when put into JSON format mode. It seems to work fine otherwise. ``` % ollama run command-r --verbose "output the usa as a json" ```json { "country": "United States of America", "capital": "Washington, D.C.", "population": 333,271,411, "states": 50, "president": "Joe Biden" } ``` total duration: 6.241768s load duration: 2.127476417s prompt eval count: 12 token(s) prompt eval duration: 249.108ms prompt eval rate: 48.17 tokens/s eval count: 63 token(s) eval duration: 3.86453s eval rate: 16.30 tokens/s % ollama run command-r --format json --verbose "output the usa as a json" Error: an unknown error was encountered while running the model ``` llama3 though does work for json format output: ``` % ollama run llama3 --format json --verbose "output the florida as a json" { "name": "Florida", "capital": "Tallahassee", "population": 21477747, "area": 170312, "cities": ["Jacksonville", "Miami", "Tamiami Trail", "Orlando", "St Petersburg"], "fun_facts": [ "Florida is known as the Sunshine State because of its abundant sunshine and warm weather.", "The state has a diverse range of ecosystems, including beaches, mangroves, and forests." ] } ``` ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.1.32
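To rule out the CLI and confirm the crash happens in the server, the same request can be replayed against the HTTP API; a minimal repro sketch:

```console
# If this also returns a 500, the failure is in the runner's JSON
# grammar handling for command-r rather than in the REPL.
curl http://localhost:11434/api/generate -d '{
  "model": "command-r",
  "prompt": "output the usa as a json",
  "format": "json",
  "stream": false
}'
```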
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3896/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3120
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3120/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3120/comments
https://api.github.com/repos/ollama/ollama/issues/3120/events
https://github.com/ollama/ollama/issues/3120
2,184,601,768
I_kwDOJ0Z1Ps6CNmCo
3,120
Ollama cannot open models with unicode in the filepath
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
0
2024-03-13T18:04:35
2024-04-16T21:00:14
2024-04-16T21:00:14
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Tracking this issue here, split from #2753 ``` time=2024-02-26T00:11:49.314+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\ELJKO~1\\AppData\\Local\\Temp\\ollama816527122\\cpu_avx2\\ext_server.dll" time=2024-02-26T00:11:49.314+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server" llama_model_load: error loading model: failed to open C:\Users\Željko\.ollama\models\blobs\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246: No such file or directory llama_load_model_from_file: failed to load model llama_init_from_gpt_params: error: failed to load model 'C:\Users\Željko\.ollama\models\blobs\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246' {"timestamp":1708902709,"level":"ERROR","function":"load_model","line":388,"message":"unable to load model","model":"C:\\Users\\Željko\\.ollama\\models\\blobs\\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246"} time=2024-02-26T00:11:49.314+01:00 level=WARN source=llm.go:162 msg="Failed to load dynamic library C:\\Users\\ELJKO~1\\AppData\\Local\\Temp\\ollama816527122\\cpu_avx2\\ext_server.dll error loading model C:\\Users\\Željko\\.ollama\\models\\blobs\\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef92" [GIN] 2024/02/26 - 00:11:49 | 500 | 332.6058ms | 127.0.0.1 | POST "/api/chat" [GIN] 2024/02/26 - 00:15:10 | 200 | 509µs | 127.0.0.1 | GET "/api/version" ``` related llama.cpp fix: https://github.com/ggerganov/llama.cpp/pull/5927
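Until the llama.cpp fix is vendored, an end-user workaround is to relocate the model store to an ASCII-only path via `OLLAMA_MODELS`; a hedged sketch for Windows (the target directory is illustrative, and Ollama must be restarted for the variable to take effect):

```console
REM Move the store out of C:\Users\Željko\.ollama and point Ollama
REM at a path containing only ASCII characters.
setx OLLAMA_MODELS "C:\ollama\models"
```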
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3120/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8417
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8417/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8417/comments
https://api.github.com/repos/ollama/ollama/issues/8417/events
https://github.com/ollama/ollama/issues/8417
2,786,428,310
I_kwDOJ0Z1Ps6mFYWW
8,417
Model request for need:QVQ-72B-Preview and qwen2-vl!
{ "login": "twythebest", "id": 89891289, "node_id": "MDQ6VXNlcjg5ODkxMjg5", "avatar_url": "https://avatars.githubusercontent.com/u/89891289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/twythebest", "html_url": "https://github.com/twythebest", "followers_url": "https://api.github.com/users/twythebest/followers", "following_url": "https://api.github.com/users/twythebest/following{/other_user}", "gists_url": "https://api.github.com/users/twythebest/gists{/gist_id}", "starred_url": "https://api.github.com/users/twythebest/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/twythebest/subscriptions", "organizations_url": "https://api.github.com/users/twythebest/orgs", "repos_url": "https://api.github.com/users/twythebest/repos", "events_url": "https://api.github.com/users/twythebest/events{/privacy}", "received_events_url": "https://api.github.com/users/twythebest/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
4
2025-01-14T07:16:09
2025-01-15T22:15:12
2025-01-15T22:15:12
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please add the models QVQ-72B-Preview and qwen2-vl to Ollama!
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8417/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8417/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/514
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/514/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/514/comments
https://api.github.com/repos/ollama/ollama/issues/514/events
https://github.com/ollama/ollama/pull/514
1,892,272,867
PR_kwDOJ0Z1Ps5aHeZV
514
Allow customization of ollama models etc path
{ "login": "tastycode", "id": 809953, "node_id": "MDQ6VXNlcjgwOTk1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/809953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tastycode", "html_url": "https://github.com/tastycode", "followers_url": "https://api.github.com/users/tastycode/followers", "following_url": "https://api.github.com/users/tastycode/following{/other_user}", "gists_url": "https://api.github.com/users/tastycode/gists{/gist_id}", "starred_url": "https://api.github.com/users/tastycode/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tastycode/subscriptions", "organizations_url": "https://api.github.com/users/tastycode/orgs", "repos_url": "https://api.github.com/users/tastycode/repos", "events_url": "https://api.github.com/users/tastycode/events{/privacy}", "received_events_url": "https://api.github.com/users/tastycode/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-09-12T11:04:25
2023-10-25T22:35:12
2023-10-25T22:35:11
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/514", "html_url": "https://github.com/ollama/ollama/pull/514", "diff_url": "https://github.com/ollama/ollama/pull/514.diff", "patch_url": "https://github.com/ollama/ollama/pull/514.patch", "merged_at": null }
Responding to https://github.com/jmorganca/ollama/issues/513 It turns out it wasn't that hard to patch it to be customizable via envvar.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/514/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/514/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7925
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7925/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7925/comments
https://api.github.com/repos/ollama/ollama/issues/7925/events
https://github.com/ollama/ollama/issues/7925
2,716,244,159
I_kwDOJ0Z1Ps6h5pi_
7,925
Add code to enable Ollama CLI command logging, or disable the new 'if not tty, exit' code
{ "login": "fxmbsw7", "id": 39368685, "node_id": "MDQ6VXNlcjM5MzY4Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmbsw7", "html_url": "https://github.com/fxmbsw7", "followers_url": "https://api.github.com/users/fxmbsw7/followers", "following_url": "https://api.github.com/users/fxmbsw7/following{/other_user}", "gists_url": "https://api.github.com/users/fxmbsw7/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmbsw7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmbsw7/subscriptions", "organizations_url": "https://api.github.com/users/fxmbsw7/orgs", "repos_url": "https://api.github.com/users/fxmbsw7/repos", "events_url": "https://api.github.com/users/fxmbsw7/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmbsw7/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
4
2024-12-03T23:55:42
2024-12-10T21:08:32
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have a .bashrc setup to log ollama commands, and it recently stopped working completely. Neither `tee -a $somelog < <( ollama .. )` nor `ollama |& tee -a $log` nor `ollama > >( cat )` stays alive: they exit after the AI answers, or exit right after loading the model if no text is given as CLI arguments. Before, it wasn't perfect, but now... :)) My suggestion is to make the tty check optional somehow, or remove it. Greets. By the way, this is on an Android phone, compiled in Termux, but I suppose the tty check is a general one, e.g. the same on Debian and elsewhere. ### OS Linux ### GPU Other ### CPU Other ### Ollama version 0.0.0 ( 0.4.5 or so via git )
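One way to keep piping output to a log while still satisfying the tty check is to run the CLI under a pseudo-terminal; a hedged sketch using util-linux `script` (availability of `script -c` on Termux is an assumption, and the model and log names are illustrative):

```console
# script allocates a pty, so ollama still believes it is attached to
# a terminal while everything it prints is appended to the log.
script -q -a -c 'ollama run llama3 "hello"' "$HOME/ollama.log"
```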
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7925/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3010
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3010/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3010/comments
https://api.github.com/repos/ollama/ollama/issues/3010/events
https://github.com/ollama/ollama/issues/3010
2,176,617,278
I_kwDOJ0Z1Ps6BvIs-
3,010
"Error: invalid file magic" when creating Code Llama model
{ "login": "AI-Guru", "id": 32195399, "node_id": "MDQ6VXNlcjMyMTk1Mzk5", "avatar_url": "https://avatars.githubusercontent.com/u/32195399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AI-Guru", "html_url": "https://github.com/AI-Guru", "followers_url": "https://api.github.com/users/AI-Guru/followers", "following_url": "https://api.github.com/users/AI-Guru/following{/other_user}", "gists_url": "https://api.github.com/users/AI-Guru/gists{/gist_id}", "starred_url": "https://api.github.com/users/AI-Guru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AI-Guru/subscriptions", "organizations_url": "https://api.github.com/users/AI-Guru/orgs", "repos_url": "https://api.github.com/users/AI-Guru/repos", "events_url": "https://api.github.com/users/AI-Guru/events{/privacy}", "received_events_url": "https://api.github.com/users/AI-Guru/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-03-08T19:06:04
2024-03-09T13:35:19
2024-03-09T13:35:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello! First and foremost, THANKS A LOT for Ollama! Your software is most useful! I am trying to import a finetune of Code Llama 7B into Ollama. I get this error: ``` $ollama create musicllm -f Modelfile transferring model data creating model layer Error: invalid file magic ``` Here is the model: https://huggingface.co/TristanBehrens/musicllm/tree/main Here is the Modelfile: ``` FROM model.q5_k_m.gguf TEMPLATE "[INST] {{ .Prompt }} [/INST]" ``` Any ideas where I could look? All the best, Tristan
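"invalid file magic" usually means the file is not actually a GGUF; the most common cause is downloading the Git LFS pointer file from Hugging Face instead of the real weights. A quick hedged check:

```console
# A valid GGUF file begins with the 4-byte magic "GGUF"; an LFS
# pointer instead begins with a short text header and is ~130 bytes.
head -c 4 model.q5_k_m.gguf && echo
ls -lh model.q5_k_m.gguf
```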
{ "login": "AI-Guru", "id": 32195399, "node_id": "MDQ6VXNlcjMyMTk1Mzk5", "avatar_url": "https://avatars.githubusercontent.com/u/32195399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AI-Guru", "html_url": "https://github.com/AI-Guru", "followers_url": "https://api.github.com/users/AI-Guru/followers", "following_url": "https://api.github.com/users/AI-Guru/following{/other_user}", "gists_url": "https://api.github.com/users/AI-Guru/gists{/gist_id}", "starred_url": "https://api.github.com/users/AI-Guru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AI-Guru/subscriptions", "organizations_url": "https://api.github.com/users/AI-Guru/orgs", "repos_url": "https://api.github.com/users/AI-Guru/repos", "events_url": "https://api.github.com/users/AI-Guru/events{/privacy}", "received_events_url": "https://api.github.com/users/AI-Guru/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3010/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7847
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7847/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7847/comments
https://api.github.com/repos/ollama/ollama/issues/7847/events
https://github.com/ollama/ollama/issues/7847
2,695,991,729
I_kwDOJ0Z1Ps6gsZGx
7,847
Support for Nvidia Hymba
{ "login": "WikiLucas00", "id": 63519673, "node_id": "MDQ6VXNlcjYzNTE5Njcz", "avatar_url": "https://avatars.githubusercontent.com/u/63519673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WikiLucas00", "html_url": "https://github.com/WikiLucas00", "followers_url": "https://api.github.com/users/WikiLucas00/followers", "following_url": "https://api.github.com/users/WikiLucas00/following{/other_user}", "gists_url": "https://api.github.com/users/WikiLucas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/WikiLucas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WikiLucas00/subscriptions", "organizations_url": "https://api.github.com/users/WikiLucas00/orgs", "repos_url": "https://api.github.com/users/WikiLucas00/repos", "events_url": "https://api.github.com/users/WikiLucas00/events{/privacy}", "received_events_url": "https://api.github.com/users/WikiLucas00/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-11-26T20:32:19
2024-11-26T20:32:19
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be great to support Hymba in Ollama! https://developer.nvidia.com/blog/hymba-hybrid-head-architecture-boosts-small-language-model-performance/
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7847/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7847/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5113
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5113/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5113/comments
https://api.github.com/repos/ollama/ollama/issues/5113/events
https://github.com/ollama/ollama/issues/5113
2,359,631,059
I_kwDOJ0Z1Ps6MpRzT
5,113
DeepSeek-Coder-V2-Lite-Instruct out of memory
{ "login": "tincore", "id": 20477204, "node_id": "MDQ6VXNlcjIwNDc3MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/20477204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tincore", "html_url": "https://github.com/tincore", "followers_url": "https://api.github.com/users/tincore/followers", "following_url": "https://api.github.com/users/tincore/following{/other_user}", "gists_url": "https://api.github.com/users/tincore/gists{/gist_id}", "starred_url": "https://api.github.com/users/tincore/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tincore/subscriptions", "organizations_url": "https://api.github.com/users/tincore/orgs", "repos_url": "https://api.github.com/users/tincore/repos", "events_url": "https://api.github.com/users/tincore/events{/privacy}", "received_events_url": "https://api.github.com/users/tincore/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6849881759, "node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw", "url": "https://api.github.com/repos/ollama/ollama/labels/memory", "name": "memory", "color": "5017EA", "default": false, "description": "" } ]
closed
false
null
[]
null
0
2024-06-18T11:30:46
2024-06-18T23:30:59
2024-06-18T23:30:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, Thanks for the great project. I get a crash (OOM) when trying to load new deepseek-coder-v2. Other models work fine. I've just upgraded to latest pre-release just in case but same behavior. ``` jun 18 13:23:17 ollama[26949]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="41023" tid="137762752704512" timestamp=1718709797 jun 18 13:23:17 ollama[22256]: llama_model_loader: loaded meta data with 38 key-value pairs and 377 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-5ff0abeeac1d2dbdd5455c0b49ba3b29a9ce3c1fb181b2eef2e948689d55d046 (version GGUF V3 (latest)) jun 18 13:23:17 ollama[22256]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 0: general.architecture str = deepseek2 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 1: general.name str = DeepSeek-Coder-V2-Lite-Instruct jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 2: deepseek2.block_count u32 = 27 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 3: deepseek2.context_length u32 = 163840 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 4: deepseek2.embedding_length u32 = 2048 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 5: deepseek2.feed_forward_length u32 = 10944 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 6: deepseek2.attention.head_count u32 = 16 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 7: deepseek2.attention.head_count_kv u32 = 16 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 8: deepseek2.rope.freq_base f32 = 10000.000000 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 9: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 10: deepseek2.expert_used_count u32 = 6 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 11: general.file_type u32 = 2 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 12: deepseek2.leading_dense_block_count u32 = 1 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 13: deepseek2.vocab_size u32 = 102400 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 14: deepseek2.attention.kv_lora_rank u32 = 512 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 15: deepseek2.attention.key_length u32 = 192 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 16: deepseek2.attention.value_length u32 = 128 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 17: deepseek2.expert_feed_forward_length u32 = 1408 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 18: deepseek2.expert_count u32 = 64 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 19: deepseek2.expert_shared_count u32 = 2 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 20: deepseek2.expert_weights_scale f32 = 1.000000 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 21: deepseek2.rope.dimension_count u32 = 64 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 22: deepseek2.rope.scaling.type str = yarn jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 23: deepseek2.rope.scaling.factor f32 = 40.000000 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 24: deepseek2.rope.scaling.original_context_length u32 = 4096 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 25: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.070700 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 26: tokenizer.ggml.model str = gpt2 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 27: tokenizer.ggml.pre str = deepseek-llm jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 28: tokenizer.ggml.tokens arr[str,102400] = ["!", "\"", "#", "$", "%", "&", "'", ... jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 29: tokenizer.ggml.token_type arr[i32,102400] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 30: tokenizer.ggml.merges arr[str,99757] = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e... jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 31: tokenizer.ggml.bos_token_id u32 = 100000 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 32: tokenizer.ggml.eos_token_id u32 = 100001 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 33: tokenizer.ggml.padding_token_id u32 = 100001 jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 34: tokenizer.ggml.add_bos_token bool = true jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 35: tokenizer.ggml.add_eos_token bool = false jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 36: tokenizer.chat_template str = {% if not add_generation_prompt is de... jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv 37: general.quantization_version u32 = 2 jun 18 13:23:17 ollama[22256]: llama_model_loader: - type f32: 108 tensors jun 18 13:23:17 ollama[22256]: llama_model_loader: - type q4_0: 268 tensors jun 18 13:23:17 ollama[22256]: llama_model_loader: - type q6_K: 1 tensors jun 18 13:23:17 ollama[22256]: llm_load_vocab: special tokens cache size = 2400 jun 18 13:23:17 ollama[22256]: time=2024-06-18T13:23:17.498+02:00 level=INFO source=server.go:582 msg="waiting for server to become available" status="llm server loading model" jun 18 13:23:17 ollama[22256]: llm_load_vocab: token to piece cache size = 0.6661 MB jun 18 13:23:17 ollama[22256]: llm_load_print_meta: format = GGUF V3 (latest) jun 18 13:23:17 ollama[22256]: llm_load_print_meta: arch = deepseek2 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: vocab type = BPE jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_vocab = 102400 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_merges = 99757 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_ctx_train = 163840 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_embd = 2048 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_head = 16 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_head_kv = 16 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_layer = 27 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_rot = 64 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_embd_head_k = 192 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_embd_head_v = 128 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_gqa = 1 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_embd_k_gqa = 3072 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_embd_v_gqa = 2048 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: f_norm_eps = 0.0e+00 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: f_logit_scale = 0.0e+00 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_ff = 10944 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_expert = 64 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_expert_used = 6 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: causal attn = 1 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: pooling type = 0 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: rope type = 0 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: rope scaling = yarn jun 18 13:23:17 ollama[22256]: llm_load_print_meta: freq_base_train = 10000.0 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: freq_scale_train = 0.025 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_ctx_orig_yarn = 4096 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: rope_finetuned = unknown jun 18 13:23:17 ollama[22256]: llm_load_print_meta: ssm_d_conv = 0 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: ssm_d_inner = 0 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: ssm_d_state = 0 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: ssm_dt_rank = 0 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: model type = 16B jun 18 13:23:17 ollama[22256]: llm_load_print_meta: model ftype = Q4_0 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: model params = 15.71 B jun 18 13:23:17 ollama[22256]: llm_load_print_meta: model size = 8.29 GiB (4.53 BPW) jun 18 13:23:17 ollama[22256]: llm_load_print_meta: general.name = DeepSeek-Coder-V2-Lite-Instruct jun 18 13:23:17 ollama[22256]: llm_load_print_meta: BOS token = 100000 '<|begin▁of▁sentence|>' jun 18 13:23:17 ollama[22256]: llm_load_print_meta: EOS token = 100001 '<|end▁of▁sentence|>' jun 18 13:23:17 ollama[22256]: llm_load_print_meta: PAD token = 100001 '<|end▁of▁sentence|>' jun 18 13:23:17 ollama[22256]: llm_load_print_meta: LF token = 126 'Ä' jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_layer_dense_lead = 1 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_lora_q = 0 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_lora_kv = 512 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_ff_exp = 1408 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_expert_shared = 2 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: expert_weights_scale = 1.0 jun 18 13:23:17 ollama[22256]: llm_load_print_meta: rope_yarn_log_mul = 0.0707 jun 18 13:23:17 ollama[22256]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes jun 18 13:23:17 ollama[22256]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no jun 18 13:23:17 ollama[22256]: ggml_cuda_init: found 1 CUDA devices: jun 18 13:23:17 ollama[22256]: Device 0: NVIDIA GeForce RTX 3070 Laptop GPU, compute capability 8.6, VMM: yes jun 18 13:23:17 ollama[22256]: llm_load_tensors: ggml ctx size = 0.35 MiB jun 18 13:23:18 ollama[22256]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8376.27 MiB on device 0: cudaMalloc failed: out of memory jun 18 13:23:18 ollama[22256]: llama_model_load: error loading model: unable to allocate backend buffer jun 18 13:23:18 ollama[22256]: llama_load_model_from_file: exception loading model jun 18 13:23:18 ollama[22256]: terminate called after throwing an instance of 'std::runtime_error' jun 18 13:23:18 ollama[22256]: what(): unable to allocate backend buffer jun 18 13:23:18 ollama[22256]: time=2024-06-18T13:23:18.757+02:00 level=INFO source=server.go:582 msg="waiting for server to become available" status="llm server error" jun 18 13:23:19 ollama[22256]: time=2024-06-18T13:23:19.008+02:00 level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) cudaMalloc failed: out of memory" jun 18 13:23:19 ollama[22256]: [GIN] 2024/06/18 - 13:23:19 | 500 | 3.168403177s | 127.0.0.1 | POST "/api/chat" jun 18 13:23:24 ollama[22256]: time=2024-06-18T13:23:24.231+02:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't
recover within timeout" seconds=5.223479956 model=/usr/share/ollama/.ollama/models/blobs/sha256-5ff0abeeac1d2dbdd5455c0b49ba3b29a9ce3c1fb181b2eef2e948689d55d046 ``` ``` +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 555.52.04 Driver Version: 555.52.04 CUDA Version: 12.5 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 3070 ... Off | 00000000:01:00.0 On | N/A | | N/A 60C P0 35W / 115W | 168MiB / 8192MiB | 32% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ ``` ### OS Linux ### GPU Nvidia ### CPU AMD ### Ollama version 0.1.45-rc2, 0.1.44
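For readers hitting the same wall: the failed allocation (8376.27 MiB of weights on an 8 GiB card) can often be worked around by offloading only part of the model's 27 layers to the GPU. A minimal sketch, assuming the standard `/api/generate` endpoint; the `num_gpu` value of 20 is a placeholder to tune, not a recommendation from this thread:

```python
import requests

# Offload only some of the model's 27 layers to the GPU and keep the
# rest in system RAM; num_gpu=0 would force pure CPU inference.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-coder-v2",
        "prompt": "Write a hello world in Go.",
        "stream": False,
        "options": {"num_gpu": 20},  # placeholder: tune for your VRAM
    },
    timeout=600,
)
print(resp.json()["response"])
```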
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5113/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1891
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1891/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1891/comments
https://api.github.com/repos/ollama/ollama/issues/1891/events
https://github.com/ollama/ollama/issues/1891
2,074,049,753
I_kwDOJ0Z1Ps57n3zZ
1,891
Add ability to hide/disable/enable models
{ "login": "oliverbob", "id": 23272429, "node_id": "MDQ6VXNlcjIzMjcyNDI5", "avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oliverbob", "html_url": "https://github.com/oliverbob", "followers_url": "https://api.github.com/users/oliverbob/followers", "following_url": "https://api.github.com/users/oliverbob/following{/other_user}", "gists_url": "https://api.github.com/users/oliverbob/gists{/gist_id}", "starred_url": "https://api.github.com/users/oliverbob/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oliverbob/subscriptions", "organizations_url": "https://api.github.com/users/oliverbob/orgs", "repos_url": "https://api.github.com/users/oliverbob/repos", "events_url": "https://api.github.com/users/oliverbob/events{/privacy}", "received_events_url": "https://api.github.com/users/oliverbob/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-01-10T10:22:17
2024-03-11T20:42:58
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
If we could have this feature, I'm sure it would help cut down the clutter. Or perhaps, is it possible to provide a way to categorize models? Practical application: downloading large models from the ollama site consumes bandwidth, so you don't always want to delete a model, just hide it from your organization or users. Also, what is the best way to migrate the local ollama models directory without redownloading from the official site? Or, using the terminal, how do we upload a model into this directory? I wish we had `ollama migrate /path/to-models/` with the ability to sync without duplicating models. Thanks.
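On the migration question in this request: Ollama's blob store is content-addressed (file names are sha256 digests), so a non-duplicating sync can simply skip files that already exist at the destination. A minimal sketch of the wished-for `ollama migrate`, assuming the default Linux layout under `~/.ollama/models`; the function and paths are illustrative, not a real Ollama command:

```python
import shutil
from pathlib import Path

def migrate_models(src: Path, dst: Path) -> None:
    """Copy manifests and blobs from src to dst, skipping duplicates.

    Blobs are named by their sha256 digest, so an existing file with
    the same relative path is assumed to be identical content.
    """
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if target.exists():
            continue  # already present: don't re-copy (or re-download)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)

migrate_models(Path("/path/to-models"), Path.home() / ".ollama" / "models")
```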
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1891/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1440
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1440/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1440/comments
https://api.github.com/repos/ollama/ollama/issues/1440/events
https://github.com/ollama/ollama/pull/1440
2,033,298,061
PR_kwDOJ0Z1Ps5hjuew
1,440
🛠️ Add service activation prompt
{ "login": "Samk13", "id": 36583694, "node_id": "MDQ6VXNlcjM2NTgzNjk0", "avatar_url": "https://avatars.githubusercontent.com/u/36583694?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Samk13", "html_url": "https://github.com/Samk13", "followers_url": "https://api.github.com/users/Samk13/followers", "following_url": "https://api.github.com/users/Samk13/following{/other_user}", "gists_url": "https://api.github.com/users/Samk13/gists{/gist_id}", "starred_url": "https://api.github.com/users/Samk13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Samk13/subscriptions", "organizations_url": "https://api.github.com/users/Samk13/orgs", "repos_url": "https://api.github.com/users/Samk13/repos", "events_url": "https://api.github.com/users/Samk13/events{/privacy}", "received_events_url": "https://api.github.com/users/Samk13/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-12-08T20:51:25
2024-06-10T08:45:02
2024-06-09T18:07:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1440", "html_url": "https://github.com/ollama/ollama/pull/1440", "diff_url": "https://github.com/ollama/ollama/pull/1440.diff", "patch_url": "https://github.com/ollama/ollama/pull/1440.patch", "merged_at": null }
Closes #1352

### Key Changes:
- Added `ask_to_activate_service` function to prompt users for service activation post-installation.
- Integrated the prompt into the script's flow, allowing conditional execution of the systemd service configuration.

### Impact:
- Improves user experience by providing a choice to activate the service.
- Ensures a clearer installation process, aligning with user expectations.
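The PR itself edits the shell install script; purely as an illustration of the flow described above (not the PR's actual code), the prompt logic looks roughly like this in Python:

```python
def ask_to_activate_service() -> bool:
    """Ask once whether to activate the service; an empty answer means yes."""
    answer = input("Enable and start the ollama service now? [Y/n] ")
    return answer.strip().lower() in ("", "y", "yes")

if ask_to_activate_service():
    print("configuring systemd service...")  # stands in for the script's systemctl steps
else:
    print("Skipped. Run 'sudo systemctl enable --now ollama' to activate later.")
```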
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1440/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2351
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2351/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2351/comments
https://api.github.com/repos/ollama/ollama/issues/2351/events
https://github.com/ollama/ollama/issues/2351
2,117,240,014
I_kwDOJ0Z1Ps5-MoTO
2,351
JSON mode outputs a stream of newline characters
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-02-04T18:08:43
2024-03-12T01:31:29
2024-03-12T01:31:28
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2351/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2351/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/136
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/136/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/136/comments
https://api.github.com/repos/ollama/ollama/issues/136/events
https://github.com/ollama/ollama/pull/136
1,814,139,199
PR_kwDOJ0Z1Ps5WAoyG
136
Delete models.json
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-07-20T14:33:15
2023-07-24T19:30:55
2023-07-20T14:40:46
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/136", "html_url": "https://github.com/ollama/ollama/pull/136", "diff_url": "https://github.com/ollama/ollama/pull/136.diff", "patch_url": "https://github.com/ollama/ollama/pull/136.patch", "merged_at": "2023-07-20T14:40:46" }
This is no longer used.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/136/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/110
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/110/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/110/comments
https://api.github.com/repos/ollama/ollama/issues/110/events
https://github.com/ollama/ollama/pull/110
1,810,993,466
PR_kwDOJ0Z1Ps5V12ts
110
fix pull 0 bytes on completed layer
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-19T01:53:18
2023-07-19T02:39:02
2023-07-19T02:38:59
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/110", "html_url": "https://github.com/ollama/ollama/pull/110", "diff_url": "https://github.com/ollama/ollama/pull/110.diff", "patch_url": "https://github.com/ollama/ollama/pull/110.patch", "merged_at": "2023-07-19T02:38:59" }
This PR fixes the bug where the progress bar displays 0 B for a layer when the layer already exists:

```
$ ollama pull llama2
pulling manifest
pulling 8daa9615cce30c25...   0% |                                  | ( 0 B/3.5 GB) [0s:0s]
pulling c929c04af928be41...   0% |                                  | ( 0 B/3.5 GB) [0s:0s]
pulling cf39c1a5c36937e4... 100% |██████████████████████████████████| (3.5/3.5 GB, 53 TB/s)
writing manifest
success
```

```
$ ollama pull llama2
pulling manifest
pulling 8daa9615cce30c25... 100% |██████████████████████████████████| (3.5/3.5 GB, 16 TB/s)
pulling c929c04af928be41... 100% |██████████████████████████████████| (547/547 B, 12 MB/s)
pulling cf39c1a5c36937e4... 100% |██████████████████████████████████| (225/225 B, 5.0 MB/s)
writing manifest
success
```

Now each layer also correctly reports its own size rather than the total bundle size.

This also refactors `Pull/PushProgress` into a single `ProgressResponse`, since they share the exact same attributes, and removes `Percent`, since it isn't used and the caller can easily compute it.
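With `Percent` removed, a client derives it from the per-layer `completed` and `total` fields of the streamed progress objects. A hypothetical consumer of `/api/pull`, assuming the documented streaming response shape:

```python
import json
import requests

with requests.post(
    "http://localhost:11434/api/pull",
    json={"name": "llama2"},
    stream=True,
) as resp:
    for line in resp.iter_lines():
        progress = json.loads(line)
        total = progress.get("total")
        completed = progress.get("completed", 0)
        if total:  # percent is computed client-side now
            print(f"{progress['status']}: {100 * completed / total:.1f}%")
        else:
            print(progress["status"])
```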
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/110/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2444
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2444/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2444/comments
https://api.github.com/repos/ollama/ollama/issues/2444/events
https://github.com/ollama/ollama/issues/2444
2,128,816,951
I_kwDOJ0Z1Ps5-4ys3
2,444
Ollama docker container crash full WSL2 Ubuntu
{ "login": "wizd", "id": 2835415, "node_id": "MDQ6VXNlcjI4MzU0MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2835415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wizd", "html_url": "https://github.com/wizd", "followers_url": "https://api.github.com/users/wizd/followers", "following_url": "https://api.github.com/users/wizd/following{/other_user}", "gists_url": "https://api.github.com/users/wizd/gists{/gist_id}", "starred_url": "https://api.github.com/users/wizd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wizd/subscriptions", "organizations_url": "https://api.github.com/users/wizd/orgs", "repos_url": "https://api.github.com/users/wizd/repos", "events_url": "https://api.github.com/users/wizd/events{/privacy}", "received_events_url": "https://api.github.com/users/wizd/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-02-11T03:41:22
2024-03-27T20:58:36
2024-03-27T20:58:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Docker container setup as below:

```
version: "3.7"

services:
  ollama:
    container_name: ollama
    image: ollama/ollama:latest
    ports:
      - "5310:11434"
    volumes:
      - ./ollama:/root/.ollama
    restart: unless-stopped
    environment:
      - CUDA_VISIBLE_DEVICES=0,1
      - OLLAMA_ORIGINS=*
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  ollama-webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    container_name: ollama-webui
    ports:
      - "7000:8080"
    volumes:
      - ./ollama-webui1:/app/backend/data
    environment:
      - 'OLLAMA_API_BASE_URL=http://ollama:11434/api'
    restart: unless-stopped
```

Modelfile as below:

```
FROM miqu-1-70b.q2_k.gguf
PARAMETER num_ctx 24000
PARAMETER num_gpu 81
```

Logs:

```
...
....................................................................................................
llama_new_context_with_model: n_ctx      = 26000
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =  4164.06 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =  3960.94 MiB
llama_new_context_with_model: KV self size  = 8125.00 MiB, K (f16): 4062.50 MiB, V (f16): 4062.50 MiB
llama_new_context_with_model:  CUDA_Host input buffer size   =    66.88 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  3683.66 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =  3683.66 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    17.60 MiB
llama_new_context_with_model: graph splits (measure): 5
time=2024-02-10T01:16:44.820Z level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[GIN] 2024/02/10 - 01:16:52 | 200 | 46.01445749s | 172.22.0.1 | POST "/api/generate"
[GIN] 2024/02/10 - 01:17:02 | 200 | 9.530461175s | 172.22.0.1 | POST "/api/generate"
[GIN] 2024/02/10 - 01:17:11 | 200 | 9.062014736s | 172.22.0.1 | POST "/api/generate"
error from daemon in stream: Error grabbing logs: invalid character '\x00' looking for beginning of value
(base) super@dev-local-ai:~/data/ollama$ docker logs -f ollama
[Process exited with code 1 (0x00000001)]
You can now close this terminal with Ctrl+D, or press Enter to restart it.
An existing connection was forcibly closed by the remote host.
Error code: Wsl/Service/0x80072746
Press any key to continue...
```
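The `KV self size = 8125.00 MiB` line in this log is exactly what the context length predicts: an f16 KV cache costs n_ctx × n_layer × (n_embd_k_gqa + n_embd_v_gqa) × 2 bytes. A worked check, assuming standard Llama-2-70B geometry (80 layers, KV width 1024 each for K and V); those architecture numbers are assumptions, not in the log:

```python
# Reproduce the "KV self size = 8125.00 MiB" figure from the log.
n_ctx = 26000            # effective context reported by the server
n_layer = 80             # assumed Llama-2-70B layer count
kv_width = 1024 + 1024   # assumed n_embd_k_gqa + n_embd_v_gqa
bytes_per_elem = 2       # f16 cache

kv_bytes = n_ctx * n_layer * kv_width * bytes_per_elem
print(f"{kv_bytes / 2**20:.2f} MiB")  # -> 8125.00 MiB
```

So raising `num_ctx` grows the cache linearly, which, together with the two ~3.7 GiB compute buffers, is where the memory pressure in this setup comes from.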
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2444/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4293
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4293/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4293/comments
https://api.github.com/repos/ollama/ollama/issues/4293/events
https://github.com/ollama/ollama/issues/4293
2,288,113,574
I_kwDOJ0Z1Ps6IYdem
4,293
longtext llama3-gradient bug
{ "login": "bambooqj", "id": 20792621, "node_id": "MDQ6VXNlcjIwNzkyNjIx", "avatar_url": "https://avatars.githubusercontent.com/u/20792621?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bambooqj", "html_url": "https://github.com/bambooqj", "followers_url": "https://api.github.com/users/bambooqj/followers", "following_url": "https://api.github.com/users/bambooqj/following{/other_user}", "gists_url": "https://api.github.com/users/bambooqj/gists{/gist_id}", "starred_url": "https://api.github.com/users/bambooqj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bambooqj/subscriptions", "organizations_url": "https://api.github.com/users/bambooqj/orgs", "repos_url": "https://api.github.com/users/bambooqj/repos", "events_url": "https://api.github.com/users/bambooqj/events{/privacy}", "received_events_url": "https://api.github.com/users/bambooqj/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-05-09T17:12:59
2024-05-09T17:16:02
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

When I use Ollama for long-text processing, the `system` instructions stop being effective; instead it produces random output. The model is `llama3-gradient`.

```
import requests
import ollama

systemmsg = """
Please analyze the type of website based on the 'body' content I provide, and return to me in JSON format with the structure {type:..., why:...}.
"""

def get_webpage_content(url):
    try:
        # fetch the page before checking the status code
        response = requests.get(url)
        if response.status_code == 200:
            return response.text
        else:
            print(f'Failed to retrieve the webpage. Status code: {response.status_code}')
            return None
    except requests.exceptions.RequestException as e:
        print(f'An error occurred: {e}')
        return None

body = get_webpage_content('http://www.sohu.com')
response = ollama.generate(model='llama3-gradient', prompt=body, format='json',
                           options={"seed": 123, "num_ctx": 32000}, system=systemmsg)
print(response['response'])
```

### OS
Windows

### GPU
Nvidia

### CPU
Intel

### Ollama version
0.1.34
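One thing worth ruling out before blaming the model: if the fetched page plus the system message exceeds `num_ctx`, truncation can silently drop the instructions. A crude guard, assuming roughly four characters per token (a real tokenizer gives exact counts):

```python
def cap_to_budget(text: str, num_ctx: int = 32000, reserve: int = 2048) -> str:
    """Trim text so prompt + system message + output roughly fit num_ctx.

    The 4-chars-per-token ratio is only a rough estimate.
    """
    return text[: (num_ctx - reserve) * 4]
```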
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4293/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/402
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/402/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/402/comments
https://api.github.com/repos/ollama/ollama/issues/402/events
https://github.com/ollama/ollama/issues/402
1,862,851,976
I_kwDOJ0Z1Ps5vCN2I
402
Uncensored models can't be customised
{ "login": "velkir", "id": 52069224, "node_id": "MDQ6VXNlcjUyMDY5MjI0", "avatar_url": "https://avatars.githubusercontent.com/u/52069224?v=4", "gravatar_id": "", "url": "https://api.github.com/users/velkir", "html_url": "https://github.com/velkir", "followers_url": "https://api.github.com/users/velkir/followers", "following_url": "https://api.github.com/users/velkir/following{/other_user}", "gists_url": "https://api.github.com/users/velkir/gists{/gist_id}", "starred_url": "https://api.github.com/users/velkir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/velkir/subscriptions", "organizations_url": "https://api.github.com/users/velkir/orgs", "repos_url": "https://api.github.com/users/velkir/repos", "events_url": "https://api.github.com/users/velkir/events{/privacy}", "received_events_url": "https://api.github.com/users/velkir/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2023-08-23T08:39:44
2023-09-01T16:04:48
2023-09-01T16:04:47
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi! Thanks for the cool tool :)

Tried to customize:
- llama2: customisable
- llama2-uncensored: no result
- nous-hermes: customisable
- wizard-vicuna-uncensored: no result
- wizardlm-uncensored: no result

The system msg used:

```
FROM wizardlm-uncensored
SYSTEM """
You are Geralt of Rivia from The Witcher. Act like Geralt, be him.
"""
```

Is it intentional that uncensored models can't be customized, or is this a bug?
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.github.com/users/technovangelist/followers", "following_url": "https://api.github.com/users/technovangelist/following{/other_user}", "gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}", "starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions", "organizations_url": "https://api.github.com/users/technovangelist/orgs", "repos_url": "https://api.github.com/users/technovangelist/repos", "events_url": "https://api.github.com/users/technovangelist/events{/privacy}", "received_events_url": "https://api.github.com/users/technovangelist/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/402/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2694
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2694/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2694/comments
https://api.github.com/repos/ollama/ollama/issues/2694/events
https://github.com/ollama/ollama/issues/2694
2,149,878,121
I_kwDOJ0Z1Ps6AJIlp
2,694
Add another binary that the linux install script could use on ROCm accelerated systems.
{ "login": "TimTheBig", "id": 132001783, "node_id": "U_kgDOB94v9w", "avatar_url": "https://avatars.githubusercontent.com/u/132001783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TimTheBig", "html_url": "https://github.com/TimTheBig", "followers_url": "https://api.github.com/users/TimTheBig/followers", "following_url": "https://api.github.com/users/TimTheBig/following{/other_user}", "gists_url": "https://api.github.com/users/TimTheBig/gists{/gist_id}", "starred_url": "https://api.github.com/users/TimTheBig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TimTheBig/subscriptions", "organizations_url": "https://api.github.com/users/TimTheBig/orgs", "repos_url": "https://api.github.com/users/TimTheBig/repos", "events_url": "https://api.github.com/users/TimTheBig/events{/privacy}", "received_events_url": "https://api.github.com/users/TimTheBig/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
3
2024-02-22T20:24:42
2024-03-12T00:08:26
2024-03-12T00:08:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Another binary that the install script could use on `ROCm`-accelerated systems would be useful. Releases are not compiled with `HIP`, so *non-NVIDIA* GPU acceleration support is not present. https://github.com/ollama/ollama/issues/2685#issuecomment-1959937668
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2694/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1954
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1954/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1954/comments
https://api.github.com/repos/ollama/ollama/issues/1954/events
https://github.com/ollama/ollama/issues/1954
2,079,092,214
I_kwDOJ0Z1Ps577G32
1,954
Support GPU A500
{ "login": "aemonge", "id": 1322348, "node_id": "MDQ6VXNlcjEzMjIzNDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1322348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aemonge", "html_url": "https://github.com/aemonge", "followers_url": "https://api.github.com/users/aemonge/followers", "following_url": "https://api.github.com/users/aemonge/following{/other_user}", "gists_url": "https://api.github.com/users/aemonge/gists{/gist_id}", "starred_url": "https://api.github.com/users/aemonge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aemonge/subscriptions", "organizations_url": "https://api.github.com/users/aemonge/orgs", "repos_url": "https://api.github.com/users/aemonge/repos", "events_url": "https://api.github.com/users/aemonge/events{/privacy}", "received_events_url": "https://api.github.com/users/aemonge/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
4
2024-01-12T15:22:53
2024-01-15T08:05:14
2024-01-15T08:05:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Can't get the model to run on the GPU:

```
Fri Jan 12 16:22:20 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.06              Driver Version: 545.29.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A500 Laptop GPU    Off  | 00000000:03:00.0 Off |                  N/A |
| N/A   53C    P8               4W /  20W |      7MiB /  4096MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                             GPU Memory |
|        ID   ID                                                              Usage      |
|=======================================================================================|
|    0   N/A  N/A      1404      G   /usr/lib/Xorg                                  4MiB |
+---------------------------------------------------------------------------------------+
```

I'm on Arch and installed via `pacman -S ollama`.
{ "login": "aemonge", "id": 1322348, "node_id": "MDQ6VXNlcjEzMjIzNDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1322348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aemonge", "html_url": "https://github.com/aemonge", "followers_url": "https://api.github.com/users/aemonge/followers", "following_url": "https://api.github.com/users/aemonge/following{/other_user}", "gists_url": "https://api.github.com/users/aemonge/gists{/gist_id}", "starred_url": "https://api.github.com/users/aemonge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aemonge/subscriptions", "organizations_url": "https://api.github.com/users/aemonge/orgs", "repos_url": "https://api.github.com/users/aemonge/repos", "events_url": "https://api.github.com/users/aemonge/events{/privacy}", "received_events_url": "https://api.github.com/users/aemonge/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1954/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6083
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6083/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6083/comments
https://api.github.com/repos/ollama/ollama/issues/6083/events
https://github.com/ollama/ollama/pull/6083
2,438,936,086
PR_kwDOJ0Z1Ps5273xv
6,083
Update README to include Firebase Genkit
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-07-31T01:38:31
2024-07-31T01:40:11
2024-07-31T01:40:09
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6083", "html_url": "https://github.com/ollama/ollama/pull/6083", "diff_url": "https://github.com/ollama/ollama/pull/6083.diff", "patch_url": "https://github.com/ollama/ollama/pull/6083.patch", "merged_at": "2024-07-31T01:40:09" }
Firebase Genkit
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6083/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5040
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5040/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5040/comments
https://api.github.com/repos/ollama/ollama/issues/5040/events
https://github.com/ollama/ollama/pull/5040
2,352,462,965
PR_kwDOJ0Z1Ps5ybx0V
5,040
chore: add openapi 3.1 spec for public api
{ "login": "JerrettDavis", "id": 2610199, "node_id": "MDQ6VXNlcjI2MTAxOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/2610199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JerrettDavis", "html_url": "https://github.com/JerrettDavis", "followers_url": "https://api.github.com/users/JerrettDavis/followers", "following_url": "https://api.github.com/users/JerrettDavis/following{/other_user}", "gists_url": "https://api.github.com/users/JerrettDavis/gists{/gist_id}", "starred_url": "https://api.github.com/users/JerrettDavis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JerrettDavis/subscriptions", "organizations_url": "https://api.github.com/users/JerrettDavis/orgs", "repos_url": "https://api.github.com/users/JerrettDavis/repos", "events_url": "https://api.github.com/users/JerrettDavis/events{/privacy}", "received_events_url": "https://api.github.com/users/JerrettDavis/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2024-06-14T04:26:50
2025-01-28T07:13:47
2024-11-22T01:36:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5040", "html_url": "https://github.com/ollama/ollama/pull/5040", "diff_url": "https://github.com/ollama/ollama/pull/5040.diff", "patch_url": "https://github.com/ollama/ollama/pull/5040.patch", "merged_at": null }
Addresses issue #3383. Targets OpenAPI 3.1.0, as that's the most recent version and appears to be the only one that supports DELETE with a request body. Also added [spectral](https://github.com/stoplightio/spectral-action) to the GitHub test pipeline to lint the spec and ensure it's valid. Swagger using this spec can be seen in the browser here: https://validator.swagger.io/?url=https://raw.githubusercontent.com/ollama/ollama/ef7c6cb43aa8cc8c38a4e51d4d7e78b66e08a5c1/specs/openapi-3.1.yaml#/models/getModels Redoc visualization here: https://redocly.github.io/redoc/?url=https://raw.githubusercontent.com/ollama/ollama/ef7c6cb43aa8cc8c38a4e51d4d7e78b66e08a5c1/specs/openapi-3.1.yaml#tag/generate
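For context on why the 3.1 requirement matters: `/api/delete` takes its payload in the body of a DELETE request, which earlier OpenAPI versions cannot describe. A minimal client call, assuming the request shape documented at the time:

```python
import requests

# /api/delete expects a JSON body on a DELETE request -- the pattern
# that forced this spec to target OpenAPI 3.1.
resp = requests.delete(
    "http://localhost:11434/api/delete",
    json={"name": "llama2"},
)
resp.raise_for_status()
```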
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5040/reactions", "total_count": 9, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5040/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8263
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8263/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8263/comments
https://api.github.com/repos/ollama/ollama/issues/8263/events
https://github.com/ollama/ollama/issues/8263
2,761,911,580
I_kwDOJ0Z1Ps6kn20c
8,263
Ollama with AMD GPU Issue
{ "login": "kannszzz", "id": 23491305, "node_id": "MDQ6VXNlcjIzNDkxMzA1", "avatar_url": "https://avatars.githubusercontent.com/u/23491305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kannszzz", "html_url": "https://github.com/kannszzz", "followers_url": "https://api.github.com/users/kannszzz/followers", "following_url": "https://api.github.com/users/kannszzz/following{/other_user}", "gists_url": "https://api.github.com/users/kannszzz/gists{/gist_id}", "starred_url": "https://api.github.com/users/kannszzz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kannszzz/subscriptions", "organizations_url": "https://api.github.com/users/kannszzz/orgs", "repos_url": "https://api.github.com/users/kannszzz/repos", "events_url": "https://api.github.com/users/kannszzz/events{/privacy}", "received_events_url": "https://api.github.com/users/kannszzz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-12-28T20:22:58
2024-12-29T03:21:37
2024-12-29T03:21:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
[Ollama.log](https://github.com/user-attachments/files/18268220/Ollama.log) ### What is the issue? Environment: Debian 12 virtualized on Proxmox with GPU passthrough. GPU: 6650 XT (Unsupported), using Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0". Systemd log:

```
Dec 28 15:03:19 AI-ML ollama[2378]: time=2024-12-28T15:03:19.515-05:00 level=INFO source=routes.go:1310 msg="Listening on [::]:11434 (version 0.5.4)"
Dec 28 15:03:19 AI-ML ollama[2378]: time=2024-12-28T15:03:19.516-05:00 level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v11_avx
Dec 28 15:03:19 AI-ML ollama[2378]: time=2024-12-28T15:03:19.516-05:00 level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v12_avx
Dec 28 15:03:19 AI-ML ollama[2378]: time=2024-12-28T15:03:19.516-05:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu_avx2 rocm_avx cpu cpu_avx]"
Dec 28 15:03:19 AI-ML ollama[2378]: time=2024-12-28T15:03:19.516-05:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
Dec 28 15:03:19 AI-ML ollama[2378]: time=2024-12-28T15:03:19.523-05:00 level=INFO source=amd_linux.go:391 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=10.3.0
Dec 28 15:03:19 AI-ML ollama[2378]: time=2024-12-28T15:03:19.527-05:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=rocm variant="" compute=gfx1032 driver=6.10 name=1002:73ef total="8.0 GiB" available="7.9 GiB"
```

When I try running any model, even the lightest one (smol), I get the following error:

```
Error: llama runner process has terminated: signal: illegal instruction
```

I checked the journalctl log:

```
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] HEAD   /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] GET    /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] POST   /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] GET    /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] GET    /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] GET    / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] GET    /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] GET    /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] HEAD   / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] HEAD   /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: [GIN-debug] HEAD   /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
Dec 28 15:17:30 AI-ML ollama[788]: time=2024-12-28T15:17:30.915-05:00 level=INFO source=routes.go:1310 msg="Listening on [::]:11434 (version 0.5.4)"
Dec 28 15:17:30 AI-ML ollama[788]: time=2024-12-28T15:17:30.917-05:00 level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v11_avx
Dec 28 15:17:30 AI-ML ollama[788]: time=2024-12-28T15:17:30.917-05:00 level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v12_avx
Dec 28 15:17:30 AI-ML ollama[788]: time=2024-12-28T15:17:30.917-05:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[rocm_avx cpu cpu_avx cpu_avx2]"
Dec 28 15:17:30 AI-ML ollama[788]: time=2024-12-28T15:17:30.918-05:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
Dec 28 15:17:30 AI-ML ollama[788]: time=2024-12-28T15:17:30.928-05:00 level=INFO source=amd_linux.go:391 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=10.3.0
Dec 28 15:17:30 AI-ML ollama[788]: time=2024-12-28T15:17:30.933-05:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=rocm variant="" compute=gfx1032 driver=6.10 name=1002:73ef total="8.0 GiB" >
Dec 28 15:18:14 AI-ML ollama[788]: [GIN] 2024/12/28 - 15:18:14 | 200 | 811.616µs | 127.0.0.1 | HEAD "/"
Dec 28 15:18:14 AI-ML ollama[788]: [GIN] 2024/12/28 - 15:18:14 | 200 | 23.292812ms | 127.0.0.1 | POST "/api/show"
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.092-05:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/s>
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.092-05:00 level=INFO source=server.go:104 msg="system memory" total="31.3 GiB" free="29.5 GiB" free_swap="0 B"
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.093-05:00 level=INFO source=memory.go:356 msg="offload to rocm" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[>
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.093-05:00 level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v11_avx
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.093-05:00 level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v12_avx
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.093-05:00 level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v11_avx
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.093-05:00 level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v12_avx
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.093-05:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server runner --model /usr>
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.094-05:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.094-05:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.095-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Dec 28 15:18:14 AI-ML ollama[788]: time=2024-12-28T15:18:14.346-05:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: signal: illegal instruction"
Dec 28 15:18:14 AI-ML ollama[788]: [GIN] 2024/12/28 - 15:18:14 | 500 | 305.479625ms | 127.0.0.1 | POST "/api/generate"
Dec 28 15:18:19 AI-ML ollama[788]: time=2024-12-28T15:18:19.347-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.000717787 model=/usr/share/ollama/.ollama/models/bl>
Dec 28 15:18:19 AI-ML ollama[788]: time=2024-12-28T15:18:19.597-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.250446487 model=/usr/share/ollama/.ollama/models/bl>
Dec 28 15:18:19 AI-ML ollama[788]: time=2024-12-28T15:18:19.846-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.500215086 model=/usr/share/ollama/.ollama/models/bl
```

Not sure if I am missing any config, but it seems like VRAM is running out, even though the model RAM for llama3.2 is only around 3-4 GB. ### OS Linux ### GPU AMD ### CPU Intel ### Ollama version 0.5.4
{ "login": "kannszzz", "id": 23491305, "node_id": "MDQ6VXNlcjIzNDkxMzA1", "avatar_url": "https://avatars.githubusercontent.com/u/23491305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kannszzz", "html_url": "https://github.com/kannszzz", "followers_url": "https://api.github.com/users/kannszzz/followers", "following_url": "https://api.github.com/users/kannszzz/following{/other_user}", "gists_url": "https://api.github.com/users/kannszzz/gists{/gist_id}", "starred_url": "https://api.github.com/users/kannszzz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kannszzz/subscriptions", "organizations_url": "https://api.github.com/users/kannszzz/orgs", "repos_url": "https://api.github.com/users/kannszzz/repos", "events_url": "https://api.github.com/users/kannszzz/events{/privacy}", "received_events_url": "https://api.github.com/users/kannszzz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8263/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4592
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4592/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4592/comments
https://api.github.com/repos/ollama/ollama/issues/4592/events
https://github.com/ollama/ollama/issues/4592
2,313,244,559
I_kwDOJ0Z1Ps6J4U-P
4,592
Mistral-7B instruct v3 FP16 Please
{ "login": "Donno191", "id": 10705947, "node_id": "MDQ6VXNlcjEwNzA1OTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/10705947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Donno191", "html_url": "https://github.com/Donno191", "followers_url": "https://api.github.com/users/Donno191/followers", "following_url": "https://api.github.com/users/Donno191/following{/other_user}", "gists_url": "https://api.github.com/users/Donno191/gists{/gist_id}", "starred_url": "https://api.github.com/users/Donno191/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Donno191/subscriptions", "organizations_url": "https://api.github.com/users/Donno191/orgs", "repos_url": "https://api.github.com/users/Donno191/repos", "events_url": "https://api.github.com/users/Donno191/events{/privacy}", "received_events_url": "https://api.github.com/users/Donno191/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
1
2024-05-23T15:41:26
2024-05-23T15:43:17
2024-05-23T15:43:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Mistral-7B instruct v3 FP16 Please - https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3/tree/main
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4592/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4425
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4425/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4425/comments
https://api.github.com/repos/ollama/ollama/issues/4425/events
https://github.com/ollama/ollama/issues/4425
2,294,928,917
I_kwDOJ0Z1Ps6IydYV
4,425
joanfm / jina-embeddings-v2-base-en and -de fail with error code 500
{ "login": "qsdhj", "id": 166700412, "node_id": "U_kgDOCe-lfA", "avatar_url": "https://avatars.githubusercontent.com/u/166700412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qsdhj", "html_url": "https://github.com/qsdhj", "followers_url": "https://api.github.com/users/qsdhj/followers", "following_url": "https://api.github.com/users/qsdhj/following{/other_user}", "gists_url": "https://api.github.com/users/qsdhj/gists{/gist_id}", "starred_url": "https://api.github.com/users/qsdhj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qsdhj/subscriptions", "organizations_url": "https://api.github.com/users/qsdhj/orgs", "repos_url": "https://api.github.com/users/qsdhj/repos", "events_url": "https://api.github.com/users/qsdhj/events{/privacy}", "received_events_url": "https://api.github.com/users/qsdhj/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
6
2024-05-14T09:30:50
2024-08-01T22:39:17
2024-08-01T22:39:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I tried to integrate the German embedding model **joanfm/jina-embeddings-v2-base-de** into my LlamaIndex RAG application. During the creation of the embeddings, the Ollama process fails with **error 500: llama runner process has terminated: exit status 0xc0000409** when calling:

```python
pass_embedding = Settings.embed_model.get_text_embedding_batch(
    ["This is a passage!", "This is another passage"], show_progress=True
)
```

```python
ValueError                                Traceback (most recent call last)
Cell In[16], line 2
      1 # Test the embedding model
----> 2 pass_embedding = Settings.embed_model.get_text_embedding_batch(
      3     ["This is a passage!", "This is another passage"], show_progress=True
      4 )
      5 print(pass_embedding)
      7 query_embedding = Settings.embed_model.get_query_embedding("Where is blue?")

File c:\Users\Stefan.Mueller\AppData\Local\miniconda3\envs\llamaindex\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py:274, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    270 self.span_enter(
    271     id_=id_, bound_args=bound_args, instance=instance, parent_id=parent_id
    272 )
    273 try:
--> 274     result = func(*args, **kwargs)
    275 except BaseException as e:
    276     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File c:\Users\Stefan.Mueller\AppData\Local\miniconda3\envs\llamaindex\Lib\site-packages\llama_index\core\base\embeddings\base.py:331, in BaseEmbedding.get_text_embedding_batch(self, texts, show_progress, **kwargs)
    322 dispatch_event(
    323     EmbeddingStartEvent(
    324         model_dict=self.to_dict(),
    325     )
    326 )
...
File c:\Users\Stefan.Mueller\AppData\Local\miniconda3\envs\llamaindex\Lib\site-packages\llama_index\embeddings\ollama\base.py:100
    100 )
    102 try:
    103     return response.json()["embedding"]
```

With **mxbai-embed-large:latest** this works without an error. ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.37
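To isolate whether the 500 comes from the model runner rather than the LlamaIndex wrapper, here is a minimal reproduction sketch (assumptions: a local server on the default port, and the model pulled under a tag such as jina/jina-embeddings-v2-base-de, which may differ from the reporter's setup) that calls the embeddings endpoint directly:

```python
# Call Ollama's embeddings endpoint directly, bypassing LlamaIndex; a 500
# here points at the runner, a 200 points at the client-side integration.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "jina/jina-embeddings-v2-base-de",  # assumed model tag
          "prompt": "This is a passage!"},
    timeout=60,
)
print(resp.status_code)
print(resp.json() if resp.ok else resp.text)
```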
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4425/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/424
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/424/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/424/comments
https://api.github.com/repos/ollama/ollama/issues/424/events
https://github.com/ollama/ollama/issues/424
1,868,175,503
I_kwDOJ0Z1Ps5vWhiP
424
Error: Head "http://localhost:11434/": dial tcp: lookup localhost: no such host
{ "login": "DreamDevourer", "id": 24636471, "node_id": "MDQ6VXNlcjI0NjM2NDcx", "avatar_url": "https://avatars.githubusercontent.com/u/24636471?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DreamDevourer", "html_url": "https://github.com/DreamDevourer", "followers_url": "https://api.github.com/users/DreamDevourer/followers", "following_url": "https://api.github.com/users/DreamDevourer/following{/other_user}", "gists_url": "https://api.github.com/users/DreamDevourer/gists{/gist_id}", "starred_url": "https://api.github.com/users/DreamDevourer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DreamDevourer/subscriptions", "organizations_url": "https://api.github.com/users/DreamDevourer/orgs", "repos_url": "https://api.github.com/users/DreamDevourer/repos", "events_url": "https://api.github.com/users/DreamDevourer/events{/privacy}", "received_events_url": "https://api.github.com/users/DreamDevourer/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-08-26T17:04:59
2023-08-26T19:00:34
2023-08-26T18:59:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This is very odd, but after updating to the latest v0.0.16, this error started appearing whenever I use ollama. For example, if I try to run a simple "ollama list", this shows up: Error: Head "http://localhost:11434/": dial tcp: lookup localhost: no such host. I've cleaned out any DNS traces, the hosts file is untouched, and there are no firewall conflicts. What could be causing this issue? Is it only happening on my end?
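The "lookup localhost: no such host" message means the client is failing to resolve the name "localhost" at all, which points at the system resolver rather than Ollama itself. A workaround sketch (assuming the server is running normally): point the client at the loopback IP via OLLAMA_HOST, which the CLI honors, so no name lookup is needed:

```python
# Run the CLI against 127.0.0.1 directly, sidestepping a broken resolver.
import os
import subprocess

env = dict(os.environ, OLLAMA_HOST="127.0.0.1:11434")
subprocess.run(["ollama", "list"], env=env, check=True)
```

If this works while plain `ollama list` fails, the fix belongs in the machine's DNS/hosts configuration, not in Ollama.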
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/424/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/424/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/958
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/958/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/958/comments
https://api.github.com/repos/ollama/ollama/issues/958/events
https://github.com/ollama/ollama/pull/958
1,971,409,643
PR_kwDOJ0Z1Ps5eSIfZ
958
append LD_LIBRARY_PATH
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-10-31T22:55:25
2023-11-01T15:30:39
2023-11-01T15:30:38
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/958", "html_url": "https://github.com/ollama/ollama/pull/958", "diff_url": "https://github.com/ollama/ollama/pull/958.diff", "patch_url": "https://github.com/ollama/ollama/pull/958.patch", "merged_at": "2023-11-01T15:30:38" }
Only append to LD_LIBRARY_PATH in case it's already set. Related: #758
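For illustration, the pattern this PR implements, sketched in Python (the actual change is in Ollama's Go code, so the names here are illustrative only): extend LD_LIBRARY_PATH only when the caller already set it, so no empty leading entry is ever introduced.

```python
# Append to LD_LIBRARY_PATH when it exists; otherwise set it outright, so we
# never produce a value like ":/usr/local/lib/ollama" with an empty entry.
import os

def with_library_path(env: dict[str, str], lib_dir: str) -> dict[str, str]:
    env = dict(env)
    existing = env.get("LD_LIBRARY_PATH")
    env["LD_LIBRARY_PATH"] = f"{existing}:{lib_dir}" if existing else lib_dir
    return env

print(with_library_path({}, "/usr/local/lib/ollama"))
print(with_library_path({"LD_LIBRARY_PATH": "/opt/rocm/lib"}, "/usr/local/lib/ollama"))
```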
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/958/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3692
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3692/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3692/comments
https://api.github.com/repos/ollama/ollama/issues/3692/events
https://github.com/ollama/ollama/issues/3692
2,247,394,328
I_kwDOJ0Z1Ps6F9IQY
3,692
How do I get sentence-transformers/all-mpnet-base-v2 in Ollama?
{ "login": "Kanishk-Kumar", "id": 45518770, "node_id": "MDQ6VXNlcjQ1NTE4Nzcw", "avatar_url": "https://avatars.githubusercontent.com/u/45518770?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kanishk-Kumar", "html_url": "https://github.com/Kanishk-Kumar", "followers_url": "https://api.github.com/users/Kanishk-Kumar/followers", "following_url": "https://api.github.com/users/Kanishk-Kumar/following{/other_user}", "gists_url": "https://api.github.com/users/Kanishk-Kumar/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kanishk-Kumar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kanishk-Kumar/subscriptions", "organizations_url": "https://api.github.com/users/Kanishk-Kumar/orgs", "repos_url": "https://api.github.com/users/Kanishk-Kumar/repos", "events_url": "https://api.github.com/users/Kanishk-Kumar/events{/privacy}", "received_events_url": "https://api.github.com/users/Kanishk-Kumar/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
2
2024-04-17T05:19:36
2024-04-29T12:54:51
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? I'd like to use sentence-transformers/all-mpnet-base-v2 for embeddings. Thanks.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3692/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/3692/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6259
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6259/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6259/comments
https://api.github.com/repos/ollama/ollama/issues/6259/events
https://github.com/ollama/ollama/issues/6259
2,456,339,263
I_kwDOJ0Z1Ps6SaMM_
6,259
Inference fails with "llama_get_logits_ith: invalid logits id 7, reason: no logits"
{ "login": "yurivict", "id": 271906, "node_id": "MDQ6VXNlcjI3MTkwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yurivict", "html_url": "https://github.com/yurivict", "followers_url": "https://api.github.com/users/yurivict/followers", "following_url": "https://api.github.com/users/yurivict/following{/other_user}", "gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}", "starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurivict/subscriptions", "organizations_url": "https://api.github.com/users/yurivict/orgs", "repos_url": "https://api.github.com/users/yurivict/repos", "events_url": "https://api.github.com/users/yurivict/events{/privacy}", "received_events_url": "https://api.github.com/users/yurivict/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
2024-08-08T17:56:07
2024-08-09T19:14:03
2024-08-09T19:14:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Here is the error log:

```
[GIN] 2024/08/07 - 10:01:31 | 200 | 7.589808394s | 127.0.0.1 | POST "/api/chat"
time=2024-08-07T10:01:31.521-07:00 level=DEBUG source=sched.go:462 msg="context for request finished"
time=2024-08-07T10:01:31.521-07:00 level=DEBUG source=sched.go:334 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 duration=5m0s
time=2024-08-07T10:01:31.521-07:00 level=DEBUG source=sched.go:352 msg="after processing request finished event" modelPath=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 refCount=0
time=2024-08-07T10:01:52.804-07:00 level=DEBUG source=sched.go:571 msg="evaluating already loaded" model=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1 tid="0x236209412000" timestamp=1723050112
time=2024-08-07T10:01:52.854-07:00 level=DEBUG source=routes.go:1346 msg="chat request" images=0 prompt="[INST] Say something.[/INST] "
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=2 tid="0x236209412000" timestamp=1723050112
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=3 tid="0x236209412000" timestamp=1723050112
DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=8 slot_id=0 task_id=3 tid="0x236209412000" timestamp=1723050112
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=3 tid="0x236209412000" timestamp=1723050112
llama_get_logits_ith: invalid logits id 7, reason: no logits
time=2024-08-07T10:01:53.403-07:00 level=DEBUG source=server.go:1048 msg="stopping llama server"
time=2024-08-07T10:01:53.403-07:00 level=DEBUG source=server.go:1054 msg="waiting for llama server to exit"
time=2024-08-07T10:01:53.951-07:00 level=DEBUG source=server.go:1058 msg="llama server stopped"
```

The llama-cpp project maintainers seem to be puzzled by this error: https://github.com/ggerganov/llama.cpp/issues/8911 ### OS _No response_ ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.4
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6259/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7816
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7816/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7816/comments
https://api.github.com/repos/ollama/ollama/issues/7816/events
https://github.com/ollama/ollama/issues/7816
2,687,774,719
I_kwDOJ0Z1Ps6gNC__
7,816
I import an IQ4_XS model but get an IQ1_M
{ "login": "CberYellowstone", "id": 37031767, "node_id": "MDQ6VXNlcjM3MDMxNzY3", "avatar_url": "https://avatars.githubusercontent.com/u/37031767?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CberYellowstone", "html_url": "https://github.com/CberYellowstone", "followers_url": "https://api.github.com/users/CberYellowstone/followers", "following_url": "https://api.github.com/users/CberYellowstone/following{/other_user}", "gists_url": "https://api.github.com/users/CberYellowstone/gists{/gist_id}", "starred_url": "https://api.github.com/users/CberYellowstone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CberYellowstone/subscriptions", "organizations_url": "https://api.github.com/users/CberYellowstone/orgs", "repos_url": "https://api.github.com/users/CberYellowstone/repos", "events_url": "https://api.github.com/users/CberYellowstone/events{/privacy}", "received_events_url": "https://api.github.com/users/CberYellowstone/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6573197867, "node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw", "url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com", "name": "ollama.com", "color": "ffffff", "default": false, "description": "" } ]
closed
false
null
[]
null
13
2024-11-24T14:06:50
2024-12-23T23:54:48
2024-11-24T18:33:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? As the title says, I imported a custom GGUF model of the IQ4_XS quantization type. But after importing it, `ollama show` displays it as IQ1_M. Is this behavior expected? I ask because I saw in previous issues that support for IQ4_XS has been added, so this confuses me. ![image](https://github.com/user-attachments/assets/aab46156-2b61-4e36-a24f-c512ea3b7ce1) ![image](https://github.com/user-attachments/assets/aef1b4a8-4e46-414f-9c0f-1547a905b510) [the gguf file](https://huggingface.co/SakuraLLM/Sakura-14B-Qwen2.5-v1.0-GGUF/blob/main/sakura-14b-qwen2.5-v1.0-iq4xs.gguf) ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.4.4
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7816/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7726
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7726/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7726/comments
https://api.github.com/repos/ollama/ollama/issues/7726/events
https://github.com/ollama/ollama/issues/7726
2,668,392,205
I_kwDOJ0Z1Ps6fDG8N
7,726
Proxy does not work for ollama, but does work for curl
{ "login": "lk-1984", "id": 105721994, "node_id": "U_kgDOBk0wig", "avatar_url": "https://avatars.githubusercontent.com/u/105721994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lk-1984", "html_url": "https://github.com/lk-1984", "followers_url": "https://api.github.com/users/lk-1984/followers", "following_url": "https://api.github.com/users/lk-1984/following{/other_user}", "gists_url": "https://api.github.com/users/lk-1984/gists{/gist_id}", "starred_url": "https://api.github.com/users/lk-1984/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lk-1984/subscriptions", "organizations_url": "https://api.github.com/users/lk-1984/orgs", "repos_url": "https://api.github.com/users/lk-1984/repos", "events_url": "https://api.github.com/users/lk-1984/events{/privacy}", "received_events_url": "https://api.github.com/users/lk-1984/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-11-18T12:29:24
2024-11-18T13:08:50
2024-11-18T13:08:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Ollama does not work with HTTPS_PROXY set.

```
foo@FOOBAR ~ % ollama pull llama3.2
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama3.2/manifests/latest": dial tcp 104.21.75.227:443: i/o timeout
```

curl works in the same terminal with the same HTTPS_PROXY value.

```
foobar@FOOBAR~ % curl https://registry.ollama.ai/v2/library/llama3.2/manifests/latest
{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"digest":"sha256:34bb5ab01051a11372a91f95f3fbbc51173eed8e7f13ec395b9ae9b8bd0e242b","mediaType":"application/vnd.docker.container.image.v1+json","size":561},"layers":[{"digest":"sha256:dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff","mediaType":"application/vnd.ollama.image.model","size":2019377376},{"digest":"sha256:966de95ca8a62200913e3f8bfbf84c8494536f1b94b49166851e76644e966396","mediaType":"application/vnd.ollama.image.template","size":1429},{"digest":"sha256:fcc5a6bec9daf9b561a68827b67ab6088e1dba9d1fa2a50d7bbcc8384e0a265d","mediaType":"application/vnd.ollama.image.license","size":7711},{"digest":"sha256:a70ff7e570d97baaf4e62ac6e6ad9975e04caa6d900d3742d37698494479e0cd","mediaType":"application/vnd.ollama.image.license","size":6016},{"digest":"sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb","mediaType":"application/vnd.ollama.image.params","size":96}]}
```

Note: the company proxy starts with "http://" even for HTTPS_PROXY. ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.4.2
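One likely explanation on macOS: the server usually runs inside the menu-bar app, which does not inherit variables exported in a terminal, so curl can see HTTPS_PROXY while the Ollama server process cannot. A probe sketch (assumes the requests library and the http:// proxy scheme noted above) to confirm the proxy itself can reach the registry from Python the same way curl does:

```python
# Fetch the manifest through the same proxy the shell uses; success here
# while `ollama pull` times out suggests the server process never saw the
# proxy variables, rather than the proxy being broken.
import os
import requests

proxy = os.environ.get("HTTPS_PROXY")
proxies = {"https": proxy} if proxy else None
url = "https://registry.ollama.ai/v2/library/llama3.2/manifests/latest"
resp = requests.get(url, proxies=proxies, timeout=10)
print(resp.status_code, len(resp.content), "bytes")
```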
{ "login": "lk-1984", "id": 105721994, "node_id": "U_kgDOBk0wig", "avatar_url": "https://avatars.githubusercontent.com/u/105721994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lk-1984", "html_url": "https://github.com/lk-1984", "followers_url": "https://api.github.com/users/lk-1984/followers", "following_url": "https://api.github.com/users/lk-1984/following{/other_user}", "gists_url": "https://api.github.com/users/lk-1984/gists{/gist_id}", "starred_url": "https://api.github.com/users/lk-1984/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lk-1984/subscriptions", "organizations_url": "https://api.github.com/users/lk-1984/orgs", "repos_url": "https://api.github.com/users/lk-1984/repos", "events_url": "https://api.github.com/users/lk-1984/events{/privacy}", "received_events_url": "https://api.github.com/users/lk-1984/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7726/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8261
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8261/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8261/comments
https://api.github.com/repos/ollama/ollama/issues/8261/events
https://github.com/ollama/ollama/issues/8261
2,761,628,815
I_kwDOJ0Z1Ps6kmxyP
8,261
Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address
{ "login": "davincitr", "id": 125030930, "node_id": "U_kgDOB3PSEg", "avatar_url": "https://avatars.githubusercontent.com/u/125030930?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davincitr", "html_url": "https://github.com/davincitr", "followers_url": "https://api.github.com/users/davincitr/followers", "following_url": "https://api.github.com/users/davincitr/following{/other_user}", "gists_url": "https://api.github.com/users/davincitr/gists{/gist_id}", "starred_url": "https://api.github.com/users/davincitr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davincitr/subscriptions", "organizations_url": "https://api.github.com/users/davincitr/orgs", "repos_url": "https://api.github.com/users/davincitr/repos", "events_url": "https://api.github.com/users/davincitr/events{/privacy}", "received_events_url": "https://api.github.com/users/davincitr/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
null
[]
null
4
2024-12-28T08:26:56
2025-01-15T11:47:10
2025-01-08T18:02:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello, I have tried everything on the internet; I even formatted my PC. Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted. (The second half of the message was originally the Turkish localization of this Windows error.) ![image](https://github.com/user-attachments/assets/0ab05a6b-ffaa-41e1-bc87-d70ad41a7747) ![image](https://github.com/user-attachments/assets/9f04a3ef-e92b-4c9a-a38d-4d772dc7002c) ![image](https://github.com/user-attachments/assets/b7005ccb-5f69-4740-b508-dd0eb6b6e804) ![image](https://github.com/user-attachments/assets/c263a6fd-39ac-4bcc-96ac-617124395f68) I tried killing it from the task bar too. I tried using other ports too. I added one named 0.0.0.0, but that didn't work either. ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.5.0 and 0.5.4
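That bind error means something is already listening on 11434, most often the Ollama tray app's own server, so a second `ollama serve` cannot start. A quick check sketch (assumption: the default host and port) before digging into firewall or port settings:

```python
# Probe whether 127.0.0.1:11434 is already accepting connections; if it is,
# quit the running instance (tray icon / Task Manager) before starting
# `ollama serve` manually.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    in_use = s.connect_ex(("127.0.0.1", 11434)) == 0
print("port 11434 is", "already in use" if in_use else "free")
```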
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8261/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8476
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8476/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8476/comments
https://api.github.com/repos/ollama/ollama/issues/8476/events
https://github.com/ollama/ollama/issues/8476
2,796,588,748
I_kwDOJ0Z1Ps6msI7M
8,476
Receiving a new error when trying to create a modelfile with the same code
{ "login": "indigotechtutorials", "id": 63070125, "node_id": "MDQ6VXNlcjYzMDcwMTI1", "avatar_url": "https://avatars.githubusercontent.com/u/63070125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/indigotechtutorials", "html_url": "https://github.com/indigotechtutorials", "followers_url": "https://api.github.com/users/indigotechtutorials/followers", "following_url": "https://api.github.com/users/indigotechtutorials/following{/other_user}", "gists_url": "https://api.github.com/users/indigotechtutorials/gists{/gist_id}", "starred_url": "https://api.github.com/users/indigotechtutorials/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/indigotechtutorials/subscriptions", "organizations_url": "https://api.github.com/users/indigotechtutorials/orgs", "repos_url": "https://api.github.com/users/indigotechtutorials/repos", "events_url": "https://api.github.com/users/indigotechtutorials/events{/privacy}", "received_events_url": "https://api.github.com/users/indigotechtutorials/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2025-01-18T02:21:02
2025-01-19T17:04:40
2025-01-19T17:04:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have an existing app which was working on an older version of Ollama. I was able to create the modelfile using a string with the "FROM model \n SYSTEM instructions" syntax. I am using the ruby-ollama API wrapper gem. The error I am seeing is: `neither 'from' or 'files' was specified`. I verified that I am passing in a valid prompt, but it is still not working. Any ideas why the newer Ollama would process the modelfile differently? I noticed there were some changes to the parser in a recent commit. ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
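The error text matches the reworked create endpoint: in recent releases, /api/create takes structured fields rather than a raw Modelfile string, so a wrapper that still posts a `modelfile` field gets `neither 'from' or 'files' was specified`. A sketch of the newer request shape (to the best of my reading of the current API docs; the model names below are placeholders):

```python
# Create a model via the structured /api/create payload: "from" replaces the
# Modelfile's FROM line and "system" replaces its SYSTEM line. The endpoint
# streams NDJSON status lines back.
import requests

resp = requests.post(
    "http://localhost:11434/api/create",
    json={
        "model": "my-assistant",                    # placeholder name
        "from": "llama3.2",                         # placeholder base model
        "system": "You are a helpful assistant.",   # placeholder instructions
    },
    stream=True,
)
for line in resp.iter_lines():
    print(line.decode())
```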
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8476/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8476/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4523
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4523/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4523/comments
https://api.github.com/repos/ollama/ollama/issues/4523/events
https://github.com/ollama/ollama/issues/4523
2,304,730,924
I_kwDOJ0Z1Ps6JX2cs
4,523
GGUF imported model crashes only in v0.1.38
{ "login": "mindspawn", "id": 5296802, "node_id": "MDQ6VXNlcjUyOTY4MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/5296802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mindspawn", "html_url": "https://github.com/mindspawn", "followers_url": "https://api.github.com/users/mindspawn/followers", "following_url": "https://api.github.com/users/mindspawn/following{/other_user}", "gists_url": "https://api.github.com/users/mindspawn/gists{/gist_id}", "starred_url": "https://api.github.com/users/mindspawn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mindspawn/subscriptions", "organizations_url": "https://api.github.com/users/mindspawn/orgs", "repos_url": "https://api.github.com/users/mindspawn/repos", "events_url": "https://api.github.com/users/mindspawn/events{/privacy}", "received_events_url": "https://api.github.com/users/mindspawn/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6947643302, "node_id": "LA_kwDOJ0Z1Ps8AAAABnhyfpg", "url": "https://api.github.com/repos/ollama/ollama/labels/create", "name": "create", "color": "b60205", "default": false, "description": "Issues relating to ollama create" } ]
closed
false
null
[]
null
5
2024-05-19T18:36:50
2024-06-30T23:19:01
2024-06-30T23:19:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? heegyu/EEVE-Korean-Instruct-10.8B-v1.0-GGUF worked in all prior versions of ollama. Since v0.1.38 it core dumps. I have temporarily reverted to v0.1.37 to resolve the issue. Any help is appreciated. ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.38
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/joshyan1/followers", "following_url": "https://api.github.com/users/joshyan1/following{/other_user}", "gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}", "starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions", "organizations_url": "https://api.github.com/users/joshyan1/orgs", "repos_url": "https://api.github.com/users/joshyan1/repos", "events_url": "https://api.github.com/users/joshyan1/events{/privacy}", "received_events_url": "https://api.github.com/users/joshyan1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4523/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4483
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4483/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4483/comments
https://api.github.com/repos/ollama/ollama/issues/4483/events
https://github.com/ollama/ollama/pull/4483
2,301,541,051
PR_kwDOJ0Z1Ps5vumh2
4,483
Don't return error on signal exit
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-05-16T23:26:28
2024-05-17T19:02:40
2024-05-17T18:41:57
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4483", "html_url": "https://github.com/ollama/ollama/pull/4483", "diff_url": "https://github.com/ollama/ollama/pull/4483.diff", "patch_url": "https://github.com/ollama/ollama/pull/4483.patch", "merged_at": "2024-05-17T18:41:57" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4483/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3904
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3904/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3904/comments
https://api.github.com/repos/ollama/ollama/issues/3904/events
https://github.com/ollama/ollama/issues/3904
2,262,742,129
I_kwDOJ0Z1Ps6G3rRx
3,904
Error: llama runner process no longer running: -1
{ "login": "parthV2", "id": 163822058, "node_id": "U_kgDOCcO56g", "avatar_url": "https://avatars.githubusercontent.com/u/163822058?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parthV2", "html_url": "https://github.com/parthV2", "followers_url": "https://api.github.com/users/parthV2/followers", "following_url": "https://api.github.com/users/parthV2/following{/other_user}", "gists_url": "https://api.github.com/users/parthV2/gists{/gist_id}", "starred_url": "https://api.github.com/users/parthV2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/parthV2/subscriptions", "organizations_url": "https://api.github.com/users/parthV2/orgs", "repos_url": "https://api.github.com/users/parthV2/repos", "events_url": "https://api.github.com/users/parthV2/events{/privacy}", "received_events_url": "https://api.github.com/users/parthV2/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
9
2024-04-25T06:01:58
2024-06-21T18:27:34
2024-05-03T05:41:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Was trying to run a fine-tuned version of llama2 with a 13.5 GB GGUF. ![Screenshot from 2024-04-25 11-30-55](https://github.com/ollama/ollama/assets/163822058/99f79650-123b-4c84-8de2-ad697d97002a) ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version v0.1.32
{ "login": "parthV2", "id": 163822058, "node_id": "U_kgDOCcO56g", "avatar_url": "https://avatars.githubusercontent.com/u/163822058?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parthV2", "html_url": "https://github.com/parthV2", "followers_url": "https://api.github.com/users/parthV2/followers", "following_url": "https://api.github.com/users/parthV2/following{/other_user}", "gists_url": "https://api.github.com/users/parthV2/gists{/gist_id}", "starred_url": "https://api.github.com/users/parthV2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/parthV2/subscriptions", "organizations_url": "https://api.github.com/users/parthV2/orgs", "repos_url": "https://api.github.com/users/parthV2/repos", "events_url": "https://api.github.com/users/parthV2/events{/privacy}", "received_events_url": "https://api.github.com/users/parthV2/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3904/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3942
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3942/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3942/comments
https://api.github.com/repos/ollama/ollama/issues/3942/events
https://github.com/ollama/ollama/pull/3942
2,265,618,138
PR_kwDOJ0Z1Ps5t1Xdq
3,942
Fix curl command in documentation
{ "login": "Isaakkamau", "id": 95031660, "node_id": "U_kgDOBaoRbA", "avatar_url": "https://avatars.githubusercontent.com/u/95031660?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Isaakkamau", "html_url": "https://github.com/Isaakkamau", "followers_url": "https://api.github.com/users/Isaakkamau/followers", "following_url": "https://api.github.com/users/Isaakkamau/following{/other_user}", "gists_url": "https://api.github.com/users/Isaakkamau/gists{/gist_id}", "starred_url": "https://api.github.com/users/Isaakkamau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Isaakkamau/subscriptions", "organizations_url": "https://api.github.com/users/Isaakkamau/orgs", "repos_url": "https://api.github.com/users/Isaakkamau/repos", "events_url": "https://api.github.com/users/Isaakkamau/events{/privacy}", "received_events_url": "https://api.github.com/users/Isaakkamau/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-04-26T11:50:11
2024-04-29T11:36:41
2024-04-29T11:36:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3942", "html_url": "https://github.com/ollama/ollama/pull/3942", "diff_url": "https://github.com/ollama/ollama/pull/3942.diff", "patch_url": "https://github.com/ollama/ollama/pull/3942.patch", "merged_at": null }
Explicitly set the HTTP method to POST using the -X flag to avoid errors when creating a new model from a `modelfile`.
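As a sketch of the command this PR description is proposing (endpoint and payload follow the create docs of that era, with placeholder values; note that curl already defaults to POST whenever `-d` is supplied, so `-X POST` is belt-and-braces rather than strictly required):

```bash
# explicit method flag, as the PR suggests
curl -X POST http://localhost:11434/api/create -d '{
  "name": "mario",
  "modelfile": "FROM llama2\nSYSTEM You are Mario from Super Mario Bros."
}'
```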
{ "login": "Isaakkamau", "id": 95031660, "node_id": "U_kgDOBaoRbA", "avatar_url": "https://avatars.githubusercontent.com/u/95031660?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Isaakkamau", "html_url": "https://github.com/Isaakkamau", "followers_url": "https://api.github.com/users/Isaakkamau/followers", "following_url": "https://api.github.com/users/Isaakkamau/following{/other_user}", "gists_url": "https://api.github.com/users/Isaakkamau/gists{/gist_id}", "starred_url": "https://api.github.com/users/Isaakkamau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Isaakkamau/subscriptions", "organizations_url": "https://api.github.com/users/Isaakkamau/orgs", "repos_url": "https://api.github.com/users/Isaakkamau/repos", "events_url": "https://api.github.com/users/Isaakkamau/events{/privacy}", "received_events_url": "https://api.github.com/users/Isaakkamau/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3942/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7140
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7140/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7140/comments
https://api.github.com/repos/ollama/ollama/issues/7140/events
https://github.com/ollama/ollama/pull/7140
2,573,673,780
PR_kwDOJ0Z1Ps59-U2Q
7,140
llama: cgo ggml
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-10-08T16:23:06
2024-10-29T16:52:05
2024-10-29T16:52:05
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7140", "html_url": "https://github.com/ollama/ollama/pull/7140", "diff_url": "https://github.com/ollama/ollama/pull/7140.diff", "patch_url": "https://github.com/ollama/ollama/pull/7140.patch", "merged_at": null }
This builds a ~minimal cgo wrapper over the ggml APIs. Replaces #7103 on main.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7140/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5079
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5079/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5079/comments
https://api.github.com/repos/ollama/ollama/issues/5079/events
https://github.com/ollama/ollama/pull/5079
2,355,789,462
PR_kwDOJ0Z1Ps5ynH7F
5,079
Add Chinese translation of README
{ "login": "sumingcheng", "id": 21992204, "node_id": "MDQ6VXNlcjIxOTkyMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/21992204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sumingcheng", "html_url": "https://github.com/sumingcheng", "followers_url": "https://api.github.com/users/sumingcheng/followers", "following_url": "https://api.github.com/users/sumingcheng/following{/other_user}", "gists_url": "https://api.github.com/users/sumingcheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/sumingcheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sumingcheng/subscriptions", "organizations_url": "https://api.github.com/users/sumingcheng/orgs", "repos_url": "https://api.github.com/users/sumingcheng/repos", "events_url": "https://api.github.com/users/sumingcheng/events{/privacy}", "received_events_url": "https://api.github.com/users/sumingcheng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-06-16T14:03:26
2024-11-21T08:33:00
2024-11-21T08:33:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5079", "html_url": "https://github.com/ollama/ollama/pull/5079", "diff_url": "https://github.com/ollama/ollama/pull/5079.diff", "patch_url": "https://github.com/ollama/ollama/pull/5079.patch", "merged_at": null }
This pull request adds a Chinese translation of the README file to help native Chinese speakers better understand the project.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5079/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5079/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2697
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2697/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2697/comments
https://api.github.com/repos/ollama/ollama/issues/2697/events
https://github.com/ollama/ollama/issues/2697
2,150,060,376
I_kwDOJ0Z1Ps6AJ1FY
2,697
Unable to build Ollama on Cluster
{ "login": "Anirudh257", "id": 16001446, "node_id": "MDQ6VXNlcjE2MDAxNDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/16001446?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Anirudh257", "html_url": "https://github.com/Anirudh257", "followers_url": "https://api.github.com/users/Anirudh257/followers", "following_url": "https://api.github.com/users/Anirudh257/following{/other_user}", "gists_url": "https://api.github.com/users/Anirudh257/gists{/gist_id}", "starred_url": "https://api.github.com/users/Anirudh257/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Anirudh257/subscriptions", "organizations_url": "https://api.github.com/users/Anirudh257/orgs", "repos_url": "https://api.github.com/users/Anirudh257/repos", "events_url": "https://api.github.com/users/Anirudh257/events{/privacy}", "received_events_url": "https://api.github.com/users/Anirudh257/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
8
2024-02-22T22:37:54
2024-03-12T14:50:16
2024-03-12T14:50:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, thanks for this great work. I am trying to build Ollama on my cluster, where I don't have administrative access. My cluster has the following configuration: ``` LSB Version: :core-4.1-amd64:core-4.1-noarch Distributor ID: CentOS Description: CentOS Linux release 7.9.2009 (Core) Release: 7.9.2009 Codename: Core ``` I followed these steps: a) Cloned the ollama repo using: `git clone https://github.com/ollama/ollama` b) Following https://github.com/ollama/ollama/blob/main/docs/development.md, I ran `go generate ./...` but I get the error: ![image](https://github.com/ollama/ollama/assets/16001446/5651b8bf-8722-47f3-bff3-0753bcdfb9f2) I tried cloning the repo with recursive submodules, but it didn't help. I also tried older commits, but that wasn't helpful either.
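For reference, a sketch of the build sequence from that era's development.md, assuming Go and a C/C++ toolchain are available on the cluster (the `--recurse-submodules` flag matters because `go generate` built the bundled llama.cpp submodule at the time):

```bash
# clone with submodules so go generate can find llama.cpp
git clone --recurse-submodules https://github.com/ollama/ollama
cd ollama
# or, if already cloned without submodules:
git submodule update --init --recursive
go generate ./...
go build .
```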
{ "login": "Anirudh257", "id": 16001446, "node_id": "MDQ6VXNlcjE2MDAxNDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/16001446?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Anirudh257", "html_url": "https://github.com/Anirudh257", "followers_url": "https://api.github.com/users/Anirudh257/followers", "following_url": "https://api.github.com/users/Anirudh257/following{/other_user}", "gists_url": "https://api.github.com/users/Anirudh257/gists{/gist_id}", "starred_url": "https://api.github.com/users/Anirudh257/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Anirudh257/subscriptions", "organizations_url": "https://api.github.com/users/Anirudh257/orgs", "repos_url": "https://api.github.com/users/Anirudh257/repos", "events_url": "https://api.github.com/users/Anirudh257/events{/privacy}", "received_events_url": "https://api.github.com/users/Anirudh257/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2697/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4810
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4810/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4810/comments
https://api.github.com/repos/ollama/ollama/issues/4810/events
https://github.com/ollama/ollama/issues/4810
2,333,112,595
I_kwDOJ0Z1Ps6LEHkT
4,810
"Server disconnected without sending a response" after ~60seconds.
{ "login": "michaelgloeckner", "id": 56082327, "node_id": "MDQ6VXNlcjU2MDgyMzI3", "avatar_url": "https://avatars.githubusercontent.com/u/56082327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelgloeckner", "html_url": "https://github.com/michaelgloeckner", "followers_url": "https://api.github.com/users/michaelgloeckner/followers", "following_url": "https://api.github.com/users/michaelgloeckner/following{/other_user}", "gists_url": "https://api.github.com/users/michaelgloeckner/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelgloeckner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelgloeckner/subscriptions", "organizations_url": "https://api.github.com/users/michaelgloeckner/orgs", "repos_url": "https://api.github.com/users/michaelgloeckner/repos", "events_url": "https://api.github.com/users/michaelgloeckner/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelgloeckner/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
10
2024-06-04T10:06:11
2024-06-19T07:40:51
2024-06-06T13:06:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I run the mixtral model using api/generate. If I send a bigger prompt it returns "Server disconnected without sending a response." I checked the ollama logs and see: [GIN] 2024/06/04 - 09:36:43 | 500 | 59.693208463s | 10.0.101.220 | POST "/api/generate" Is there a way to increase this kind of internal timeout? I already tried setting a timeout, but it has no impact on this: `from ollama import Client; client = Client(host='OllamaServer', timeout=120)` ### OS Linux, Containerd, eks ### GPU Nvidia ### CPU AMD ### Ollama version 0.1.40
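If the disconnect comes from an idle-timeout in something in front of Ollama (a guess given the EKS setup, not confirmed from the logs), a streamed request keeps bytes flowing and is less likely to trip such a timer; a sketch with curl, reusing the reported host name (`-N` just disables curl's own buffering):

```bash
# stream is the default for /api/generate; shown explicitly for clarity
curl -N http://OllamaServer:11434/api/generate -d '{
  "model": "mixtral",
  "prompt": "your long prompt here",
  "stream": true
}'
```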
{ "login": "michaelgloeckner", "id": 56082327, "node_id": "MDQ6VXNlcjU2MDgyMzI3", "avatar_url": "https://avatars.githubusercontent.com/u/56082327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelgloeckner", "html_url": "https://github.com/michaelgloeckner", "followers_url": "https://api.github.com/users/michaelgloeckner/followers", "following_url": "https://api.github.com/users/michaelgloeckner/following{/other_user}", "gists_url": "https://api.github.com/users/michaelgloeckner/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelgloeckner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelgloeckner/subscriptions", "organizations_url": "https://api.github.com/users/michaelgloeckner/orgs", "repos_url": "https://api.github.com/users/michaelgloeckner/repos", "events_url": "https://api.github.com/users/michaelgloeckner/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelgloeckner/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4810/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/496
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/496/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/496/comments
https://api.github.com/repos/ollama/ollama/issues/496/events
https://github.com/ollama/ollama/issues/496
1,887,542,945
I_kwDOJ0Z1Ps5wgZ6h
496
CodeLlama tokenizer `<FILL_ME>` token support
{ "login": "regularfry", "id": 39277, "node_id": "MDQ6VXNlcjM5Mjc3", "avatar_url": "https://avatars.githubusercontent.com/u/39277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/regularfry", "html_url": "https://github.com/regularfry", "followers_url": "https://api.github.com/users/regularfry/followers", "following_url": "https://api.github.com/users/regularfry/following{/other_user}", "gists_url": "https://api.github.com/users/regularfry/gists{/gist_id}", "starred_url": "https://api.github.com/users/regularfry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/regularfry/subscriptions", "organizations_url": "https://api.github.com/users/regularfry/orgs", "repos_url": "https://api.github.com/users/regularfry/repos", "events_url": "https://api.github.com/users/regularfry/events{/privacy}", "received_events_url": "https://api.github.com/users/regularfry/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
3
2023-09-08T12:00:25
2024-07-16T21:37:33
2024-07-16T21:37:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It might be that I just can't find the right setting to make this work, but CodeLlama's upstream model docs refer to a [fill_token](https://huggingface.co/docs/transformers/main/model_doc/code_llama#transformers.CodeLlamaTokenizer.fill_token) for splitting the input and constructing the prompt for code infill. I can't seem to make this work on any of the `codellama:7b` variants using that token, whereas the HF hosted version of 13b seems to support it fine. They give this example prompt for using `<FILL_ME>`: ``` def remove_non_ascii(s: str) -> str: """<FILL_ME> return result ``` Here's the output from the HF-hosted 13b-instruct version: ``` def remove_non_ascii(s: str) -> str: """Remove non-ASCII characters from a string.""" return "".join(i for i in s if ord(i) < 128) ``` Here's the output for local 7b: Sure! Here's the code to remove non-ASCII characters from a string in Python: ```python def remove_non_ascii(s): # Create a new string with only ASCII characters result = "" for char in s: if ord(char) < 128: result += char return result ``` This function takes a string as input and returns a new string that contains only ASCII characters. The `ord()` function is used to convert each character to its corresponding Unicode code point, which allows us to check if the character is in the ASCII range. If it is not, then we skip adding it to the result string. The code is ok (other than that it ignored the multiline docstring prompt); the surrounding commentary and markdown formatting is not. I know this isn't a direct like-for-like comparison, but I can't run 13b locally, and I can't seem to find 7b hosted online anywhere; it's just too big for HF's free tier. Am I holding it wrong?
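For reference, `<FILL_ME>` is a convenience of the HF tokenizer: it splits the input at that marker and rebuilds CodeLlama's infill prompt around the model's special tokens. A sketch of the expanded form for the example above (token placement per the upstream model card; exact spacing is an assumption):

```
<PRE> def remove_non_ascii(s: str) -> str:
    """ <SUF>
    return result <MID>
```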
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/496/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8473
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8473/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8473/comments
https://api.github.com/repos/ollama/ollama/issues/8473/events
https://github.com/ollama/ollama/issues/8473
2,796,566,382
I_kwDOJ0Z1Ps6msDdu
8,473
HSA_OVERRIDE_GFX_VERSION_0 while running on only one GPU
{ "login": "occasional-contributor", "id": 140330290, "node_id": "U_kgDOCF1FMg", "avatar_url": "https://avatars.githubusercontent.com/u/140330290?v=4", "gravatar_id": "", "url": "https://api.github.com/users/occasional-contributor", "html_url": "https://github.com/occasional-contributor", "followers_url": "https://api.github.com/users/occasional-contributor/followers", "following_url": "https://api.github.com/users/occasional-contributor/following{/other_user}", "gists_url": "https://api.github.com/users/occasional-contributor/gists{/gist_id}", "starred_url": "https://api.github.com/users/occasional-contributor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/occasional-contributor/subscriptions", "organizations_url": "https://api.github.com/users/occasional-contributor/orgs", "repos_url": "https://api.github.com/users/occasional-contributor/repos", "events_url": "https://api.github.com/users/occasional-contributor/events{/privacy}", "received_events_url": "https://api.github.com/users/occasional-contributor/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2025-01-18T01:28:21
2025-01-18T01:28:21
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am running `ollama:rocm` in a docker container on Ubuntu 24.04. My GPU is an RX 6600 (`gfx1032`). Everything works fine when I run `ollama` using ```bash docker run -d \ --device /dev/kfd \ --device /dev/dri \ -v ollama:/root/.ollama \ -p 11434:11434 \ --restart unless-stopped \ --env HSA_OVERRIDE_GFX_VERSION="10.3.0" \ --env OLLAMA_KEEP_ALIVE="-1" \ --name ollama-rocm \ ollama/ollama:rocm ``` ``` time=2025-01-18T01:24:54.470Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.5-0-g32bd37a-dirty)" time=2025-01-18T01:24:54.470Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 rocm_avx cpu cpu_avx]" time=2025-01-18T01:24:54.470Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs" time=2025-01-18T01:24:54.472Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2025-01-18T01:24:54.472Z level=INFO source=amd_linux.go:391 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=10.3.0 time=2025-01-18T01:24:54.472Z level=INFO source=types.go:131 msg="inference compute" id=0 library=rocm variant="" compute=gfx1032 driver=0.0 name=1002:73ff total="8.0 GiB" available="8.0 GiB" ``` However, when I run using ```bash docker run -d \ --device /dev/kfd \ --device /dev/dri \ -v ollama:/root/.ollama \ -p 11434:11434 \ --restart unless-stopped \ --env HSA_OVERRIDE_GFX_VERSION_0="10.3.0" \ --env OLLAMA_KEEP_ALIVE="-1" \ --name ollama-rocm \ ollama/ollama:rocm ``` `ollama` runs only on CPU: ``` time=2025-01-18T01:25:58.373Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.5-0-g32bd37a-dirty)" time=2025-01-18T01:25:58.373Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 rocm_avx cpu cpu_avx]" time=2025-01-18T01:25:58.373Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs" time=2025-01-18T01:25:58.375Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2025-01-18T01:25:58.378Z level=WARN source=amd_linux.go:378 msg="amdgpu is not supported (supported types:[gfx1030 gfx1100 gfx1101 gfx1102 gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942])" gpu_type=gfx1032 gpu=0 library=/usr/lib/ollama time=2025-01-18T01:25:58.378Z level=WARN source=amd_linux.go:385 msg="See https://github.com/ollama/ollama/blob/main/docs/gpu.md#overrides for HSA_OVERRIDE_GFX_VERSION usage" time=2025-01-18T01:25:58.378Z level=INFO source=amd_linux.go:404 msg="no compatible amdgpu devices detected" time=2025-01-18T01:25:58.378Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered" time=2025-01-18T01:25:58.378Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="62.7 GiB" available="61.2 GiB" ``` I am trying this because eventually I want to get newer AMD GPUs and use them concurrently for `ollama`. Is this not supported when running `ollama` in `docker`? If I want to use this RX 6600 and an RX 7800 in the same system, how should I do it with `docker`? ### OS Linux ### GPU AMD ### CPU Intel ### Ollama version 0.5.5-0-g32bd37a-dirty
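For what it's worth, a sketch of the mixed-GPU setup being aimed at, assuming the numbered-suffix overrides behave as described in gpu.md (one suffix per device index; the version strings below are illustrative, since a natively supported card can also be left without an override):

```bash
# hypothetical two-GPU layout: device 0 overridden, device 1 overridden separately
docker run -d \
  --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --env HSA_OVERRIDE_GFX_VERSION_0="10.3.0" \
  --env HSA_OVERRIDE_GFX_VERSION_1="11.0.0" \
  --name ollama-rocm ollama/ollama:rocm
```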
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8473/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5934
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5934/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5934/comments
https://api.github.com/repos/ollama/ollama/issues/5934/events
https://github.com/ollama/ollama/pull/5934
2,428,684,180
PR_kwDOJ0Z1Ps52ZmBy
5,934
Report better error on cuda unsupported os/arch
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-07-25T00:11:57
2024-07-29T21:24:23
2024-07-29T21:24:21
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5934", "html_url": "https://github.com/ollama/ollama/pull/5934", "diff_url": "https://github.com/ollama/ollama/pull/5934.diff", "patch_url": "https://github.com/ollama/ollama/pull/5934.patch", "merged_at": "2024-07-29T21:24:20" }
If we detect an NVIDIA GPU, but nvidia doesn't support the os/arch, this will report a better error for the user and point them to docs to self-install the drivers if possible. Fixes #3261 #2302 Example output on Ubuntu 22.04 on AWS g5g.xlarge (arm64) ``` % sh ./install.sh >>> Downloading ollama... ######################################################################################################################################### 100.0%######################################################################################################################################### 100.0% >>> Installing ollama to /usr/local/bin... >>> Adding ollama user to render group... >>> Adding ollama user to video group... >>> Adding current user to ollama group... >>> Creating ollama systemd service... >>> Enabling and starting ollama service... >>> Installing NVIDIA repository... ERROR NVIDIA GPU detected, but your OS and Architecture are not supported by NVIDIA. Please install the CUDA driver manually https://docs.nvidia.com/cuda/cuda-installation-guide-linux/ % echo $? 1 ```
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5934/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1438
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1438/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1438/comments
https://api.github.com/repos/ollama/ollama/issues/1438/events
https://github.com/ollama/ollama/issues/1438
2,033,080,984
I_kwDOJ0Z1Ps55LlqY
1,438
Openchat in Ollama
{ "login": "itscvenk", "id": 117738376, "node_id": "U_kgDOBwSLiA", "avatar_url": "https://avatars.githubusercontent.com/u/117738376?v=4", "gravatar_id": "", "url": "https://api.github.com/users/itscvenk", "html_url": "https://github.com/itscvenk", "followers_url": "https://api.github.com/users/itscvenk/followers", "following_url": "https://api.github.com/users/itscvenk/following{/other_user}", "gists_url": "https://api.github.com/users/itscvenk/gists{/gist_id}", "starred_url": "https://api.github.com/users/itscvenk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/itscvenk/subscriptions", "organizations_url": "https://api.github.com/users/itscvenk/orgs", "repos_url": "https://api.github.com/users/itscvenk/repos", "events_url": "https://api.github.com/users/itscvenk/events{/privacy}", "received_events_url": "https://api.github.com/users/itscvenk/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-12-08T17:50:35
2023-12-09T07:41:02
2023-12-08T19:30:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, Nvidia and CUDA are all installed and working fine. Phew. How do I verify that Ollama is actually using the GPU while responding? I am using the openchat model. Thanks a million for Ollama and especially for including the openchat model. Stay blessed & happy folks! Regards
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1438/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4876
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4876/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4876/comments
https://api.github.com/repos/ollama/ollama/issues/4876/events
https://github.com/ollama/ollama/pull/4876
2,338,854,827
PR_kwDOJ0Z1Ps5xtsS6
4,876
Intel GPU build support
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
10
2024-06-06T17:56:05
2025-01-24T23:15:19
2024-11-21T18:23:32
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4876", "html_url": "https://github.com/ollama/ollama/pull/4876", "diff_url": "https://github.com/ollama/ollama/pull/4876.diff", "patch_url": "https://github.com/ollama/ollama/pull/4876.patch", "merged_at": null }
This enables Linux, but still needs some more work to get it wired up to the official Windows builds. Fixes #1590
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4876/reactions", "total_count": 24, "+1": 3, "-1": 0, "laugh": 0, "hooray": 12, "confused": 0, "heart": 9, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4876/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3949
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3949/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3949/comments
https://api.github.com/repos/ollama/ollama/issues/3949/events
https://github.com/ollama/ollama/issues/3949
2,266,096,639
I_kwDOJ0Z1Ps6HEeP_
3,949
Inconsistent 500 errors when generating
{ "login": "ronangaillard", "id": 5607736, "node_id": "MDQ6VXNlcjU2MDc3MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/5607736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ronangaillard", "html_url": "https://github.com/ronangaillard", "followers_url": "https://api.github.com/users/ronangaillard/followers", "following_url": "https://api.github.com/users/ronangaillard/following{/other_user}", "gists_url": "https://api.github.com/users/ronangaillard/gists{/gist_id}", "starred_url": "https://api.github.com/users/ronangaillard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ronangaillard/subscriptions", "organizations_url": "https://api.github.com/users/ronangaillard/orgs", "repos_url": "https://api.github.com/users/ronangaillard/repos", "events_url": "https://api.github.com/users/ronangaillard/events{/privacy}", "received_events_url": "https://api.github.com/users/ronangaillard/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-04-26T16:09:48
2024-05-09T21:21:23
2024-05-09T21:21:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? EDIT : I tried with `0.1.29` and I don't have the issue Hello, I'm using the docker image (tag latest) without any GPU. I downloaded the mistral model, and I'm trying to generate answers using the prompt from the docs (I have the same issue with the llama model): ``` curl http://localhost:11434/api/generate -d '{ "model": "llama2", "prompt": "Why is the sky blue?" }' ``` But often (not always) I get a 500 error even though tokens have been generated. I know tokens have been generated because I see them in the API response. API response : ``` {"model":"mistral","created_at":"2024-04-26T16:07:28.07709035Z","response":" The","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:28.631254126Z","response":" color","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:29.18215074Z","response":" of","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:29.664237413Z","response":" the","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:30.273469909Z","response":" sky","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:30.781810227Z","response":" appears","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:31.40826255Z","response":" blue","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:31.989479606Z","response":" due","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:32.525336503Z","response":" to","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:33.026269498Z","response":" a","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:33.549170938Z","response":" process","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:34.081219456Z","response":" called","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:34.619630847Z","response":" Ray","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:35.137036869Z","response":"le","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:35.704325551Z","response":"igh","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:36.267680082Z","response":" scattering","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:36.793143541Z","response":".","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:37.282849465Z","response":" As","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:37.842596991Z","response":" sunlight","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:38.38296667Z","response":" reaches","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:38.903363958Z","response":" Earth","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:39.726083579Z","response":"'","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:40.494767417Z","response":"s","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:41.06925115Z","response":" atmosphere","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:41.613843427Z","response":",","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:42.128205209Z","response":" it","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:42.667473181Z","response":" inter","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:43.173160182Z","response":"acts","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:43.716459779Z","response":" with","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:44.268444653Z","response":" different","done":false}
{"model":"mistral","created_at":"2024-04-26T16:07:44.817945821Z","response":" g","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:45.426567347Z","response":"ases","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:45.932192978Z","response":" and","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:46.498713045Z","response":" particles","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:47.052741443Z","response":" in","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:47.580913744Z","response":" the","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:48.090460265Z","response":" air","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:48.596333364Z","response":".","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:49.115045502Z","response":" Blue","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:49.642993369Z","response":" light","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:50.18264068Z","response":" has","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:50.685417256Z","response":" a","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:51.19918961Z","response":" shorter","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:51.836339325Z","response":" w","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:52.320055654Z","response":"avelength","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:52.947997012Z","response":" and","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:53.441787158Z","response":" gets","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:53.990117921Z","response":" scattered","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:54.46448626Z","response":" more","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:55.003384218Z","response":" easily","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:55.475699829Z","response":" than","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:56.062239913Z","response":" other","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:56.576008126Z","response":" colors","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:57.275554243Z","response":",","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:57.874225128Z","response":" such","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:58.451544809Z","response":" as","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:58.937056211Z","response":" red","done":false} {"model":"mistral","created_at":"2024-04-26T16:07:59.49227394Z","response":" or","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:00.030899371Z","response":" yellow","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:00.573231159Z","response":".","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:01.060011721Z","response":" As","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:01.59222445Z","response":" a","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:02.25517787Z","response":" result","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:02.741180311Z","response":",","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:03.243095355Z","response":" when","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:03.759198116Z","response":" we","done":false} 
{"model":"mistral","created_at":"2024-04-26T16:08:04.293691209Z","response":" look","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:04.830839629Z","response":" up","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:05.34726204Z","response":" at","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:05.871421624Z","response":" the","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:06.423681143Z","response":" sky","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:06.931967006Z","response":",","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:07.473513959Z","response":" we","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:07.989916516Z","response":" predomin","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:08.595876379Z","response":"antly","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:09.109355685Z","response":" see","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:09.616627341Z","response":" the","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:10.167182816Z","response":" scattered","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:10.697195496Z","response":" blue","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:11.227974595Z","response":" light","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:11.722090129Z","response":".","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:12.25041971Z","response":" This","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:12.829180966Z","response":" is","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:13.343747729Z","response":" why","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:13.849074334Z","response":" we","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:14.428382283Z","response":" typically","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:14.975249303Z","response":" per","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:15.557414171Z","response":"ceive","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:16.179399225Z","response":" the","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:16.713970612Z","response":" sky","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:17.279799166Z","response":" as","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:17.749814713Z","response":" having","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:18.349225178Z","response":" a","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:18.874442601Z","response":" blue","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:19.385743625Z","response":" h","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:19.856165748Z","response":"ue","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:20.434154839Z","response":".","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:20.959613693Z","response":" However","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:21.477155944Z","response":",","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:21.948582002Z","response":" during","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:22.510912787Z","response":" sun","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:23.124877491Z","response":"rise","done":false} 
{"model":"mistral","created_at":"2024-04-26T16:08:23.703203542Z","response":" or","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:24.197086988Z","response":" sun","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:24.731125053Z","response":"set","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:25.260788753Z","response":",","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:25.784992149Z","response":" the","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:26.29265038Z","response":" sky","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:26.838179737Z","response":" can","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:27.363118194Z","response":" take","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:28.101355293Z","response":" on","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:28.667340507Z","response":" various","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:29.197801841Z","response":" sh","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:29.757229536Z","response":"ades","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:30.283713028Z","response":" of","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:30.791915363Z","response":" red","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:31.309829303Z","response":",","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:31.842367364Z","response":" pink","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:32.371649431Z","response":",","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:32.900765864Z","response":" orange","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:33.422518694Z","response":",","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:34.002004535Z","response":" and","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:34.50465466Z","response":" purple","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:35.020001089Z","response":" due","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:35.552555833Z","response":" to","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:36.100933033Z","response":" the","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:36.624066249Z","response":" scattering","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:37.127360511Z","response":" of","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:37.645663874Z","response":" sunlight","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:38.188634483Z","response":" in","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:38.726616954Z","response":" the","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:39.478114703Z","response":" Earth","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:40.205710019Z","response":"'","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:40.717148082Z","response":"s","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:41.214359608Z","response":" atmosphere","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:41.753657558Z","response":" at","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:42.293513538Z","response":" those","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:42.831315764Z","response":" specific","done":false} 
{"model":"mistral","created_at":"2024-04-26T16:08:43.325317833Z","response":" angles","done":false} {"model":"mistral","created_at":"2024-04-26T16:08:43.851507537Z","response":".","done":false} {"error":"unexpected server status: 1"} ``` Here are the logs from the docker container (I see no errors) : ``` text error warn system array login {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":52940,"status":200,"tid":"23175901271616","timestamp":1714146582} {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":787,"tid":"23180484790144","timestamp":1714146582} {"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":72,"n_past_se":0,"n_prompt_tokens_processed":22,"slot_id":0,"task_id":787,"tid":"23180484790144","timestamp":1714146582} {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":72,"slot_id":0,"task_id":787,"tid":"23180484790144","timestamp":1714146582} [GIN] 2024/04/26 - 15:50:19 | 200 | 36.865107759s | 192.168.1.167 | POST "/api/generate" {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":52940,"status":200,"tid":"23175901271616","timestamp":1714146620} {"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":152,"n_ctx":2048,"n_past":151,"n_system_tokens":0,"slot_id":0,"task_id":787,"tid":"23180484790144","timestamp":1714146620,"truncated":false} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":848,"tid":"23180484790144","timestamp":1714146654} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":41324,"status":200,"tid":"23175903372864","timestamp":1714146654} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":849,"tid":"23180484790144","timestamp":1714146654} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":41324,"status":200,"tid":"23175903372864","timestamp":1714146654} {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":850,"tid":"23180484790144","timestamp":1714146654} {"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":72,"n_past_se":0,"n_prompt_tokens_processed":11,"slot_id":0,"task_id":850,"tid":"23180484790144","timestamp":1714146654} {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":72,"slot_id":0,"task_id":850,"tid":"23180484790144","timestamp":1714146654} {"function":"print_timings","level":"INFO","line":269,"msg":"prompt eval time = 4190.76 ms / 11 tokens ( 380.98 ms per token, 2.62 tokens per second)","n_prompt_tokens_processed":11,"n_tokens_second":2.624825359630902,"slot_id":0,"t_prompt_processing":4190.755,"t_token":380.9777272727273,"task_id":850,"tid":"23180484790144","timestamp":1714146690} {"function":"print_timings","level":"INFO","line":283,"msg":"generation eval time = 31807.07 ms / 64 runs ( 496.99 ms per token, 2.01 tokens per 
second)","n_decoded":64,"n_tokens_second":2.0121314549373572,"slot_id":0,"t_token":496.985421875,"t_token_generation":31807.067,"task_id":850,"tid":"23180484790144","timestamp":1714146690} {"function":"print_timings","level":"INFO","line":293,"msg":" total time = 35997.82 ms","slot_id":0,"t_prompt_processing":4190.755,"t_token_generation":31807.067,"t_total":35997.822,"task_id":850,"tid":"23180484790144","timestamp":1714146690} {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":41324,"status":200,"tid":"23175903372864","timestamp":1714146690} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":916,"tid":"23180484790144","timestamp":1714146690} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":41324,"status":200,"tid":"23175903372864","timestamp":1714146690} {"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":147,"n_ctx":2048,"n_past":146,"n_system_tokens":0,"slot_id":0,"task_id":850,"tid":"23180484790144","timestamp":1714146690,"truncated":false} [GIN] 2024/04/26 - 15:51:30 | 500 | 36.085232311s | 192.168.1.167 | POST "/api/generate" {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":918,"tid":"23180484790144","timestamp":1714146702} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":52920,"status":200,"tid":"23175899170368","timestamp":1714146702} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":919,"tid":"23180484790144","timestamp":1714146702} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":52920,"status":200,"tid":"23175899170368","timestamp":1714146702} {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":920,"tid":"23180484790144","timestamp":1714146702} {"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":83,"n_past_se":0,"n_prompt_tokens_processed":0,"slot_id":0,"task_id":920,"tid":"23180484790144","timestamp":1714146702} {"function":"update_slots","level":"INFO","line":1824,"msg":"we have to evaluate at least 1 token to generate logits","slot_id":0,"task_id":920,"tid":"23180484790144","timestamp":1714146702} {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":82,"slot_id":0,"task_id":920,"tid":"23180484790144","timestamp":1714146702} [GIN] 2024/04/26 - 15:52:19 | 200 | 37.126512884s | 192.168.1.167 | POST "/api/generate" {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":52920,"status":200,"tid":"23175899170368","timestamp":1714146740} {"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":157,"n_ctx":2048,"n_past":156,"n_system_tokens":0,"slot_id":0,"task_id":920,"tid":"23180484790144","timestamp":1714146740,"truncated":false} 
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":997,"tid":"23180484790144","timestamp":1714146772} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":60708,"status":200,"tid":"23175901271616","timestamp":1714146772} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":998,"tid":"23180484790144","timestamp":1714146772} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":60708,"status":200,"tid":"23175901271616","timestamp":1714146772} {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":999,"tid":"23180484790144","timestamp":1714146772} {"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":59,"n_past_se":0,"n_prompt_tokens_processed":26,"slot_id":0,"task_id":999,"tid":"23180484790144","timestamp":1714146772} {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":59,"slot_id":0,"task_id":999,"tid":"23180484790144","timestamp":1714146772} {"function":"print_timings","level":"INFO","line":269,"msg":"prompt eval time = 9954.13 ms / 26 tokens ( 382.85 ms per token, 2.61 tokens per second)","n_prompt_tokens_processed":26,"n_tokens_second":2.611980632766373,"slot_id":0,"t_prompt_processing":9954.132,"t_token":382.85123076923077,"task_id":999,"tid":"23180484790144","timestamp":1714146816} {"function":"print_timings","level":"INFO","line":283,"msg":"generation eval time = 34261.63 ms / 69 runs ( 496.55 ms per token, 2.01 tokens per second)","n_decoded":69,"n_tokens_second":2.0139148087183716,"slot_id":0,"t_token":496.5453333333333,"t_token_generation":34261.628,"task_id":999,"tid":"23180484790144","timestamp":1714146816} {"function":"print_timings","level":"INFO","line":293,"msg":" total time = 44215.76 ms","slot_id":0,"t_prompt_processing":9954.132,"t_token_generation":34261.628,"t_total":44215.759999999995,"task_id":999,"tid":"23180484790144","timestamp":1714146816} {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":60708,"status":200,"tid":"23175901271616","timestamp":1714146816} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":1070,"tid":"23180484790144","timestamp":1714146816} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":60708,"status":200,"tid":"23175901271616","timestamp":1714146816} {"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":154,"n_ctx":2048,"n_past":153,"n_system_tokens":0,"slot_id":0,"task_id":999,"tid":"23180484790144","timestamp":1714146816,"truncated":false} [GIN] 2024/04/26 - 15:53:36 | 500 | 44.303565407s | 192.168.1.167 | POST "/api/generate" {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1072,"tid":"23180484790144","timestamp":1714146826} 
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":43070,"status":200,"tid":"23175903372864","timestamp":1714146826} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1073,"tid":"23180484790144","timestamp":1714146826} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":43070,"status":200,"tid":"23175903372864","timestamp":1714146826} {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":1074,"tid":"23180484790144","timestamp":1714146826} {"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":85,"n_past_se":0,"n_prompt_tokens_processed":0,"slot_id":0,"task_id":1074,"tid":"23180484790144","timestamp":1714146826} {"function":"update_slots","level":"INFO","line":1824,"msg":"we have to evaluate at least 1 token to generate logits","slot_id":0,"task_id":1074,"tid":"23180484790144","timestamp":1714146826} {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":84,"slot_id":0,"task_id":1074,"tid":"23180484790144","timestamp":1714146826} {"function":"print_timings","level":"INFO","line":269,"msg":"prompt eval time = 517.12 ms / 0 tokens ( inf ms per token, 0.00 tokens per second)","n_prompt_tokens_processed":0,"n_tokens_second":0.0,"slot_id":0,"t_prompt_processing":517.12,"t_token":null,"task_id":1074,"tid":"23180484790144","timestamp":1714146893} {"function":"print_timings","level":"INFO","line":283,"msg":"generation eval time = 66105.63 ms / 119 runs ( 555.51 ms per token, 1.80 tokens per second)","n_decoded":119,"n_tokens_second":1.8001493549126968,"slot_id":0,"t_token":555.509462184874,"t_token_generation":66105.626,"task_id":1074,"tid":"23180484790144","timestamp":1714146893} {"function":"print_timings","level":"INFO","line":293,"msg":" total time = 66622.75 ms","slot_id":0,"t_prompt_processing":517.12,"t_token_generation":66105.626,"t_total":66622.746,"task_id":1074,"tid":"23180484790144","timestamp":1714146893} {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":43070,"status":200,"tid":"23175903372864","timestamp":1714146893} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":1195,"tid":"23180484790144","timestamp":1714146893} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":43070,"status":200,"tid":"23175903372864","timestamp":1714146893} {"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":204,"n_ctx":2048,"n_past":203,"n_system_tokens":0,"slot_id":0,"task_id":1074,"tid":"23180484790144","timestamp":1714146893,"truncated":false} [GIN] 2024/04/26 - 15:54:53 | 500 | 1m6s | 192.168.1.167 | POST "/api/generate" {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1197,"tid":"23180484790144","timestamp":1714146898} 
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":57848,"status":200,"tid":"23175899170368","timestamp":1714146898} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1198,"tid":"23180484790144","timestamp":1714146898} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":57848,"status":200,"tid":"23175899170368","timestamp":1714146898} {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":1199,"tid":"23180484790144","timestamp":1714146898} {"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":85,"n_past_se":0,"n_prompt_tokens_processed":0,"slot_id":0,"task_id":1199,"tid":"23180484790144","timestamp":1714146898} {"function":"update_slots","level":"INFO","line":1824,"msg":"we have to evaluate at least 1 token to generate logits","slot_id":0,"task_id":1199,"tid":"23180484790144","timestamp":1714146898} {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":84,"slot_id":0,"task_id":1199,"tid":"23180484790144","timestamp":1714146898} [GIN] 2024/04/26 - 15:56:21 | 200 | 1m23s | 192.168.1.167 | POST "/api/generate" {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":57848,"status":200,"tid":"23175899170368","timestamp":1714146982} {"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":235,"n_ctx":2048,"n_past":234,"n_system_tokens":0,"slot_id":0,"task_id":1199,"tid":"23180484790144","timestamp":1714146982,"truncated":false} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1352,"tid":"23180484790144","timestamp":1714147011} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53832,"status":200,"tid":"23175901271616","timestamp":1714147011} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1353,"tid":"23180484790144","timestamp":1714147011} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53832,"status":200,"tid":"23175901271616","timestamp":1714147011} {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":1354,"tid":"23180484790144","timestamp":1714147011} {"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":74,"n_past_se":0,"n_prompt_tokens_processed":4,"slot_id":0,"task_id":1354,"tid":"23180484790144","timestamp":1714147011} {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":74,"slot_id":0,"task_id":1354,"tid":"23180484790144","timestamp":1714147011} {"function":"print_timings","level":"INFO","line":269,"msg":"prompt eval time = 1759.11 ms / 4 tokens ( 439.78 ms per token, 2.27 tokens per 
second)","n_prompt_tokens_processed":4,"n_tokens_second":2.2738797163107667,"slot_id":0,"t_prompt_processing":1759.108,"t_token":439.777,"task_id":1354,"tid":"23180484790144","timestamp":1714147065} {"function":"print_timings","level":"INFO","line":283,"msg":"generation eval time = 52489.85 ms / 98 runs ( 535.61 ms per token, 1.87 tokens per second)","n_decoded":98,"n_tokens_second":1.867027589771592,"slot_id":0,"t_token":535.610724489796,"t_token_generation":52489.851,"task_id":1354,"tid":"23180484790144","timestamp":1714147065} {"function":"print_timings","level":"INFO","line":293,"msg":" total time = 54248.96 ms","slot_id":0,"t_prompt_processing":1759.108,"t_token_generation":52489.851,"t_total":54248.959,"task_id":1354,"tid":"23180484790144","timestamp":1714147065} {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":53832,"status":200,"tid":"23175901271616","timestamp":1714147065} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":1454,"tid":"23180484790144","timestamp":1714147065} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":34160,"status":200,"tid":"23175903372864","timestamp":1714147065} [GIN] 2024/04/26 - 15:57:45 | 500 | 54.29739368s | 192.168.1.167 | POST "/api/generate" {"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":176,"n_ctx":2048,"n_past":175,"n_system_tokens":0,"slot_id":0,"task_id":1354,"tid":"23180484790144","timestamp":1714147065,"truncated":false} time=2024-04-26T16:06:50.536Z level=INFO source=gpu.go:121 msg="Detecting GPU type" time=2024-04-26T16:06:50.536Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*" time=2024-04-26T16:06:50.537Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3539318188/runners/cuda_v11/libcudart.so.11.0]" time=2024-04-26T16:06:50.579Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama3539318188/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama" time=2024-04-26T16:06:50.579Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so" time=2024-04-26T16:06:50.580Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []" time=2024-04-26T16:06:50.580Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-04-26T16:06:50.580Z level=INFO source=gpu.go:121 msg="Detecting GPU type" time=2024-04-26T16:06:50.580Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*" time=2024-04-26T16:06:50.581Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3539318188/runners/cuda_v11/libcudart.so.11.0]" time=2024-04-26T16:06:50.581Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama3539318188/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama" time=2024-04-26T16:06:50.582Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so" time=2024-04-26T16:06:50.582Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []" time=2024-04-26T16:06:50.583Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-04-26T16:06:50.583Z level=INFO 
source=server.go:127 msg="offload to gpu" reallayers=0 layers=0 required="4267.5 MiB" used="181.0 MiB" available="0 B" kv="256.0 MiB" fulloffload="164.0 MiB" partialoffload="181.0 MiB" time=2024-04-26T16:06:50.584Z level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama3539318188/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 0 --port 39645" time=2024-04-26T16:06:50.585Z level=INFO source=server.go:389 msg="waiting for llama runner to start responding" {"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"22658407044992","timestamp":1714147610} {"function":"server_params_parse","level":"WARN","line":2380,"msg":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":-1,"tid":"22658407044992","timestamp":1714147610} {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"22658407044992","timestamp":1714147610} {"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":1,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"22658407044992","timestamp":1714147610,"total_threads":1} llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = mistralai llama_model_loader: - kv 2: llama.context_length u32 = 32768 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 llama_model_loader: - kv 4: llama.block_count u32 = 32 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 11: general.file_type u32 = 2 llama_model_loader: - kv 12: tokenizer.ggml.model str = llama llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e... 
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess... llama_model_loader: - kv 23: general.quantization_version u32 = 2 llama_model_loader: - type f32: 65 tensors llama_model_loader: - type q4_0: 225 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special tokens definition check successful ( 259/32000 ). llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: n_ctx_train = 32768 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 1000000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_yarn_orig_ctx = 32768 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: model type = 7B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model params = 7.24 B llm_load_print_meta: model size = 3.83 GiB (4.54 BPW) llm_load_print_meta: general.name = mistralai llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_tensors: ggml ctx size = 0.11 MiB llm_load_tensors: CPU buffer size = 3917.87 MiB .................................................................................................. 
llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: freq_base = 1000000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: CPU KV buffer size = 256.00 MiB llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB llama_new_context_with_model: CPU output buffer size = 0.14 MiB llama_new_context_with_model: CPU compute buffer size = 164.01 MiB llama_new_context_with_model: graph nodes = 1030 llama_new_context_with_model: graph splits = 1 {"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"22658407044992","timestamp":1714147640} {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"22658407044992","timestamp":1714147640} {"function":"main","level":"INFO","line":3064,"msg":"model loaded","tid":"22658407044992","timestamp":1714147640} {"function":"main","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"3","port":"39645","tid":"22658407044992","timestamp":1714147640} {"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"22658407044992","timestamp":1714147640} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"22658407044992","timestamp":1714147640} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":44072,"status":200,"tid":"22653821568576","timestamp":1714147640} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"22658407044992","timestamp":1714147640} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":44078,"status":200,"tid":"22653821568576","timestamp":1714147640} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"22658407044992","timestamp":1714147640} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":44094,"status":200,"tid":"22653821568576","timestamp":1714147640} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":3,"tid":"22658407044992","timestamp":1714147640} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":44102,"status":200,"tid":"22653821568576","timestamp":1714147640} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":4,"tid":"22658407044992","timestamp":1714147640} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":44120,"status":200,"tid":"22653821568576","timestamp":1714147640} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot 
data","n_idle_slots":1,"n_processing_slots":0,"task_id":5,"tid":"22658407044992","timestamp":1714147640} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":44116,"status":200,"tid":"22653823669824","timestamp":1714147640} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"22658407044992","timestamp":1714147641} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":44942,"status":200,"tid":"22653823669824","timestamp":1714147641} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":7,"tid":"22658407044992","timestamp":1714147641} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":44942,"status":200,"tid":"22653823669824","timestamp":1714147641} {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":8,"tid":"22658407044992","timestamp":1714147641} {"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":15,"slot_id":0,"task_id":8,"tid":"22658407044992","timestamp":1714147641} {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":8,"tid":"22658407044992","timestamp":1714147641} [GIN] 2024/04/26 - 16:07:21 | 200 | 66.373µs | 127.0.0.1 | GET "/api/version" {"function":"print_timings","level":"INFO","line":269,"msg":"prompt eval time = 6808.76 ms / 15 tokens ( 453.92 ms per token, 2.20 tokens per second)","n_prompt_tokens_processed":15,"n_tokens_second":2.2030430192616457,"slot_id":0,"t_prompt_processing":6808.764,"t_token":453.9176,"task_id":8,"tid":"22658407044992","timestamp":1714147724} {"function":"print_timings","level":"INFO","line":283,"msg":"generation eval time = 76398.96 ms / 141 runs ( 541.84 ms per token, 1.85 tokens per second)","n_decoded":141,"n_tokens_second":1.845574957856754,"slot_id":0,"t_token":541.8365673758866,"t_token_generation":76398.956,"task_id":8,"tid":"22658407044992","timestamp":1714147724} {"function":"print_timings","level":"INFO","line":293,"msg":" total time = 83207.72 ms","slot_id":0,"t_prompt_processing":6808.764,"t_token_generation":76398.956,"t_total":83207.72,"task_id":8,"tid":"22658407044992","timestamp":1714147724} {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":44942,"status":200,"tid":"22653823669824","timestamp":1714147724} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":151,"tid":"22658407044992","timestamp":1714147724} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":44942,"status":200,"tid":"22653823669824","timestamp":1714147724} {"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":156,"n_ctx":2048,"n_past":155,"n_system_tokens":0,"slot_id":0,"task_id":8,"tid":"22658407044992","timestamp":1714147724,"truncated":false} [GIN] 
2024/04/26 - 16:08:44 | 200 | 1m57s | 192.168.1.167 | POST "/api/generate" ``` ### OS Linux ### GPU Other ### CPU Intel ### Ollama version 0.1.32
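The failure mode in the report above is worth handling on the client side: the stream delivers newline-delimited JSON objects with `"done": false`, and can terminate with an `{"error": ...}` object plus an HTTP 500 instead of a final `"done": true` message. Below is a minimal Python sketch of a streaming client that tolerates this; it is an illustration, not code from the report, and the `generate` helper and its defaults are hypothetical.

```python
# Minimal sketch (not from the report): stream /api/generate and cope with a
# trailing {"error": ...} object arriving after tokens were already emitted.
import json

import requests


def generate(prompt: str, model: str = "mistral",
             host: str = "http://localhost:11434") -> str:
    chunks = []
    with requests.post(
        f"{host}/api/generate",
        json={"model": model, "prompt": prompt},
        stream=True,
        timeout=300,
    ) as resp:
        for line in resp.iter_lines():
            if not line:
                continue
            msg = json.loads(line)
            if "error" in msg:
                # Tokens collected so far are still usable; here we raise,
                # but returning the partial answer is also an option.
                raise RuntimeError(f"server error after partial output: {msg['error']}")
            chunks.append(msg.get("response", ""))
            if msg.get("done"):
                break
    return "".join(chunks)


if __name__ == "__main__":
    print(generate("Why is the sky blue?"))
```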
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3949/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2666
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2666/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2666/comments
https://api.github.com/repos/ollama/ollama/issues/2666/events
https://github.com/ollama/ollama/pull/2666
2,148,340,077
PR_kwDOJ0Z1Ps5nm7PN
2,666
Update client.py
{ "login": "Yuan-ManX", "id": 68322456, "node_id": "MDQ6VXNlcjY4MzIyNDU2", "avatar_url": "https://avatars.githubusercontent.com/u/68322456?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yuan-ManX", "html_url": "https://github.com/Yuan-ManX", "followers_url": "https://api.github.com/users/Yuan-ManX/followers", "following_url": "https://api.github.com/users/Yuan-ManX/following{/other_user}", "gists_url": "https://api.github.com/users/Yuan-ManX/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yuan-ManX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yuan-ManX/subscriptions", "organizations_url": "https://api.github.com/users/Yuan-ManX/orgs", "repos_url": "https://api.github.com/users/Yuan-ManX/repos", "events_url": "https://api.github.com/users/Yuan-ManX/events{/privacy}", "received_events_url": "https://api.github.com/users/Yuan-ManX/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-02-22T06:46:24
2024-05-07T23:37:47
2024-05-07T23:37:47
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2666", "html_url": "https://github.com/ollama/ollama/pull/2666", "diff_url": "https://github.com/ollama/ollama/pull/2666.diff", "patch_url": "https://github.com/ollama/ollama/pull/2666.patch", "merged_at": null }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2666/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2396
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2396/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2396/comments
https://api.github.com/repos/ollama/ollama/issues/2396/events
https://github.com/ollama/ollama/issues/2396
2,123,732,549
I_kwDOJ0Z1Ps5-lZZF
2,396
llama.cpp now supports Vulkan
{ "login": "ddpasa", "id": 112642920, "node_id": "U_kgDOBrbLaA", "avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddpasa", "html_url": "https://github.com/ddpasa", "followers_url": "https://api.github.com/users/ddpasa/followers", "following_url": "https://api.github.com/users/ddpasa/following{/other_user}", "gists_url": "https://api.github.com/users/ddpasa/gists{/gist_id}", "starred_url": "https://api.github.com/users/ddpasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ddpasa/subscriptions", "organizations_url": "https://api.github.com/users/ddpasa/orgs", "repos_url": "https://api.github.com/users/ddpasa/repos", "events_url": "https://api.github.com/users/ddpasa/events{/privacy}", "received_events_url": "https://api.github.com/users/ddpasa/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 6677745918, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g", "url": "https://api.github.com/repos/ollama/ollama/labels/gpu", "name": "gpu", "color": "76C49E", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
3
2024-02-07T19:33:24
2024-03-21T14:00:45
2024-03-21T14:00:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
As of 10 days ago: https://github.com/ggerganov/llama.cpp/commit/2307523d322af762ae06648b29ec5a9eb1c73032 This is great news for people with non-CUDA cards. What's necessary to support this with Ollama? I'm happy to help if you give me some pointers.
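For context, wiring a new backend like Vulkan into Ollama involves more than the llama.cpp build flag; the runtime also has to discover whether a usable Vulkan driver is present before picking that runner. As a rough illustration only (Ollama's real discovery code is in Go, and the `has_vulkan` helper below is hypothetical), this sketch probes for a working Vulkan installation by shelling out to `vulkaninfo` from the standard vulkan-tools package.

```python
# Hypothetical sketch: detect a usable Vulkan driver by running
# `vulkaninfo --summary`. This illustrates the GPU-discovery step a Vulkan
# backend would need; it is not code from Ollama itself.
import shutil
import subprocess


def has_vulkan(timeout: float = 5.0) -> bool:
    """Return True if vulkaninfo runs successfully and reports a GPU."""
    if shutil.which("vulkaninfo") is None:
        return False
    try:
        out = subprocess.run(
            ["vulkaninfo", "--summary"],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    # A non-zero exit code or no GPU line means no usable ICD/driver.
    return out.returncode == 0 and "GPU" in out.stdout


if __name__ == "__main__":
    print("Vulkan available:", has_vulkan())
```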
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2396/reactions", "total_count": 27, "+1": 10, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 17, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2396/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7204
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7204/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7204/comments
https://api.github.com/repos/ollama/ollama/issues/7204/events
https://github.com/ollama/ollama/pull/7204
2,587,233,516
PR_kwDOJ0Z1Ps5-mfq6
7,204
Fix openapi base writer header code.
{ "login": "zhanluxianshen", "id": 161462588, "node_id": "U_kgDOCZ-5PA", "avatar_url": "https://avatars.githubusercontent.com/u/161462588?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhanluxianshen", "html_url": "https://github.com/zhanluxianshen", "followers_url": "https://api.github.com/users/zhanluxianshen/followers", "following_url": "https://api.github.com/users/zhanluxianshen/following{/other_user}", "gists_url": "https://api.github.com/users/zhanluxianshen/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhanluxianshen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhanluxianshen/subscriptions", "organizations_url": "https://api.github.com/users/zhanluxianshen/orgs", "repos_url": "https://api.github.com/users/zhanluxianshen/repos", "events_url": "https://api.github.com/users/zhanluxianshen/events{/privacy}", "received_events_url": "https://api.github.com/users/zhanluxianshen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
0
2024-10-14T23:00:41
2024-10-17T15:32:18
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7204", "html_url": "https://github.com/ollama/ollama/pull/7204", "diff_url": "https://github.com/ollama/ollama/pull/7204.diff", "patch_url": "https://github.com/ollama/ollama/pull/7204.patch", "merged_at": null }
Fix openapi base writer header code.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7204/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/987
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/987/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/987/comments
https://api.github.com/repos/ollama/ollama/issues/987/events
https://github.com/ollama/ollama/issues/987
1,976,403,050
I_kwDOJ0Z1Ps51zYRq
987
segmentation fault with prompts longer than 5 / 6 tokens on intel mac
{ "login": "Serpico84", "id": 7769484, "node_id": "MDQ6VXNlcjc3Njk0ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7769484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Serpico84", "html_url": "https://github.com/Serpico84", "followers_url": "https://api.github.com/users/Serpico84/followers", "following_url": "https://api.github.com/users/Serpico84/following{/other_user}", "gists_url": "https://api.github.com/users/Serpico84/gists{/gist_id}", "starred_url": "https://api.github.com/users/Serpico84/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Serpico84/subscriptions", "organizations_url": "https://api.github.com/users/Serpico84/orgs", "repos_url": "https://api.github.com/users/Serpico84/repos", "events_url": "https://api.github.com/users/Serpico84/events{/privacy}", "received_events_url": "https://api.github.com/users/Serpico84/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
8
2023-11-03T15:07:05
2023-12-10T23:16:41
2023-11-17T03:18:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm running Ollama on a 2019 Intel MacBook Pro with 32 GB of RAM and a 4 GB AMD GPU, running macOS Monterey. For some reason, every prompt longer than a few words on both codellama:7b and llama2:7b ends up with `Error: llama runner exited, you may not have enough available memory to run this model`. Very short prompts work fine. This is the server log: ``` llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 1000000.0 llama_new_context_with_model: freq_scale = 1 llama_new_context_with_model: kv self size = 1024.00 MB llama_new_context_with_model: compute buffer total size = 162.13 MB llama server listening at http://127.0.0.1:49879 {"timestamp":1699023783,"level":"INFO","function":"main","line":1749,"message":"HTTP server listening","hostname":"127.0.0.1","port":49879} {"timestamp":1699023783,"level":"INFO","function":"log_server_request","line":1240,"message":"request","remote_addr":"127.0.0.1","remote_port":50344,"status":200,"method":"HEAD","path":"/","params":{}} 2023/11/03 16:03:03 llama.go:442: llama runner started in 1.001631 seconds [GIN] 2023/11/03 - 16:03:03 | 200 | 1.143871095s | 127.0.0.1 | POST "/api/generate" {"timestamp":1699023810,"level":"INFO","function":"log_server_request","line":1240,"message":"request","remote_addr":"127.0.0.1","remote_port":50346,"status":200,"method":"HEAD","path":"/","params":{}} 2023/11/03 16:03:30 llama.go:385: signal: segmentation fault 2023/11/03 16:03:30 llama.go:459: llama runner stopped successfully [GIN] 2023/11/03 - 16:03:30 | 200 | 227.020112ms | 127.0.0.1 | POST "/api/generate" ``` I'm not sure what I can do to tackle this problem or what other info to provide; as far as I know, my machine should be able to run these models just fine.
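One low-risk diagnostic that follows from the log above is retrying with a smaller context window, since the KV cache alone is 1024 MB at n_ctx = 2048. A minimal sketch using the documented `options.num_ctx` field of `/api/generate` (the model name and prompt are placeholders, and this is a diagnostic step, not a confirmed fix for this crash):

```typescript
// Minimal sketch: call /api/generate with a smaller context window to shrink
// the KV cache (the log above shows a 1024 MB KV cache at n_ctx = 2048).
// Assumes Ollama is listening on the default localhost:11434.
const res = await fetch("http://127.0.0.1:11434/api/generate", {
  method: "POST",
  body: JSON.stringify({
    model: "llama2:7b",        // placeholder: any affected model
    prompt: "Explain what a segmentation fault is.",
    stream: false,             // single JSON response instead of a stream
    options: { num_ctx: 1024 }, // halve the context to reduce memory pressure
  }),
});
const data = await res.json();
console.log(data.response);
```

If longer prompts stop crashing at the reduced `num_ctx`, that would point to memory pressure rather than prompt content as the trigger.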
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/987/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/987/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5761
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5761/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5761/comments
https://api.github.com/repos/ollama/ollama/issues/5761/events
https://github.com/ollama/ollama/issues/5761
2,415,145,821
I_kwDOJ0Z1Ps6P9DNd
5,761
Tokenizer issue with tool calling with InternLM2
{ "login": "endyjasmi", "id": 1048745, "node_id": "MDQ6VXNlcjEwNDg3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/1048745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/endyjasmi", "html_url": "https://github.com/endyjasmi", "followers_url": "https://api.github.com/users/endyjasmi/followers", "following_url": "https://api.github.com/users/endyjasmi/following{/other_user}", "gists_url": "https://api.github.com/users/endyjasmi/gists{/gist_id}", "starred_url": "https://api.github.com/users/endyjasmi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/endyjasmi/subscriptions", "organizations_url": "https://api.github.com/users/endyjasmi/orgs", "repos_url": "https://api.github.com/users/endyjasmi/repos", "events_url": "https://api.github.com/users/endyjasmi/events{/privacy}", "received_events_url": "https://api.github.com/users/endyjasmi/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-07-18T03:39:25
2024-07-22T03:01:22
2024-07-19T12:15:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am currently using the `internlm2:7b` model from https://ollama.com/library/internlm2:7b . I am trying to use the tool calling capability of the model, using https://github.com/InternLM/InternLM/blob/main/chat/chat_format.md#function-call as a prompt reference. The following is the prompt I used: ``` <|im_start|>system You are a helpful and honest assistant.<|im_end|> <|im_start|>system name=<|plugin|> [{"name":"duckduckgo-search","description":"A search engine. Useful for when you need to answer questions about current events. Input should be a search query.","parameters":{"type":"object","properties":{"input":{"type":"string"}},"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}}]<|im_end|> <|im_start|>user What is the capital city of France?<|im_end|> <|im_start|>assistant ``` The prompt is sent to the `/api/generate` endpoint (raw mode). https://github.com/ollama/ollama/blob/main/docs/api.md#request-raw-mode Here is the response from the model: ``` I need to use the duckduckgo-search plugin to find the latest test news today.[UNUSED_TOKEN_144][UNUSED_TOKEN_141] {"name":"duckduckgo-search","parameters":{"input":"latest test news today"}}[UNUSED_TOKEN_143] ``` The format of the response looks good except for the tokenizer's failure to detokenize `<|action_start|>`, `<|action_end|>`, and `<|plugin|>`. After some research around the web, I still have no idea how to fix this; I'm hoping you can help. Thank you in advance. <hr /> ### Extras (Not sure if this helps) Here is the script I used: ```typescript import { DuckDuckGoSearch } from "@langchain/community/tools/duckduckgo_search"; import { Ollama } from "ollama"; import { zodToJsonSchema } from "zod-to-json-schema"; const searchTool = new DuckDuckGoSearch(); const tools = JSON.stringify([ { name: searchTool.name, description: searchTool.description, parameters: zodToJsonSchema(searchTool.schema), }, ]); const prompt = `<|im_start|>system You are a helpful and honest assistant.<|im_end|> <|im_start|>system name=<|plugin|> ${tools}<|im_end|> <|im_start|>user What is the capital city of France?<|im_end|> <|im_start|>assistant `; console.info(prompt); const ollama = new Ollama(); const stream = await ollama.generate({ keep_alive: "1h", model: "internlm2:7b", prompt, raw: true, stream: true, options: { num_ctx: 4096, temperature: 0, }, }); for await (const chunk of stream) process.stdout.write(chunk.response); process.stdout.write("\n"); ``` Here is the `tokenizer_config.json` for `internlm2_5-7b-chat`: https://huggingface.co/internlm/internlm2_5-7b-chat/blob/main/tokenizer_config.json ### OS Windows, Docker, WSL2 ### GPU Nvidia ### CPU Intel ### Ollama version 0.2.5
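A possible client-side stopgap, until the tokenizer decodes these special tokens itself, is to map the leaked `[UNUSED_TOKEN_*]` strings back to the tokens the chat format expects. The mapping below is only inferred from the sample response above (144 → `<|action_start|>`, 141 → `<|plugin|>`, 143 → `<|action_end|>`); verify it against the model's `tokenizer_config.json` before relying on it:

```typescript
// Hypothetical client-side remapping of leaked special tokens.
// The mapping is inferred from the sample response above; verify it against
// internlm2's tokenizer_config.json before relying on it.
const TOKEN_MAP: Record<string, string> = {
  "[UNUSED_TOKEN_144]": "<|action_start|>",
  "[UNUSED_TOKEN_141]": "<|plugin|>",
  "[UNUSED_TOKEN_143]": "<|action_end|>",
};

function restoreSpecialTokens(text: string): string {
  return text.replace(
    /\[UNUSED_TOKEN_\d+\]/g,
    (match) => TOKEN_MAP[match] ?? match, // leave unknown tokens untouched
  );
}

// Usage: pipe each streamed chunk through the remapper, e.g.
// process.stdout.write(restoreSpecialTokens(chunk.response));
```

Note that with streaming, a token string can be split across chunk boundaries, so buffering the tail of each chunk before remapping would be safer than remapping chunk by chunk.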
{ "login": "endyjasmi", "id": 1048745, "node_id": "MDQ6VXNlcjEwNDg3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/1048745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/endyjasmi", "html_url": "https://github.com/endyjasmi", "followers_url": "https://api.github.com/users/endyjasmi/followers", "following_url": "https://api.github.com/users/endyjasmi/following{/other_user}", "gists_url": "https://api.github.com/users/endyjasmi/gists{/gist_id}", "starred_url": "https://api.github.com/users/endyjasmi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/endyjasmi/subscriptions", "organizations_url": "https://api.github.com/users/endyjasmi/orgs", "repos_url": "https://api.github.com/users/endyjasmi/repos", "events_url": "https://api.github.com/users/endyjasmi/events{/privacy}", "received_events_url": "https://api.github.com/users/endyjasmi/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5761/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6624
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6624/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6624/comments
https://api.github.com/repos/ollama/ollama/issues/6624/events
https://github.com/ollama/ollama/pull/6624
2,504,196,670
PR_kwDOJ0Z1Ps56VY1T
6,624
Update README.md with PyOllaMx
{ "login": "kspviswa", "id": 7476271, "node_id": "MDQ6VXNlcjc0NzYyNzE=", "avatar_url": "https://avatars.githubusercontent.com/u/7476271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kspviswa", "html_url": "https://github.com/kspviswa", "followers_url": "https://api.github.com/users/kspviswa/followers", "following_url": "https://api.github.com/users/kspviswa/following{/other_user}", "gists_url": "https://api.github.com/users/kspviswa/gists{/gist_id}", "starred_url": "https://api.github.com/users/kspviswa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kspviswa/subscriptions", "organizations_url": "https://api.github.com/users/kspviswa/orgs", "repos_url": "https://api.github.com/users/kspviswa/repos", "events_url": "https://api.github.com/users/kspviswa/events{/privacy}", "received_events_url": "https://api.github.com/users/kspviswa/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-09-04T03:04:25
2024-09-04T03:10:53
2024-09-04T03:10:53
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6624", "html_url": "https://github.com/ollama/ollama/pull/6624", "diff_url": "https://github.com/ollama/ollama/pull/6624.diff", "patch_url": "https://github.com/ollama/ollama/pull/6624.patch", "merged_at": "2024-09-04T03:10:53" }
Based on [this comment](https://github.com/ollama/ollama/issues/5937#issuecomment-2327760726), I'm creating this PR to add PyOllaMx to the list of Ollama-based applications.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6624/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5683
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5683/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5683/comments
https://api.github.com/repos/ollama/ollama/issues/5683/events
https://github.com/ollama/ollama/pull/5683
2,407,188,878
PR_kwDOJ0Z1Ps51TmfX
5,683
fix: solve network disruption during downloads, add OLLAMA_DOWNLOAD_CONN setting
{ "login": "supercurio", "id": 406003, "node_id": "MDQ6VXNlcjQwNjAwMw==", "avatar_url": "https://avatars.githubusercontent.com/u/406003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/supercurio", "html_url": "https://github.com/supercurio", "followers_url": "https://api.github.com/users/supercurio/followers", "following_url": "https://api.github.com/users/supercurio/following{/other_user}", "gists_url": "https://api.github.com/users/supercurio/gists{/gist_id}", "starred_url": "https://api.github.com/users/supercurio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/supercurio/subscriptions", "organizations_url": "https://api.github.com/users/supercurio/orgs", "repos_url": "https://api.github.com/users/supercurio/repos", "events_url": "https://api.github.com/users/supercurio/events{/privacy}", "received_events_url": "https://api.github.com/users/supercurio/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
7
2024-07-13T22:54:53
2024-12-10T23:20:15
2024-11-21T10:22:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5683", "html_url": "https://github.com/ollama/ollama/pull/5683", "diff_url": "https://github.com/ollama/ollama/pull/5683.diff", "patch_url": "https://github.com/ollama/ollama/pull/5683.patch", "merged_at": null }
The process of managing bandwidth for model downloads has been an ongoing journey. - Users have reported difficulties when downloading models since January in issue #2006 - The feature #2995 was reverted in March 2024 The situation has left the Ollama server with unsafe network concurrency defaults ever since, causing problems for many users and for people sharing the same network, whether or not they realize Ollama is the origin of their troubles. In the associated issue, users describe at length the problems caused and their creative mitigations. Fortunately, the root cause is simple: 64 concurrent connections, an extremely aggressive value guaranteed to challenge any network congestion algorithm. The fix is equally straightforward: default to 1 concurrent connection per model download. This PR addresses the root cause while adding the ability to configure network concurrency for downloads if required, via the `OLLAMA_DOWNLOAD_CONN` setting. This PR deliberately avoids complex, ineffective, or hard-to-configure workarounds, like dynamic concurrency adjustments or manual bandwidth limiting. From the associated commit message: The Ollama server now downloads models using a single connection. This change addresses the root cause of issue #2006 by following best practices instead of relying on workarounds. Users have been reporting problems associated with model downloads since January 2024, describing issues such as "hogging the entire device", "reliably and repeatedly kills my connection", "freezes completely leaving no choice but to hard reset", "when I download models, everyone in the office gets a really slow internet", and "when downloading large models, it feels like my home network is being DDoSed." The environment variable `OLLAMA_DOWNLOAD_CONN` can be set to control the number of concurrent connections, with a maximum value of 64 (the previous default, an aggressive value that is unsafe in some conditions). The new default value is 1, ensuring each Ollama download is given the same priority as other network activities. An entry in the FAQ describes how to use `OLLAMA_DOWNLOAD_CONN` for different use cases. This patch comes with a safe and unproblematic default value. Changes include updates to the `envconfig/config.go`, `cmd/cmd.go`, `server/download.go`, and `docs/faq.md` files.
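The actual implementation lives in the Go files listed above (this PR was ultimately not merged). Purely as an illustration of the behavior described, namely a default of 1 connection with a hard cap at the previous value of 64, the selection logic amounts to something like this sketch, written in TypeScript for brevity rather than taken from the PR:

```typescript
// Illustrative only: the PR implements this in Go (envconfig/config.go).
// Parse OLLAMA_DOWNLOAD_CONN, defaulting to 1 connection and capping at the
// previous default of 64.
function downloadConnections(env: NodeJS.ProcessEnv = process.env): number {
  const parsed = Number.parseInt(env.OLLAMA_DOWNLOAD_CONN ?? "", 10);
  if (!Number.isFinite(parsed) || parsed < 1) return 1; // safe default
  return Math.min(parsed, 64); // never exceed the old aggressive value
}
```

The design choice is to keep the knob one-dimensional: a single integer the user can raise deliberately, instead of dynamic adjustment or bandwidth shaping that is harder to reason about.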
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5683/reactions", "total_count": 11, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5683/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8614
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8614/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8614/comments
https://api.github.com/repos/ollama/ollama/issues/8614/events
https://github.com/ollama/ollama/issues/8614
2,813,943,892
I_kwDOJ0Z1Ps6nuWBU
8,614
Problems with deepseek-r1:671b, ollama keeps crashing on long answers
{ "login": "fabiounixpi", "id": 48057600, "node_id": "MDQ6VXNlcjQ4MDU3NjAw", "avatar_url": "https://avatars.githubusercontent.com/u/48057600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fabiounixpi", "html_url": "https://github.com/fabiounixpi", "followers_url": "https://api.github.com/users/fabiounixpi/followers", "following_url": "https://api.github.com/users/fabiounixpi/following{/other_user}", "gists_url": "https://api.github.com/users/fabiounixpi/gists{/gist_id}", "starred_url": "https://api.github.com/users/fabiounixpi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fabiounixpi/subscriptions", "organizations_url": "https://api.github.com/users/fabiounixpi/orgs", "repos_url": "https://api.github.com/users/fabiounixpi/repos", "events_url": "https://api.github.com/users/fabiounixpi/events{/privacy}", "received_events_url": "https://api.github.com/users/fabiounixpi/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
11
2025-01-27T20:04:40
2025-01-30T13:07:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi all, I'm using an R960 with 2 TB of RAM, so RAM is not a problem here. I'm experiencing constant crashes with ollama 0.5.7 and deepseek-r1:671b, even after increasing the context window with the command `/set parameter num_ctx 4096`. I also tried a second system, an R670 CSP with 1 TB of RAM, but the problem occurs in the same way. I'm not able to use a GPU due to the massive size of the model; in any case, plenty of CPU cores do the job for my current purposes. The OSes are Ubuntu 22.04.5 and 24.04.1. ### OS Linux ### GPU _No response_ ### CPU Intel ### Ollama version 0.5.7
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8614/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8614/timeline
null
null
false