| column | type | stats |
|---|---|---|
| url | string | lengths 51–54 |
| repository_url | string | 1 class (1 value) |
| labels_url | string | lengths 65–68 |
| comments_url | string | lengths 60–63 |
| events_url | string | lengths 58–61 |
| html_url | string | lengths 39–44 |
| id | int64 | 1.78B–2.82B |
| node_id | string | lengths 18–19 |
| number | int64 | 1–8.69k |
| title | string | lengths 1–382 |
| user | dict | |
| labels | list | lengths 0–5 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–2 |
| milestone | null | |
| comments | int64 | 0–323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 classes |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 2–118k (nullable ⌀) |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 60–63 |
| performed_via_github_app | null | |
| state_reason | string | 4 classes |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/ollama/ollama/issues/4637
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4637/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4637/comments
|
https://api.github.com/repos/ollama/ollama/issues/4637/events
|
https://github.com/ollama/ollama/issues/4637
| 2,317,069,953
|
I_kwDOJ0Z1Ps6KG66B
| 4,637
|
Windows GPU memory.available always reports the same value
|
{
"login": "mofanke",
"id": 54242816,
"node_id": "MDQ6VXNlcjU0MjQyODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/54242816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mofanke",
"html_url": "https://github.com/mofanke",
"followers_url": "https://api.github.com/users/mofanke/followers",
"following_url": "https://api.github.com/users/mofanke/following{/other_user}",
"gists_url": "https://api.github.com/users/mofanke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mofanke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mofanke/subscriptions",
"organizations_url": "https://api.github.com/users/mofanke/orgs",
"repos_url": "https://api.github.com/users/mofanke/repos",
"events_url": "https://api.github.com/users/mofanke/events{/privacy}",
"received_events_url": "https://api.github.com/users/mofanke/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-05-25T14:54:38
| 2024-05-31T19:59:43
| 2024-05-31T19:59:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On Windows, GPU memory.available always reports the same value, no matter how many models are loaded.
### OS
Windows
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.38
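A quick way to cross-check what the driver itself reports (a diagnostic sketch; nvidia-smi ships with the NVIDIA driver on Windows and these query fields are standard):
```bash
# Query total/used/free VRAM straight from the driver, before and after
# loading a model. If Ollama's memory.available never moves while these
# numbers do, the stale value is in how Ollama refreshes it.
nvidia-smi --query-gpu=memory.total,memory.used,memory.free --format=csv
```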
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4637/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/268
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/268/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/268/comments
|
https://api.github.com/repos/ollama/ollama/issues/268/events
|
https://github.com/ollama/ollama/pull/268
| 1,835,276,983
|
PR_kwDOJ0Z1Ps5XHyWq
| 268
|
Update README.md
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-03T15:23:07
| 2023-08-03T15:23:33
| 2023-08-03T15:23:32
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/268",
"html_url": "https://github.com/ollama/ollama/pull/268",
"diff_url": "https://github.com/ollama/ollama/pull/268.diff",
"patch_url": "https://github.com/ollama/ollama/pull/268.patch",
"merged_at": "2023-08-03T15:23:32"
}
| null |
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/268/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/673
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/673/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/673/comments
|
https://api.github.com/repos/ollama/ollama/issues/673/events
|
https://github.com/ollama/ollama/pull/673
| 1,922,391,010
|
PR_kwDOJ0Z1Ps5bsqjJ
| 673
|
clean up num_gpu calculation code
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-02T18:11:47
| 2023-10-02T18:53:42
| 2023-10-02T18:53:42
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/673",
"html_url": "https://github.com/ollama/ollama/pull/673",
"diff_url": "https://github.com/ollama/ollama/pull/673.diff",
"patch_url": "https://github.com/ollama/ollama/pull/673.patch",
"merged_at": "2023-10-02T18:53:42"
}
|
There were some unreachable code paths and unused variables here, left over from iterations on an old branch; remove them.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/673/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6554
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6554/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6554/comments
|
https://api.github.com/repos/ollama/ollama/issues/6554/events
|
https://github.com/ollama/ollama/issues/6554
| 2,494,275,014
|
I_kwDOJ0Z1Ps6Uq53G
| 6,554
|
Error: llama runner process has terminated: exit status 0xc0000135
|
{
"login": "balaji1732000",
"id": 70811241,
"node_id": "MDQ6VXNlcjcwODExMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/70811241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balaji1732000",
"html_url": "https://github.com/balaji1732000",
"followers_url": "https://api.github.com/users/balaji1732000/followers",
"following_url": "https://api.github.com/users/balaji1732000/following{/other_user}",
"gists_url": "https://api.github.com/users/balaji1732000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/balaji1732000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balaji1732000/subscriptions",
"organizations_url": "https://api.github.com/users/balaji1732000/orgs",
"repos_url": "https://api.github.com/users/balaji1732000/repos",
"events_url": "https://api.github.com/users/balaji1732000/events{/privacy}",
"received_events_url": "https://api.github.com/users/balaji1732000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-08-29T12:16:26
| 2024-09-01T23:20:14
| 2024-09-01T23:20:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I followed the documents below to run an Ollama model on the GPU using Intel IPEX:
https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md
https://www.intel.com/content/www/us/en/content-details/826081/running-ollama-with-open-webui-on-intel-hardware-platform.html
I couldn't get any inference output from the model.
Error: llama runner process has terminated: exit status 0xc0000135
Can anyone help solve this issue?
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6554/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6949
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6949/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6949/comments
|
https://api.github.com/repos/ollama/ollama/issues/6949/events
|
https://github.com/ollama/ollama/issues/6949
| 2,547,140,889
|
I_kwDOJ0Z1Ps6X0kkZ
| 6,949
|
Is there a better model that can accurately recognize image information? (Downloaded several multimodal models; image recognition is poor with all of them)
|
{
"login": "SDAIer",
"id": 174102361,
"node_id": "U_kgDOCmCXWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SDAIer",
"html_url": "https://github.com/SDAIer",
"followers_url": "https://api.github.com/users/SDAIer/followers",
"following_url": "https://api.github.com/users/SDAIer/following{/other_user}",
"gists_url": "https://api.github.com/users/SDAIer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SDAIer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SDAIer/subscriptions",
"organizations_url": "https://api.github.com/users/SDAIer/orgs",
"repos_url": "https://api.github.com/users/SDAIer/repos",
"events_url": "https://api.github.com/users/SDAIer/events{/privacy}",
"received_events_url": "https://api.github.com/users/SDAIer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-09-25T07:09:37
| 2025-01-08T00:00:26
| 2025-01-08T00:00:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
"Using fastgpt --onapi to call local Ollama models, I have downloaded several multimodal models, but the image recognition accuracy is not good. Is there a better model that can accurately recognize image information?"
使用fastgpt--onapi调用ollama本地模型,下载了好几个多模态的模型,图片识别效果都不准确。
有没有好一点的可以识别图片信息的模型
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.11
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6949/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8129
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8129/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8129/comments
|
https://api.github.com/repos/ollama/ollama/issues/8129/events
|
https://github.com/ollama/ollama/pull/8129
| 2,743,834,133
|
PR_kwDOJ0Z1Ps6Fb0jh
| 8,129
|
build: Enable -mf16c and -mfma in ROCm on x86 only
|
{
"login": "hack3ric",
"id": 18899791,
"node_id": "MDQ6VXNlcjE4ODk5Nzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/18899791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hack3ric",
"html_url": "https://github.com/hack3ric",
"followers_url": "https://api.github.com/users/hack3ric/followers",
"following_url": "https://api.github.com/users/hack3ric/following{/other_user}",
"gists_url": "https://api.github.com/users/hack3ric/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hack3ric/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hack3ric/subscriptions",
"organizations_url": "https://api.github.com/users/hack3ric/orgs",
"repos_url": "https://api.github.com/users/hack3ric/repos",
"events_url": "https://api.github.com/users/hack3ric/events{/privacy}",
"received_events_url": "https://api.github.com/users/hack3ric/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-12-17T03:20:59
| 2024-12-17T03:20:59
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8129",
"html_url": "https://github.com/ollama/ollama/pull/8129",
"diff_url": "https://github.com/ollama/ollama/pull/8129.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8129.patch",
"merged_at": null
}
|
These flags are not available outside of x86. I've successfully built Ollama with ROCm support on RISC-V hardware running Arch Linux RISC-V.
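For illustration, a hedged shell sketch of the guard this PR applies (the real change lives in the build scripts, so the variable names here are illustrative):
```bash
# Only pass the x86-specific FP16/FMA flags when building on x86_64;
# on RISC-V (or any other architecture) the compiler rejects them.
ARCH="$(uname -m)"
if [ "$ARCH" = "x86_64" ]; then
  HIPFLAGS="$HIPFLAGS -mf16c -mfma"
fi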
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8129/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8636
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8636/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8636/comments
|
https://api.github.com/repos/ollama/ollama/issues/8636/events
|
https://github.com/ollama/ollama/issues/8636
| 2,815,799,891
|
I_kwDOJ0Z1Ps6n1bJT
| 8,636
|
Uploaded a compressed package file: it cannot be decompressed and an error is reported
|
{
"login": "terling",
"id": 174825001,
"node_id": "U_kgDOCmueKQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174825001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/terling",
"html_url": "https://github.com/terling",
"followers_url": "https://api.github.com/users/terling/followers",
"following_url": "https://api.github.com/users/terling/following{/other_user}",
"gists_url": "https://api.github.com/users/terling/gists{/gist_id}",
"starred_url": "https://api.github.com/users/terling/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/terling/subscriptions",
"organizations_url": "https://api.github.com/users/terling/orgs",
"repos_url": "https://api.github.com/users/terling/repos",
"events_url": "https://api.github.com/users/terling/events{/privacy}",
"received_events_url": "https://api.github.com/users/terling/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-28T14:13:01
| 2025-01-29T23:29:46
| 2025-01-29T23:29:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Thanks for this great program, I love it! However, I uploaded a compressed package containing the project source code in the dialog interface, and an error occurred when the program was run. Can this problem be solved?

### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8636/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2328
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2328/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2328/comments
|
https://api.github.com/repos/ollama/ollama/issues/2328/events
|
https://github.com/ollama/ollama/issues/2328
| 2,114,965,582
|
I_kwDOJ0Z1Ps5-D9BO
| 2,328
|
llava:34b is not working properly on my 36GB MacBook M3 Max
|
{
"login": "microflyer",
"id": 880359,
"node_id": "MDQ6VXNlcjg4MDM1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/880359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/microflyer",
"html_url": "https://github.com/microflyer",
"followers_url": "https://api.github.com/users/microflyer/followers",
"following_url": "https://api.github.com/users/microflyer/following{/other_user}",
"gists_url": "https://api.github.com/users/microflyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/microflyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/microflyer/subscriptions",
"organizations_url": "https://api.github.com/users/microflyer/orgs",
"repos_url": "https://api.github.com/users/microflyer/repos",
"events_url": "https://api.github.com/users/microflyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/microflyer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-02T13:44:39
| 2024-02-02T16:46:57
| 2024-02-02T16:46:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When running the model with a picture, it returned random text like the output below. I am using Ollama version 0.1.22.
>>> /Users/danny/Downloads/ollama.png what is this
Added image '/Users/danny/Downloads/ollama.png'
username is first key ofthe.2!f+...0!5"0 g?..1...-
. 10dd.. t1.!.
....
...!:/.-.s[..,.,.:..)
. A.---..... .-!:
(^C
I have tried the other models (yi:34b, llava:13b, mistral...); all the others are working perfectly.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2328/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/901
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/901/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/901/comments
|
https://api.github.com/repos/ollama/ollama/issues/901/events
|
https://github.com/ollama/ollama/issues/901
| 1,960,492,610
|
I_kwDOJ0Z1Ps502r5C
| 901
|
Setting correct rope frequency on llama2-chinese
|
{
"login": "ddv404",
"id": 97394404,
"node_id": "U_kgDOBc4e5A",
"avatar_url": "https://avatars.githubusercontent.com/u/97394404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddv404",
"html_url": "https://github.com/ddv404",
"followers_url": "https://api.github.com/users/ddv404/followers",
"following_url": "https://api.github.com/users/ddv404/following{/other_user}",
"gists_url": "https://api.github.com/users/ddv404/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddv404/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddv404/subscriptions",
"organizations_url": "https://api.github.com/users/ddv404/orgs",
"repos_url": "https://api.github.com/users/ddv404/repos",
"events_url": "https://api.github.com/users/ddv404/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddv404/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2023-10-25T04:09:08
| 2024-04-17T02:15:38
| 2024-04-17T02:15:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
<img width="1566" alt="image" src="https://github.com/jmorganca/ollama/assets/97394404/007005ae-456c-4b66-a509-7c57849e79ec">
The answer keeps wrapping onto new lines. Why is that?
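Given the title, one hedged experiment is overriding the RoPE base frequency in a custom Modelfile; rope_frequency_base was a documented Modelfile parameter in Ollama releases of this period, and the value below is purely illustrative:
```bash
# Create a variant of llama2-chinese with an explicit RoPE base frequency,
# then run it to see whether the output formatting changes.
cat > Modelfile <<'EOF'
FROM llama2-chinese
PARAMETER rope_frequency_base 10000
EOF
ollama create llama2-chinese-rope -f Modelfile
ollama run llama2-chinese-rope
```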
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/901/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1868
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1868/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1868/comments
|
https://api.github.com/repos/ollama/ollama/issues/1868/events
|
https://github.com/ollama/ollama/issues/1868
| 2,072,575,275
|
I_kwDOJ0Z1Ps57iP0r
| 1,868
|
ollama in a Docker container - can't check healthiness - Support Ollama under Rosetta
|
{
"login": "FreakDev",
"id": 187670,
"node_id": "MDQ6VXNlcjE4NzY3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/187670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FreakDev",
"html_url": "https://github.com/FreakDev",
"followers_url": "https://api.github.com/users/FreakDev/followers",
"following_url": "https://api.github.com/users/FreakDev/following{/other_user}",
"gists_url": "https://api.github.com/users/FreakDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FreakDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FreakDev/subscriptions",
"organizations_url": "https://api.github.com/users/FreakDev/orgs",
"repos_url": "https://api.github.com/users/FreakDev/repos",
"events_url": "https://api.github.com/users/FreakDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/FreakDev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-01-09T15:14:05
| 2024-01-11T22:00:49
| 2024-01-11T22:00:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello!
I'm trying to set up Ollama to run in a Docker container, in order to run it in a RunPod serverless function. To do so, I'd like to pull a model file into my container image (embed the model file into the Docker image).
Basically, I'd like to have a script like this that runs during the build of the image:
```bash
#!/bin/bash
/bin/ollama serve &
while [[ "$(curl -s -o /dev/null -w '%{http_code}' http://0.0.0.0:11434)" != "200" ]]; do
    echo "waiting for ollama"
    sleep 1
done
/bin/ollama pull mistral
```
But this doesn't work; the curl never returns an HTTP code 200...
Any idea why, and/or how I could achieve this? (Maybe there is another/easier way of doing it.)
Thanks in advance!
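A minimal sketch of the pattern that usually works for baking a model into an image layer, assuming the stock ollama image layout (serve in the background, poll the root endpoint, pull, then stop the server so the build step can exit):
```bash
#!/bin/bash
# Build-time script: start the server, wait until it responds,
# pull the model into the image layer, then shut the server down.
set -e
/bin/ollama serve &
SERVER_PID=$!
until curl -sf http://127.0.0.1:11434/ > /dev/null; do
    echo "waiting for ollama"
    sleep 1
done
/bin/ollama pull mistral
kill "$SERVER_PID"
```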
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1868/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1257
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1257/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1257/comments
|
https://api.github.com/repos/ollama/ollama/issues/1257/events
|
https://github.com/ollama/ollama/pull/1257
| 2,008,758,473
|
PR_kwDOJ0Z1Ps5gQggk
| 1,257
|
env variable to configure defaultSessionDuration
|
{
"login": "Pr0dt0s",
"id": 24417072,
"node_id": "MDQ6VXNlcjI0NDE3MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/24417072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pr0dt0s",
"html_url": "https://github.com/Pr0dt0s",
"followers_url": "https://api.github.com/users/Pr0dt0s/followers",
"following_url": "https://api.github.com/users/Pr0dt0s/following{/other_user}",
"gists_url": "https://api.github.com/users/Pr0dt0s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pr0dt0s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pr0dt0s/subscriptions",
"organizations_url": "https://api.github.com/users/Pr0dt0s/orgs",
"repos_url": "https://api.github.com/users/Pr0dt0s/repos",
"events_url": "https://api.github.com/users/Pr0dt0s/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pr0dt0s/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-11-23T20:26:27
| 2024-05-07T23:47:45
| 2024-05-07T23:47:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1257",
"html_url": "https://github.com/ollama/ollama/pull/1257",
"diff_url": "https://github.com/ollama/ollama/pull/1257.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1257.patch",
"merged_at": null
}
|
This adds a simple environment variable to configure the defaultSessionDuration that is currently hardcoded to 5 minutes.
Fixes issues #1048 and #931
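A hedged usage sketch of the idea (later Ollama releases expose this knob as OLLAMA_KEEP_ALIVE; the variable name proposed in this PR may differ):
```bash
# Keep loaded models resident for 30 minutes instead of the 5-minute default;
# a negative value (e.g. -1) keeps them loaded indefinitely.
OLLAMA_KEEP_ALIVE=30m ollama serve
```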
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1257/reactions",
"total_count": 16,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1257/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3705
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3705/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3705/comments
|
https://api.github.com/repos/ollama/ollama/issues/3705/events
|
https://github.com/ollama/ollama/pull/3705
| 2,248,777,188
|
PR_kwDOJ0Z1Ps5s8vUi
| 3,705
|
Update api.md
|
{
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-17T17:04:11
| 2024-04-20T19:17:04
| 2024-04-20T19:17:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3705",
"html_url": "https://github.com/ollama/ollama/pull/3705",
"diff_url": "https://github.com/ollama/ollama/pull/3705.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3705.patch",
"merged_at": "2024-04-20T19:17:03"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3705/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1008
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1008/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1008/comments
|
https://api.github.com/repos/ollama/ollama/issues/1008/events
|
https://github.com/ollama/ollama/issues/1008
| 1,977,961,592
|
I_kwDOJ0Z1Ps515Ux4
| 1,008
|
Message repeated infinitely with the latest version of Zephyr and Ollama 0.1.8
|
{
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users/igorschlum/followers",
"following_url": "https://api.github.com/users/igorschlum/following{/other_user}",
"gists_url": "https://api.github.com/users/igorschlum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/igorschlum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/igorschlum/subscriptions",
"organizations_url": "https://api.github.com/users/igorschlum/orgs",
"repos_url": "https://api.github.com/users/igorschlum/repos",
"events_url": "https://api.github.com/users/igorschlum/events{/privacy}",
"received_events_url": "https://api.github.com/users/igorschlum/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-11-05T20:59:10
| 2023-11-24T10:43:03
| 2023-11-24T10:43:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I installed Ollama 0.1.8 and ran Zephyr. I asked in French how many languages Zephyr could translate, and Zephyr answered me with the same paragraph repeatedly:
Note : Les tarifs sont approximatifs et peuvent varier selon la complexité du texte, sa
longueur et la disponibilité des traducteurs qualifiés dans la langue demandée. Pour
connaître le coût exact de votre demande, veuillez contacter le service clientèle ou
utilisez notre outil en ligne pour obtenir une estimation préliminaire.
(base) igor@macIgor ~ % ollama run zephyr
pulling manifest
pulling 0e655574a746... 100% |███████████████████████████| (4.1/4.1 GB, 4.1 MB/s)
pulling a96be15318d6... 100% |██████████████████████████████████| (101/101 B, 46 B/s)
pulling fe135e2a970c... 100% |████████████████████████████████████| (97/97 B, 40 B/s)
pulling 9f6045c34448... 100% |█████████████████████████████████| (381/381 B, 167 B/s)
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> tu sais traduire de quelle langue vers quelle langue ?
Je ne peux pas déterminer de quelle langue vous parlez actuellement ou vers quel langage
vous souhaitez traduire. Veuillez fournir plus d'informations pour que j' puisse vous
aider correctement. Pouvez-vous préciser la langue dont vous disiez «tu» au début de
votre message, ainsi que la langue vers laquelle vous souhaitez traduire ?
>>> je voudrai que tu me donnes la liste des langues dans lesquelles je peux t'écrire ?
Bien sûr! Je peux vous écrire dans de nombreuses langues. Voici une liste de
quelques-unes : anglais, français, allemand, espagnol, portugais (brésilien ou
européen), italien, néerlandais, polonais, roumain et russe. Lettrons nous sommes
également en mesure de fournir des traductions dans une variété d'autres langues sur
demande. Quelle langue souhaitez-vous que j' utilise pour vous répondre ?
>>> j'aimerai que tu me donnes la liste exaustives des langues dans lesquelles je peux t'écrir
... e et pas uniquement une liste de quelques langues en exemple.
Bien sûr, voici une liste exhaustive des principales langues que je peux utiliser pour
vous écrire :
- Afrikaans (Afrique du Sud)
- Albanais (Albanie)
- Amharique (Éthiopie)
- Arabe (variétés régionales, par exemple : arabe egyptien, arabe sahraoui, arabe
marocain...)
- Arménien (Arménie)
- Azerbaïdjan (Azerbaïdjan)
- Bashi (Taiwan)
- Belarussien (Biélorussie)
- Bengali (Bangladesh)
- Bosnien (Bosnie-Herzégovine)
- Breton (France)
- Bulgare (Bulgarie)
- Catalan (Espagne, Andorre...)
- Cebuano (Philippines)
- Chinois (de nombreuses variétés régionales, par exemple : cantonais, mandarin
standard, hakka...)
- Corsicain (Corse, France)
- Croate (Croatie)
- Cymrique (Pays de Galles, Royaume-Uni)
- Dari (Afghanistan)
- Dinka (Sudan du Sud)
- Dzongkha (Bhoutan)
- Eestonien (Estonie)
- Élan (Papouasie-Nouvelle-Guinée)
- Emilien (Québec, Canada)
- English Creole (Caribe, Afrique, Asie...)
- Esperanto (International)
- Estonien (Estonie)
- Farsi (Ir Iran)
- Filipino (Philippines)
- Finnois (Finlande)
- Franco-provençal (France)
- Frison (Pays-Bas, Allemagne...)
- Galicien (Espagne)
- Géorgien (Géorgie)
- Germain (de nombreuses variétés régionales, par exemple : allemand standard, bavarois,
souabe...)
- Guarani (Paraguay)
- Gujarati (Inde)
- Hausa (Nigeria, Tchad...)
- Hébreu (Israël)
- Hindi (Inde)
- Histonien (Pays-Bas)
- Hongrois (Hongrie)
- Icelandais (Islande)
- Idéophone camerounais (Cameroun)
- Indo-portugais (Goa, India...)
- Indonesien (Indonésie)
- Irlandais (Irlande)
- Islandais (Islande)
- Italien (de nombreuses variétés régionales, par exemple : italien standard, sicilien,
napolitain...)
- Japonais (Japon)
- Javanais (Indonésie)
- Kazakh (Kazakhstan)
- Kirghize (Kirghizistan)
- Kinyarwanda (Rwanda)
- Kurdish sorani (Irak)
- Kurde sorani (Syrie, Irak...)
- Lakota (États-Unis)
- Lamba (Zambie)
- Languedocien (France)
- Laothian (Laos)
- Letton (Lettonie)
- Limburg (Pays-Bas, Allemagne...)
- Litouanien (Lituanie)
- Luxembourgeois (Luxembourg)
- Maori (Nouvelle-Zélande)
- Malais (Malaisie)
- Mandarin (Chine)
- Marathi (Inde)
- Māori (Nouvelle-Zélande)
- Maya q'eqchi' (Guatemala)
- Macedonien (Macédoine du Nord)
- Malgache (Madagascar)
- Mandarin (Chine)
- Manchou (Chine)
- Marathi (Inde)
- Marathi Devanāgarī (Inde)
- Moldave (Moldavie, Roumanie...)
- Mongol (Mongolie)
- Monténégrin (Monténégro)
- Náhuatl (Mexique)
- Navajo (États-Unis)
- Neapolitan (Italie)
- Ndebele (Afrique du Sud, Swaziland...)
- Norvégien (Norvège)
- Nyawá (Congo)
- Occitan (France)
- Ojibwe (États-Unis)
- Oromo (Éthiopie)
- Palu' (Papouasie-Nouvelle-Guinée)
- Papou (Papouasie-Nouvelle-Guinée)
- Paraguayan Guarani (Paraguay)
- Persan (Iran, Afghanistan...)
- Polonais (Pologne)
- Pontique (Grèce)
- Portuguese (Portugal, Brésil...)
- Q'eqchi' (Guatemala)
- Quechua (Andes)
- Rarotongan (Îles Cook)
- Romani (International)
- Roumain (Roumanie)
- Russien (Russie)
- Samoan (Samoa)
- Sango (Centrafrique)
- Sarawak malais (Malaisie)
- Serbe (Serbie)
- Shona (Zimbabwe)
- Slovène (Slovénie)
- Slovaque (Slovaquie)
- Somali (Somalie)
- Soninke (Mauritanie, Mali...)
- Sorani kurde (Irak)
- Spanish (Espagne, Pérou...)
- Sri Lanka Tamil (Sri Lanka)
- Suazilandswati (Swaziland)
- Swahili (Afrique orientale...)
- Swazi (Swaziland)
- Taïtiien (Polynésie française)
- Tat (Kirghizistan, Kazakhstan...)
- Tétum (Timor-Leste)
- Thaï (Thaïlande)
- Tigrinya (Érythrée)
- Timorese (Timor-Leste)
- Tlingit (Alaska)
- Tokelauan (Tokelau)
- Tokpana (Papouasie-Nouvelle-Guinée)
- Tongarevena (Tahiti)
- Tshiluba (Congo, Angola...)
- Tsonga (Afrique du Sud)
- Turkish (Turquie)
- Ukrainien (Ukraine)
- Uzbek (Ouzbékistan)
- Venda (Afrique du Sud)
- Vietnamien (Viêt Nam)
- Walser (Suisse, Allemagne...)
- Wallisien (Wallis et Futuna)
- Waray (Philippines)
- Welsh (Pays de Galles, Royaume-Uni)
- Wolof (Sénégal, Gambie...)
- Yoruba (Nigeria)
- Zapotec de Oaxaca (Mexique)
- Zhuang (Chine)
- Zulu (Afrique du Sud)
- Zwela (Swaziland)
Pour les langues qui ne sont pas indiquées ici, veuillez contacter le service clientèle.
Nous essayerons de vous fournir un traducteur qualifié dans cette langue pour vous aider
à traiter votre document.
Note : Cette liste n'est pas exhaustive et nous pouvons toujours essayer d'effectuer
votre traduction même si votre langue n'apparaît pas sur cette liste, tant que le texte
est facilement lisible par un humain et qu'il ne contient pas de caractères spéciaux ou
de codes particuliers. Si vous souhaitez obtenir un devis pour une traduction dans une
langue non indiquée ici, veuillez contacter le service clientèle pour en savoir plus sur
la disponibilité d'un traducteur qualifié pour cette langue. Nous serons heureux de vous
fournir tout l'assistant que vous avez besoin, à condition que votre texte soit
facilement lisible par un humain et qu'il ne contienne pas de caractères spéciaux ou de
codes particuliers.
Note : La disponibilité des traducteurs qualifiés dans chaque langue peut varier selon
la demande actuelle, donc il est préférable de contacter le service clientèle pour
obtenir un devis et connaître l'état actuel de la disponibilité d'un traducteur qualifié
pour votre langue préférée.
Note : Pour les langues très rares ou exotiques, nous recommandons de contacter le
service clientèle avant de soumettre votre demande afin qu'on puisse vous fournir un
devis et déterminer la disponibilité d'un traducteur qualifié pour cette langue.
Note : Les prix peuvent varier selon la complexité du texte, la longueur de celui-ci et
la rareté de la langue. Pour obtenir un devis précis pour votre demande, veuillez
contacter le service clientèle ou utilisez notre outil en ligne pour obtenir une
estimation préliminaire.
Note : Les prix indiqués sont estimatifs et peuvent varier selon la complexité du texte,
sa longueur et la disponibilité des traducteurs qualifiés dans la langue demandée. Pour
connaître le coût exact de votre demande, veuillez contacter le service clientèle ou
utilisez notre outil en ligne pour obtenir une estimation préliminaire.
Note : Les tarifs peuvent varier selon la complexité du texte, sa longueur et la
disponibilité des traducteurs qualifiés dans la langue demandée. Pour obtenir un devis
précis pour votre demande, veuillez contacter le service clientèle ou utilisez notre
outil en ligne pour obtenir une estimation préliminaire.
Note : Les tarifs indiqués sont estimatifs et peuvent varier selon la complexité du
texte, sa longueur et la disponibilité des traducteurs qualifiés dans la langue
demandée. Pour connaître le coût exact de votre demande, veuillez contacter le service
clientèle ou utilisez notre outil en ligne pour obtenir une estimation préliminaire.
Note : Les prix sont indicatifs et peuvent varier selon la complexité du texte, sa
longueur et la disponibilité des traducteurs qualifiés dans la langue demandée. Pour
connaître le coût exact de votre demande, veuillez contacter le service clientèle ou
utilisez notre outil en ligne pour obtenir une estimation préliminaire.
Note : Les tarifs sont approximatifs et peuvent varier selon la complexité du texte, sa
longueur et la disponibilité des traducteurs qualifiés dans la langue demandée. Pour
connaître le coût exact de votre demande, veuillez contacter le service clientèle ou
utilisez notre outil en ligne pour obtenir une estimation préliminaire.
Note : Les prix sont approximatifs et peuvent varier selon la complexité du texte, sa
longueur et la disponibilité des traducteurs qualifiés dans la langue demandée. Pour
connaître le coût exact de votre demande, veuillez contacter le service clientèle ou
utilisez notre outil en ligne pour obtenir une estimation préliminaire.
|
{
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users/igorschlum/followers",
"following_url": "https://api.github.com/users/igorschlum/following{/other_user}",
"gists_url": "https://api.github.com/users/igorschlum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/igorschlum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/igorschlum/subscriptions",
"organizations_url": "https://api.github.com/users/igorschlum/orgs",
"repos_url": "https://api.github.com/users/igorschlum/repos",
"events_url": "https://api.github.com/users/igorschlum/events{/privacy}",
"received_events_url": "https://api.github.com/users/igorschlum/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1008/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2457
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2457/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2457/comments
|
https://api.github.com/repos/ollama/ollama/issues/2457/events
|
https://github.com/ollama/ollama/issues/2457
| 2,129,222,077
|
I_kwDOJ0Z1Ps5-6Vm9
| 2,457
|
Support for more image types
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-02-11T23:27:16
| 2024-02-12T07:35:43
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently image models such as Llava only support `png` and `jpeg`. Add support for more such as `webp`, `avif` and others.
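In the meantime, a workaround is to transcode unsupported formats before sending them to the model. A minimal sketch with Pillow (assumptions: Pillow is installed, and `photo.webp` is a placeholder input file):
```python
from PIL import Image  # pip install pillow

# Transcode a webp image to png so it can be sent to a model that
# currently only accepts png/jpeg.
Image.open("photo.webp").convert("RGB").save("photo.png", format="PNG")
```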
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2457/reactions",
"total_count": 7,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2457/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1497
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1497/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1497/comments
|
https://api.github.com/repos/ollama/ollama/issues/1497/events
|
https://github.com/ollama/ollama/pull/1497
| 2,039,058,173
|
PR_kwDOJ0Z1Ps5h3LQo
| 1,497
|
patches: Clean up llama.cpp patches, update submodules to latest upstream
|
{
"login": "tmc",
"id": 3977,
"node_id": "MDQ6VXNlcjM5Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tmc",
"html_url": "https://github.com/tmc",
"followers_url": "https://api.github.com/users/tmc/followers",
"following_url": "https://api.github.com/users/tmc/following{/other_user}",
"gists_url": "https://api.github.com/users/tmc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tmc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tmc/subscriptions",
"organizations_url": "https://api.github.com/users/tmc/orgs",
"repos_url": "https://api.github.com/users/tmc/repos",
"events_url": "https://api.github.com/users/tmc/events{/privacy}",
"received_events_url": "https://api.github.com/users/tmc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-12-13T06:41:36
| 2024-01-18T22:28:57
| 2024-01-18T22:28:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1497",
"html_url": "https://github.com/ollama/ollama/pull/1497",
"diff_url": "https://github.com/ollama/ollama/pull/1497.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1497.patch",
"merged_at": null
}
|
This updates the llama.cpp submodules to the latest commit (fecac4) and removes the patches that have already landed upstream (most of them).
Since the "ggml" source tree no longer appears to need patching, it seems like we can reduce down to one submodule here, but I did not perform that refactor for simplicity; let me know if you'd like me to tackle that.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1497/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1497/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2616
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2616/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2616/comments
|
https://api.github.com/repos/ollama/ollama/issues/2616/events
|
https://github.com/ollama/ollama/issues/2616
| 2,144,627,054
|
I_kwDOJ0Z1Ps5_1Glu
| 2,616
|
Change Bind IP address
|
{
"login": "Jacoub",
"id": 11414612,
"node_id": "MDQ6VXNlcjExNDE0NjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/11414612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jacoub",
"html_url": "https://github.com/Jacoub",
"followers_url": "https://api.github.com/users/Jacoub/followers",
"following_url": "https://api.github.com/users/Jacoub/following{/other_user}",
"gists_url": "https://api.github.com/users/Jacoub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jacoub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jacoub/subscriptions",
"organizations_url": "https://api.github.com/users/Jacoub/orgs",
"repos_url": "https://api.github.com/users/Jacoub/repos",
"events_url": "https://api.github.com/users/Jacoub/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jacoub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-02-20T15:19:58
| 2024-05-31T22:13:44
| 2024-02-20T18:49:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Tried changing the bind address from localhost:11434 to IP:11434 to serve requests from Ollama WebUI running on a separate Docker host.
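For reference, the listen address is controlled by the `OLLAMA_HOST` environment variable (e.g. `OLLAMA_HOST=0.0.0.0:11434`). A quick reachability check from the WebUI host, as a sketch (192.168.1.10 is a placeholder for the server's address):
```python
import json
import urllib.request

# Verify the server answers on the non-localhost bind address after it
# was started with OLLAMA_HOST=0.0.0.0:11434.
with urllib.request.urlopen("http://192.168.1.10:11434/api/tags", timeout=5) as r:
    print(json.loads(r.read())["models"])
```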
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2616/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4008
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4008/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4008/comments
|
https://api.github.com/repos/ollama/ollama/issues/4008/events
|
https://github.com/ollama/ollama/issues/4008
| 2,267,745,810
|
I_kwDOJ0Z1Ps6HKw4S
| 4,008
|
Compute Capability Misidentification with PhysX cudart library
|
{
"login": "aaronjrod",
"id": 35236356,
"node_id": "MDQ6VXNlcjM1MjM2MzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/35236356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronjrod",
"html_url": "https://github.com/aaronjrod",
"followers_url": "https://api.github.com/users/aaronjrod/followers",
"following_url": "https://api.github.com/users/aaronjrod/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronjrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronjrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronjrod/subscriptions",
"organizations_url": "https://api.github.com/users/aaronjrod/orgs",
"repos_url": "https://api.github.com/users/aaronjrod/repos",
"events_url": "https://api.github.com/users/aaronjrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronjrod/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 24
| 2024-04-28T19:03:26
| 2024-09-25T17:17:55
| 2024-05-06T20:30:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama server incorrectly identifies the Compute Capability of my GPU (detects 1.0 instead of 5.2). It seems to me that this is due to a recent change in [gpu/gpu.go](https://github.com/ollama/ollama/commit/34b9db5afc43b352c5ef04fe6ef52684bfdd57b5#diff-b3bde438f86c17903c484c6a1f48f7c98437f5ed1906742c3075342d748ce7ec). Thanks!
Previously: CUDART CUDA Compute Capability detected: 5.2
Now: CUDA GPU is too old. Compute Capability detected: 1.0
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33-rc5
## Workaround
Remove `c:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\` from your `PATH` environment variable so Ollama does not use this cuda runtime library.
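To confirm which cudart copies are reachable before editing `PATH`, a small scan helps (a sketch; assumes Windows, where the runtime ships as `cudart64_*.dll`):
```python
import glob
import os

# Print every CUDA runtime DLL visible through PATH; if the PhysX copy
# under NVIDIA Corporation\PhysX\Common appears, it is likely the one
# being picked up instead of the full CUDA toolkit's runtime.
for d in os.environ.get("PATH", "").split(os.pathsep):
    for hit in glob.glob(os.path.join(d, "cudart64_*.dll")):
        print(hit)
```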
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4008/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6060
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6060/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6060/comments
|
https://api.github.com/repos/ollama/ollama/issues/6060/events
|
https://github.com/ollama/ollama/issues/6060
| 2,436,182,125
|
I_kwDOJ0Z1Ps6RNTBt
| 6,060
|
Update template: Llama 3.1
|
{
"login": "MaxJa4",
"id": 74194322,
"node_id": "MDQ6VXNlcjc0MTk0MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/74194322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaxJa4",
"html_url": "https://github.com/MaxJa4",
"followers_url": "https://api.github.com/users/MaxJa4/followers",
"following_url": "https://api.github.com/users/MaxJa4/following{/other_user}",
"gists_url": "https://api.github.com/users/MaxJa4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaxJa4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaxJa4/subscriptions",
"organizations_url": "https://api.github.com/users/MaxJa4/orgs",
"repos_url": "https://api.github.com/users/MaxJa4/repos",
"events_url": "https://api.github.com/users/MaxJa4/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaxJa4/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-29T19:28:54
| 2024-08-07T16:28:13
| 2024-08-07T16:28:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Meta / HF updated the tokenizer config (specifically the chat template) of all the Llama 3.1 (instruct) models a few hours ago:
- [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/commit/b2a4d0f33b41fcd59a6d31662cc63b8d53367e1e)
- [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct/commit/168b5b277b07436c229bd69844a73872eb8b73a8)
- [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct/commit/f9801cba95a53242b3cc928a4a418d12571d1c5f)
Since there have been some reports of quality issues with the models themselves (not with Ollama specifically), it may help fix things.
The new chat templates are quite different from the original ones.
Just as a heads up, so it doesn't get overlooked.
Thanks! :)
Update: it changed again
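To check whether a local copy already carries the new template, it can be read back through the API; a sketch using the `/api/show` endpoint (the model name `llama3.1` is assumed):
```python
import json
import urllib.request

# Fetch the chat template Ollama currently stores for llama3.1 so it can
# be diffed against the updated tokenizer_config.json on Hugging Face.
req = urllib.request.Request(
    "http://localhost:11434/api/show",
    data=json.dumps({"name": "llama3.1"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as r:
    print(json.loads(r.read())["template"])
```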
|
{
"login": "MaxJa4",
"id": 74194322,
"node_id": "MDQ6VXNlcjc0MTk0MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/74194322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaxJa4",
"html_url": "https://github.com/MaxJa4",
"followers_url": "https://api.github.com/users/MaxJa4/followers",
"following_url": "https://api.github.com/users/MaxJa4/following{/other_user}",
"gists_url": "https://api.github.com/users/MaxJa4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaxJa4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaxJa4/subscriptions",
"organizations_url": "https://api.github.com/users/MaxJa4/orgs",
"repos_url": "https://api.github.com/users/MaxJa4/repos",
"events_url": "https://api.github.com/users/MaxJa4/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaxJa4/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6060/reactions",
"total_count": 24,
"+1": 24,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6060/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5117
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5117/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5117/comments
|
https://api.github.com/repos/ollama/ollama/issues/5117/events
|
https://github.com/ollama/ollama/pull/5117
| 2,360,419,823
|
PR_kwDOJ0Z1Ps5y26zJ
| 5,117
|
Handle models with divergent layer sizes
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-18T18:06:13
| 2024-06-18T18:36:54
| 2024-06-18T18:36:51
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5117",
"html_url": "https://github.com/ollama/ollama/pull/5117",
"diff_url": "https://github.com/ollama/ollama/pull/5117.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5117.patch",
"merged_at": "2024-06-18T18:36:51"
}
|
The recent refactoring of the memory prediction assumed all layers are the same size, but for some models (like deepseek-coder-v2) this is not the case, so our predictions were significantly off.
Without the fix:
```
time=2024-06-18T11:03:42.708-07:00 level=INFO source=memory.go:303 msg="offload to metal" layers.requested=-1 layers.model=28 layers.offload=28 layers.split="" memory.available="[96.0 GiB]" memory.required.full="2.4 GiB" memory.required.partial="2.4 GiB" memory.required.kv="432.0 MiB" memory.required.allocations="[2.4 GiB]" memory.weights.total="1.6 GiB" memory.weights.repeating="1.4 GiB" memory.weights.nonrepeating="164.1 MiB" memory.graph.full="72.0 MiB" memory.graph.partial="72.0 MiB"
```
With the fix:
```
time=2024-06-18T11:02:47.707-07:00 level=INFO source=memory.go:309 msg="offload to metal" layers.requested=-1 layers.model=28 layers.offload=28 layers.split="" memory.available="[96.0 GiB]" memory.required.full="9.2 GiB" memory.required.partial="9.2 GiB" memory.required.kv="432.0 MiB" memory.required.allocations="[9.2 GiB]" memory.weights.total="8.4 GiB" memory.weights.repeating="8.3 GiB" memory.weights.nonrepeating="164.1 MiB" memory.graph.full="72.0 MiB" memory.graph.partial="72.0 MiB"
```
Partial fix for #5113 but we'll need additional graph updates...
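The underlying arithmetic is simple to illustrate: extrapolating one layer's size across the whole model under-counts when later layers are larger. An illustrative sketch with made-up sizes (not the actual Go code):
```python
# Hypothetical per-layer weight sizes in MiB for a model whose layers
# are not uniform, as with deepseek-coder-v2. Placeholder values only.
layer_sizes = [164, 290, 290, 330, 330, 330]

uniform_estimate = len(layer_sizes) * layer_sizes[0]  # old assumption
actual_total = sum(layer_sizes)                       # per-layer accounting

print(f"uniform estimate: {uniform_estimate} MiB, actual total: {actual_total} MiB")
```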
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5117/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/74
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/74/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/74/comments
|
https://api.github.com/repos/ollama/ollama/issues/74/events
|
https://github.com/ollama/ollama/pull/74
| 1,801,981,185
|
PR_kwDOJ0Z1Ps5VXXLW
| 74
|
Timings
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-13T01:20:54
| 2023-07-13T17:17:22
| 2023-07-13T17:17:14
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/74",
"html_url": "https://github.com/ollama/ollama/pull/74",
"diff_url": "https://github.com/ollama/ollama/pull/74.diff",
"patch_url": "https://github.com/ollama/ollama/pull/74.patch",
"merged_at": "2023-07-13T17:17:14"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/74/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/74/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3628
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3628/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3628/comments
|
https://api.github.com/repos/ollama/ollama/issues/3628/events
|
https://github.com/ollama/ollama/issues/3628
| 2,241,654,325
|
I_kwDOJ0Z1Ps6FnO41
| 3,628
|
Fails to pull model
|
{
"login": "ahmetkca",
"id": 74574469,
"node_id": "MDQ6VXNlcjc0NTc0NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/74574469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmetkca",
"html_url": "https://github.com/ahmetkca",
"followers_url": "https://api.github.com/users/ahmetkca/followers",
"following_url": "https://api.github.com/users/ahmetkca/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmetkca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmetkca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmetkca/subscriptions",
"organizations_url": "https://api.github.com/users/ahmetkca/orgs",
"repos_url": "https://api.github.com/users/ahmetkca/repos",
"events_url": "https://api.github.com/users/ahmetkca/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmetkca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-04-13T16:51:20
| 2024-04-15T23:11:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
❯ ollama pull gemma
pulling manifest
pulling ef311de6af9d... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 2.5 GB
Error: remove /Users/ahmetkca/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77-partial-0: no such file or directory
```
### What did you expect to see?
_No response_
### Steps to reproduce
ollama pull gemma
### Are there any recent changes that introduced the issue?
_No response_
### OS
macOS
### Architecture
arm64
### Platform
_No response_
### Ollama version
1.30.0
### GPU
Apple
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3628/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8130
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8130/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8130/comments
|
https://api.github.com/repos/ollama/ollama/issues/8130/events
|
https://github.com/ollama/ollama/pull/8130
| 2,743,930,230
|
PR_kwDOJ0Z1Ps6FcJbC
| 8,130
|
llm: do not silently fail for supplied, but invalid formats
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-17T04:55:01
| 2024-12-17T15:54:57
| 2024-12-17T05:57:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8130",
"html_url": "https://github.com/ollama/ollama/pull/8130",
"diff_url": "https://github.com/ollama/ollama/pull/8130.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8130.patch",
"merged_at": "2024-12-17T05:57:49"
}
|
The changes in #8002 fixed bugs that mangled JSON Schemas. They also
fixed a bug where the server would silently fail when clients
requested invalid formats. Unfortunately, they also introduced a bug
where the server would reject requests with an empty format, which
should be allowed.
The change in #8127 updated the code to allow the empty format, but also
reintroduced the regression where the server would silently fail when
the format was set, but invalid.
This commit fixes both regressions. The server does not reject the empty
format, but it does reject invalid formats. It also adds tests to help
us catch regressions in the future.
Also, the updated code provides a more detailed error message when a
client sends a non-empty, but invalid format, echoing the invalid format
in the response.
This commit also takes the opportunity to remove superfluous linter
checks.
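The resulting rule is easiest to see spelled out; an illustrative sketch in Python (the server's real implementation is Go, and the set of accepted values here is an assumption):
```python
import json

def check_format(fmt):
    """Illustrative rule only: allow empty, allow the "json" shorthand,
    allow a JSON Schema string, and reject everything else with a
    message that echoes the bad value back to the client."""
    if fmt in (None, ""):
        return  # an empty format is valid and means "unconstrained"
    if fmt == "json":
        return
    try:
        json.loads(fmt)
    except json.JSONDecodeError:
        raise ValueError(f"invalid format: {fmt!r}")

for f in ("", "json", '{"type": "object"}', "not-a-schema"):
    try:
        check_format(f)
        print(repr(f), "accepted")
    except ValueError as err:
        print(err)
```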
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8130/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5495
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5495/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5495/comments
|
https://api.github.com/repos/ollama/ollama/issues/5495/events
|
https://github.com/ollama/ollama/issues/5495
| 2,391,846,622
|
I_kwDOJ0Z1Ps6OkK7e
| 5,495
|
The quality of the results returned by the embedding model become worse
|
{
"login": "wwjCMP",
"id": 32979859,
"node_id": "MDQ6VXNlcjMyOTc5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwjCMP",
"html_url": "https://github.com/wwjCMP",
"followers_url": "https://api.github.com/users/wwjCMP/followers",
"following_url": "https://api.github.com/users/wwjCMP/following{/other_user}",
"gists_url": "https://api.github.com/users/wwjCMP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwjCMP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwjCMP/subscriptions",
"organizations_url": "https://api.github.com/users/wwjCMP/orgs",
"repos_url": "https://api.github.com/users/wwjCMP/repos",
"events_url": "https://api.github.com/users/wwjCMP/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwjCMP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
open
| false
| null |
[] | null | 6
| 2024-07-05T05:28:44
| 2024-12-08T15:26:38
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The quality of the results returned by the embedding model is now much worse than in the previous version.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48
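Absent more detail, one way to quantify "worse" is to pin a sentence pair and compare cosine similarity across versions; a sketch against the `/api/embeddings` endpoint (`nomic-embed-text` is a placeholder model name):
```python
import json
import math
import urllib.request

def embed(prompt, model="nomic-embed-text"):
    """Return the embedding vector for a prompt from a local Ollama server."""
    req = urllib.request.Request(
        "http://localhost:11434/api/embeddings",
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read())["embedding"]

def cosine(a, b):
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

# Run the same comparison on each Ollama version and compare the scores.
print(cosine(embed("a cat sat on the mat"), embed("a kitten rested on the rug")))
```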
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5495/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2279
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2279/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2279/comments
|
https://api.github.com/repos/ollama/ollama/issues/2279/events
|
https://github.com/ollama/ollama/pull/2279
| 2,108,320,414
|
PR_kwDOJ0Z1Ps5lefy5
| 2,279
|
Add support for libcudart.so for CUDA devices (Adds Jetson support)
|
{
"login": "remy415",
"id": 105550370,
"node_id": "U_kgDOBkqSIg",
"avatar_url": "https://avatars.githubusercontent.com/u/105550370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remy415",
"html_url": "https://github.com/remy415",
"followers_url": "https://api.github.com/users/remy415/followers",
"following_url": "https://api.github.com/users/remy415/following{/other_user}",
"gists_url": "https://api.github.com/users/remy415/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remy415/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remy415/subscriptions",
"organizations_url": "https://api.github.com/users/remy415/orgs",
"repos_url": "https://api.github.com/users/remy415/repos",
"events_url": "https://api.github.com/users/remy415/events{/privacy}",
"received_events_url": "https://api.github.com/users/remy415/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 42
| 2024-01-30T16:50:18
| 2024-03-30T15:58:30
| 2024-03-25T19:46:28
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2279",
"html_url": "https://github.com/ollama/ollama/pull/2279",
"diff_url": "https://github.com/ollama/ollama/pull/2279.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2279.patch",
"merged_at": "2024-03-25T19:46:28"
}
|
Added libcudart.so support to gpu.go for CUDA devices that are missing libnvidia-ml.so. The CUDA library handling is split into nvml (libnvidia-ml.so) and cudart (libcudart.so), and Ollama can work with either. Tested on a Jetson device and on Windows 11 in WSL2.
Devices used to test:
Jetson Orin Nano 8Gb
Jetpack 5.1.2, L4T 35.4.1
CUDA 11-8
CUDA Capability Supported 8.7
Go version 1.26.1
Cmake 3.28.1
nvcc 11.8.89
AMD Ryzen 3950x
NVidia RTX 3090ti
WSL2 running Ubuntu 22.04
WSL CUDA Toolkit v12.3 installed
Edited for updates
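The nvml-first, cudart-fallback probing described here can be mimicked outside Go; a minimal ctypes sketch (library SONAMEs vary by install, so the names below are assumptions):
```python
import ctypes

# Try the NVIDIA management library first, then fall back to the CUDA
# runtime, mirroring the nvml/cudart split this PR adds.
for name in ("libnvidia-ml.so.1", "libcudart.so"):
    try:
        ctypes.CDLL(name)
        print("loaded", name)
        break
    except OSError:
        print("missing", name)
else:
    print("neither nvml nor cudart could be loaded")
```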
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2279/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2279/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4981
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4981/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4981/comments
|
https://api.github.com/repos/ollama/ollama/issues/4981/events
|
https://github.com/ollama/ollama/issues/4981
| 2,346,543,830
|
I_kwDOJ0Z1Ps6L3WrW
| 4,981
|
Error Pulling any model - "Error: pull model manifest: 200: stream error: stream ID 3; NO_ERROR; received from peer"
|
{
"login": "ziptron",
"id": 17092430,
"node_id": "MDQ6VXNlcjE3MDkyNDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/17092430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ziptron",
"html_url": "https://github.com/ziptron",
"followers_url": "https://api.github.com/users/ziptron/followers",
"following_url": "https://api.github.com/users/ziptron/following{/other_user}",
"gists_url": "https://api.github.com/users/ziptron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ziptron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ziptron/subscriptions",
"organizations_url": "https://api.github.com/users/ziptron/orgs",
"repos_url": "https://api.github.com/users/ziptron/repos",
"events_url": "https://api.github.com/users/ziptron/events{/privacy}",
"received_events_url": "https://api.github.com/users/ziptron/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-06-11T14:10:14
| 2024-09-24T15:57:56
| 2024-09-24T15:57:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm running Ollama on Windows Server. The setup worked a few days ago: I was able to pull several models. Today I see an error immediately after I try to download a model by typing Ollama Run {model name} in PowerShell:
"Error: pull model manifest: 200: stream error: stream ID 3; NO_ERROR; received from peer"
Any idea how to further diagnose this?
### OS
Windows
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.41
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4981/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1669
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1669/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1669/comments
|
https://api.github.com/repos/ollama/ollama/issues/1669/events
|
https://github.com/ollama/ollama/issues/1669
| 2,053,674,871
|
I_kwDOJ0Z1Ps56aJd3
| 1,669
|
Feature Request: Add RSS feed to Blog
|
{
"login": "puresick",
"id": 2714266,
"node_id": "MDQ6VXNlcjI3MTQyNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2714266?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/puresick",
"html_url": "https://github.com/puresick",
"followers_url": "https://api.github.com/users/puresick/followers",
"following_url": "https://api.github.com/users/puresick/following{/other_user}",
"gists_url": "https://api.github.com/users/puresick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/puresick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/puresick/subscriptions",
"organizations_url": "https://api.github.com/users/puresick/orgs",
"repos_url": "https://api.github.com/users/puresick/repos",
"events_url": "https://api.github.com/users/puresick/events{/privacy}",
"received_events_url": "https://api.github.com/users/puresick/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 14
| 2023-12-22T09:57:23
| 2025-01-04T20:08:04
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi! I am not sure if this is the right place for feature requests for the [Blog](https://ollama.ai/blog), but I did not find any other place where this would have been applicable — if I am wrong here I am sorry!
Regarding the feature request: It would be great to have an RSS feed for the blog to keep up with updates around the project. Would that be possible to implement?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1669/reactions",
"total_count": 43,
"+1": 7,
"-1": 1,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 35,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1669/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2233
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2233/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2233/comments
|
https://api.github.com/repos/ollama/ollama/issues/2233/events
|
https://github.com/ollama/ollama/pull/2233
| 2,103,740,329
|
PR_kwDOJ0Z1Ps5lPBCd
| 2,233
|
Support building from source with CUDA CC 3.5 and 3.7 support
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 13
| 2024-01-27T19:06:36
| 2024-11-20T23:09:29
| 2024-11-20T23:09:24
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2233",
"html_url": "https://github.com/ollama/ollama/pull/2233",
"diff_url": "https://github.com/ollama/ollama/pull/2233.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2233.patch",
"merged_at": null
}
|
They don't perform much better than CPU, but this adds support for these older cards for users who build locally.
Fixes #1756
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2233/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2233/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4958
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4958/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4958/comments
|
https://api.github.com/repos/ollama/ollama/issues/4958/events
|
https://github.com/ollama/ollama/issues/4958
| 2,342,504,398
|
I_kwDOJ0Z1Ps6Ln8fO
| 4,958
|
Cuda 12 runner
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-06-09T21:43:20
| 2024-08-19T18:14:25
| 2024-08-19T18:14:25
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
A CUDA 12+ build of a runner is required for CUDA graphs to be enabled.
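As a rough sketch, selecting a CUDA 12 toolkit for the runner build can be done by putting its `nvcc` first on `PATH` before generating; the toolkit path below is an assumption for a default Linux install.
```bash
# Sketch: build the runners against a CUDA 12 toolkit. The install
# path /usr/local/cuda-12.4 is an assumed default location.
export PATH=/usr/local/cuda-12.4/bin:$PATH
nvcc --version    # confirm a 12.x compiler is selected
go generate ./...
go build .
```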
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4958/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4958/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4984
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4984/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4984/comments
|
https://api.github.com/repos/ollama/ollama/issues/4984/events
|
https://github.com/ollama/ollama/issues/4984
| 2,347,076,913
|
I_kwDOJ0Z1Ps6L5Y0x
| 4,984
|
Ollama not using GPU after OS Reboot
|
{
"login": "lukasmwerner",
"id": 55150634,
"node_id": "MDQ6VXNlcjU1MTUwNjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/55150634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukasmwerner",
"html_url": "https://github.com/lukasmwerner",
"followers_url": "https://api.github.com/users/lukasmwerner/followers",
"following_url": "https://api.github.com/users/lukasmwerner/following{/other_user}",
"gists_url": "https://api.github.com/users/lukasmwerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukasmwerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukasmwerner/subscriptions",
"organizations_url": "https://api.github.com/users/lukasmwerner/orgs",
"repos_url": "https://api.github.com/users/lukasmwerner/repos",
"events_url": "https://api.github.com/users/lukasmwerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukasmwerner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 15
| 2024-06-11T18:51:30
| 2024-11-14T19:40:52
| 2024-06-13T20:26:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After installing Ollama from ollama.com, it is able to use my GPU, but after rebooting it is no longer able to find the GPU and gives this message:
```
CUDA driver version: 12-5
time=2024-06-11T11:46:56.544-07:00 level=DEBUG source=gpu.go:148 msg="detected GPUs" library="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_30_9.dll" count=1
time=2024-06-11T11:46:56.545-07:00 level=DEBUG source=cpu_common.go:11 msg="CPU has AVX2"
[GPU-ffffffff-0000-0000-00c0-000000000000] CUDA totalMem 4294967295
[GPU-ffffffff-0000-0000-00c0-000000000000] CUDA freeMem 3617587199
[GPU-ffffffff-0000-0000-00c0-000000000000] Compute Capability 1.0
time=2024-06-11T11:46:56.635-07:00 level=INFO source=gpu.go:214 msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"
time=2024-06-11T11:46:56.636-07:00 level=DEBUG source=amd_windows.go:31 msg="unable to load amdhip64.dll: The specified module could not be found."
```
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.42
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4984/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5506
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5506/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5506/comments
|
https://api.github.com/repos/ollama/ollama/issues/5506/events
|
https://github.com/ollama/ollama/pull/5506
| 2,393,201,999
|
PR_kwDOJ0Z1Ps50keRP
| 5,506
|
Refine scheduler unit tests for reliability
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-05T22:31:26
| 2024-07-20T22:48:43
| 2024-07-20T22:48:40
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5506",
"html_url": "https://github.com/ollama/ollama/pull/5506",
"diff_url": "https://github.com/ollama/ollama/pull/5506.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5506.patch",
"merged_at": "2024-07-20T22:48:40"
}
|
This breaks up some of the test scenarios to create a more reliable set of tests, as well as adding a little more coverage.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5506/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4186
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4186/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4186/comments
|
https://api.github.com/repos/ollama/ollama/issues/4186/events
|
https://github.com/ollama/ollama/issues/4186
| 2,279,747,256
|
I_kwDOJ0Z1Ps6H4i64
| 4,186
|
Tokenize and Detokenize API For Token Count
|
{
"login": "sslx",
"id": 6382550,
"node_id": "MDQ6VXNlcjYzODI1NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6382550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sslx",
"html_url": "https://github.com/sslx",
"followers_url": "https://api.github.com/users/sslx/followers",
"following_url": "https://api.github.com/users/sslx/following{/other_user}",
"gists_url": "https://api.github.com/users/sslx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sslx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sslx/subscriptions",
"organizations_url": "https://api.github.com/users/sslx/orgs",
"repos_url": "https://api.github.com/users/sslx/repos",
"events_url": "https://api.github.com/users/sslx/events{/privacy}",
"received_events_url": "https://api.github.com/users/sslx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-05T21:46:21
| 2024-07-05T16:20:33
| 2024-06-04T22:42:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
For RAG purposes, I'd love to find out the token count for text before feeding it to a model for a response.
Could you wire up API endpoints for llama.cpp's tokenize and detokenize?
Thanks!
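For reference, llama.cpp's example server already exposes these endpoints, so a pass-through could behave like this sketch (llama.cpp's default port and response shape, not an existing Ollama API):
```bash
# Sketch against llama.cpp's built-in server; Ollama does not expose
# these routes. Returns {"tokens": [...]} for the given text.
curl http://localhost:8080/tokenize -d '{"content": "Hello world"}'

# Round-trip token IDs back to text (IDs here are illustrative).
curl http://localhost:8080/detokenize -d '{"tokens": [9906, 1917]}'
```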
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4186/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4186/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4782
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4782/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4782/comments
|
https://api.github.com/repos/ollama/ollama/issues/4782/events
|
https://github.com/ollama/ollama/pull/4782
| 2,329,642,569
|
PR_kwDOJ0Z1Ps5xOIJj
| 4,782
|
Added messages confirming arm64 support (NEON and SVE)
|
{
"login": "bindatype",
"id": 6185719,
"node_id": "MDQ6VXNlcjYxODU3MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6185719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bindatype",
"html_url": "https://github.com/bindatype",
"followers_url": "https://api.github.com/users/bindatype/followers",
"following_url": "https://api.github.com/users/bindatype/following{/other_user}",
"gists_url": "https://api.github.com/users/bindatype/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bindatype/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bindatype/subscriptions",
"organizations_url": "https://api.github.com/users/bindatype/orgs",
"repos_url": "https://api.github.com/users/bindatype/repos",
"events_url": "https://api.github.com/users/bindatype/events{/privacy}",
"received_events_url": "https://api.github.com/users/bindatype/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-06-02T12:53:06
| 2024-06-02T13:12:10
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4782",
"html_url": "https://github.com/ollama/ollama/pull/4782",
"diff_url": "https://github.com/ollama/ollama/pull/4782.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4782.patch",
"merged_at": null
}
|
Added messages confirming arm64 support (NEON and SVE) to go along with the AVX messages in gpu/cpu_common.go. Currently only AVX is checked, but that check doesn't apply to arm64 builds, so the default message `CPU does not have vector extensions` is displayed even when NEON or SVE support is present. This fix addresses that issue.
```
OLLAMA_CUSTOM_CPU_DEFS="-DLLAMA_NATIVE=on -DLLAMA_NEON=on -DLLAMA_SVE=on" go generate ./...
```
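On Linux/arm64 the kernel's view of these extensions can be sanity-checked before building; NEON is reported as `asimd` in the CPU flags:
```bash
# Quick check of the arm64 vector extensions the kernel advertises;
# NEON shows up as "asimd" and SVE as "sve" in /proc/cpuinfo.
grep -o -E 'asimd|sve' /proc/cpuinfo | sort -u
```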
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4782/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2009
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2009/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2009/comments
|
https://api.github.com/repos/ollama/ollama/issues/2009/events
|
https://github.com/ollama/ollama/issues/2009
| 2,082,958,848
|
I_kwDOJ0Z1Ps58J24A
| 2,009
|
Import pytorch adapter `.bin` files
|
{
"login": "PhilipAmadasun",
"id": 55031054,
"node_id": "MDQ6VXNlcjU1MDMxMDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/55031054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipAmadasun",
"html_url": "https://github.com/PhilipAmadasun",
"followers_url": "https://api.github.com/users/PhilipAmadasun/followers",
"following_url": "https://api.github.com/users/PhilipAmadasun/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipAmadasun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipAmadasun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipAmadasun/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipAmadasun/orgs",
"repos_url": "https://api.github.com/users/PhilipAmadasun/repos",
"events_url": "https://api.github.com/users/PhilipAmadasun/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipAmadasun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-01-16T03:32:37
| 2024-07-10T19:38:06
| 2024-07-10T18:32:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Has anyone on here successfully created a fine-tuned mistral model with:
```
curl http://server.local:11434/api/create -d '{
"name": "test_mistral",
"modelfile": "FROM mistral\nADAPTER /home/robot/adapter_model.bin"
}'
```
Apparently the `.bin` files aren't in PyTorch format, so it doesn't work. Does anyone actually use this method, or do people just export a GGUF directly?
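For what it's worth, one path that has worked for PEFT LoRA output is converting the adapter with llama.cpp's conversion script first and pointing `ADAPTER` at the result. A sketch with illustrative paths; note the script has since been removed from newer llama.cpp checkouts:
```bash
# Sketch: convert a PEFT LoRA adapter to the GGML adapter format.
# Paths are illustrative; convert-lora-to-ggml.py existed in llama.cpp
# at the time but is gone from current checkouts.
python llama.cpp/convert-lora-to-ggml.py /home/robot/adapter_dir
# writes /home/robot/adapter_dir/ggml-adapter-model.bin
```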
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2009/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2606
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2606/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2606/comments
|
https://api.github.com/repos/ollama/ollama/issues/2606/events
|
https://github.com/ollama/ollama/issues/2606
| 2,143,640,895
|
I_kwDOJ0Z1Ps5_xV0_
| 2,606
|
`Ollama run` Error
|
{
"login": "iaoxuesheng",
"id": 94165844,
"node_id": "U_kgDOBZzbVA",
"avatar_url": "https://avatars.githubusercontent.com/u/94165844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iaoxuesheng",
"html_url": "https://github.com/iaoxuesheng",
"followers_url": "https://api.github.com/users/iaoxuesheng/followers",
"following_url": "https://api.github.com/users/iaoxuesheng/following{/other_user}",
"gists_url": "https://api.github.com/users/iaoxuesheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iaoxuesheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iaoxuesheng/subscriptions",
"organizations_url": "https://api.github.com/users/iaoxuesheng/orgs",
"repos_url": "https://api.github.com/users/iaoxuesheng/repos",
"events_url": "https://api.github.com/users/iaoxuesheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/iaoxuesheng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-02-20T06:35:05
| 2024-05-05T22:10:09
| 2024-05-05T22:10:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2606/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7521
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7521/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7521/comments
|
https://api.github.com/repos/ollama/ollama/issues/7521/events
|
https://github.com/ollama/ollama/pull/7521
| 2,637,088,862
|
PR_kwDOJ0Z1Ps6BAghe
| 7,521
|
Add GoLamify in Libraries section
|
{
"login": "prasad89",
"id": 67261499,
"node_id": "MDQ6VXNlcjY3MjYxNDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/67261499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prasad89",
"html_url": "https://github.com/prasad89",
"followers_url": "https://api.github.com/users/prasad89/followers",
"following_url": "https://api.github.com/users/prasad89/following{/other_user}",
"gists_url": "https://api.github.com/users/prasad89/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prasad89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prasad89/subscriptions",
"organizations_url": "https://api.github.com/users/prasad89/orgs",
"repos_url": "https://api.github.com/users/prasad89/repos",
"events_url": "https://api.github.com/users/prasad89/events{/privacy}",
"received_events_url": "https://api.github.com/users/prasad89/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-11-06T05:35:10
| 2024-11-11T06:38:19
| 2024-11-11T06:38:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7521",
"html_url": "https://github.com/ollama/ollama/pull/7521",
"diff_url": "https://github.com/ollama/ollama/pull/7521.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7521.patch",
"merged_at": "2024-11-11T06:38:19"
}
|
### GoLamify Package
This PR adds [GoLamify](https://github.com/prasad89/golamify), a Go package designed to simplify integrating Go projects with Ollama.
GoLamify provides a straightforward client for connecting Go applications to an Ollama server.
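Assuming the module path matches the repository above, installation is the usual `go get`:
```bash
# Fetch the package into a Go module; module path assumed to match
# the repository URL.
go get github.com/prasad89/golamify
```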
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7521/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7550
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7550/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7550/comments
|
https://api.github.com/repos/ollama/ollama/issues/7550/events
|
https://github.com/ollama/ollama/issues/7550
| 2,640,528,471
|
I_kwDOJ0Z1Ps6dY0RX
| 7,550
|
ollama runner process has terminated: exit status 127
|
{
"login": "SimpleYj",
"id": 38721053,
"node_id": "MDQ6VXNlcjM4NzIxMDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/38721053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SimpleYj",
"html_url": "https://github.com/SimpleYj",
"followers_url": "https://api.github.com/users/SimpleYj/followers",
"following_url": "https://api.github.com/users/SimpleYj/following{/other_user}",
"gists_url": "https://api.github.com/users/SimpleYj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SimpleYj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SimpleYj/subscriptions",
"organizations_url": "https://api.github.com/users/SimpleYj/orgs",
"repos_url": "https://api.github.com/users/SimpleYj/repos",
"events_url": "https://api.github.com/users/SimpleYj/events{/privacy}",
"received_events_url": "https://api.github.com/users/SimpleYj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-11-07T10:13:31
| 2024-11-07T21:55:43
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama reports this error when running any model. The install was upgraded by extracting ollama-linux-amd64.tgz directly over an existing 0.3.14 install.
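Exit status 127 usually means a binary or shared library could not be found, which commonly happens when a new tarball is extracted over an old install and stale runner files are left behind. A clean-reinstall sketch, using the paths from the official Linux install instructions (adjust if you installed elsewhere):
```bash
# Remove stale libraries from the previous install, then extract the
# new tarball cleanly; /usr/lib/ollama holds the bundled runners for
# a tarball install under /usr.
sudo rm -rf /usr/lib/ollama
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
```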
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.0
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7550/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6994
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6994/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6994/comments
|
https://api.github.com/repos/ollama/ollama/issues/6994/events
|
https://github.com/ollama/ollama/issues/6994
| 2,552,038,549
|
I_kwDOJ0Z1Ps6YHQSV
| 6,994
|
Docker container cannot load model
|
{
"login": "utopeadia",
"id": 98788152,
"node_id": "U_kgDOBeNjOA",
"avatar_url": "https://avatars.githubusercontent.com/u/98788152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/utopeadia",
"html_url": "https://github.com/utopeadia",
"followers_url": "https://api.github.com/users/utopeadia/followers",
"following_url": "https://api.github.com/users/utopeadia/following{/other_user}",
"gists_url": "https://api.github.com/users/utopeadia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/utopeadia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/utopeadia/subscriptions",
"organizations_url": "https://api.github.com/users/utopeadia/orgs",
"repos_url": "https://api.github.com/users/utopeadia/repos",
"events_url": "https://api.github.com/users/utopeadia/events{/privacy}",
"received_events_url": "https://api.github.com/users/utopeadia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-27T05:35:54
| 2024-09-27T05:57:53
| 2024-09-27T05:57:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Whether I use `ollama run` or curl, the model cannot be loaded into GPU memory (a parallelism-tuning sketch follows the log).
The `docker logs ollama` output from startup and model load is as follows:
```bash
2024/09/27 05:29:20 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:10 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:20 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-09-27T05:29:20.148Z level=INFO source=images.go:753 msg="total blobs: 24"
time=2024-09-27T05:29:20.148Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-27T05:29:20.148Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.11)"
time=2024-09-27T05:29:20.149Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v12 cpu cpu_avx cpu_avx2 cuda_v11]"
time=2024-09-27T05:29:20.149Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-09-27T05:29:24.814Z level=INFO source=types.go:107 msg="inference compute" id=GPU-e8ee7d42-72a9-d27d-ef76-dfa4df69bf0f library=cuda variant=v12 compute=8.6 driver=12.4 name="NVIDIA RTX A6000" total="47.5 GiB" available="47.3 GiB"
time=2024-09-27T05:29:24.814Z level=INFO source=types.go:107 msg="inference compute" id=GPU-2325557b-dcde-10cd-b219-60ed716aa9ef library=cuda variant=v12 compute=8.6 driver=12.4 name="NVIDIA RTX A6000" total="47.5 GiB" available="47.3 GiB"
[GIN] 2024/09/27 - 05:31:07 | 200 | 60.09µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/09/27 - 05:31:07 | 200 | 58.078071ms | 127.0.0.1 | POST "/api/show"
time=2024-09-27T05:31:07.477Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8 library=cuda parallel=20 required="71.1 GiB"
time=2024-09-27T05:31:07.477Z level=INFO source=server.go:103 msg="system memory" total="503.5 GiB" free="495.5 GiB" free_swap="0 B"
time=2024-09-27T05:31:07.481Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=41,40 memory.available="[47.3 GiB 47.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="71.1 GiB" memory.required.partial="71.1 GiB" memory.required.kv="12.5 GiB" memory.required.allocations="[36.1 GiB 35.0 GiB]" memory.weights.total="55.0 GiB" memory.weights.repeating="54.1 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="6.4 GiB" memory.graph.partial="6.4 GiB"
time=2024-09-27T05:31:07.488Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8 --ctx-size 40960 --batch-size 512 --embedding --log-disable --n-gpu-layers 81 --parallel 20 --tensor-split 41,40 --port 39693"
time=2024-09-27T05:31:07.488Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-09-27T05:31:07.488Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2024-09-27T05:31:07.489Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=10 commit="eaf151c" tid="140015820541952" timestamp=1727415067
INFO [main] system info | n_threads=48 n_threads_batch=48 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140015820541952" timestamp=1727415067 total_threads=96
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="95" port="39693" tid="140015820541952" timestamp=1727415067
llama_model_loader: loaded meta data with 35 key-value pairs and 963 tensors from /root/.ollama/models/blobs/sha256-6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 72B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 72B
llama_model_loader: - kv 6: general.license str = other
llama_model_loader: - kv 7: general.license.name str = qwen
llama_model_loader: - kv 8: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-7...
llama_model_loader: - kv 9: general.base_model.count u32 = 1
llama_model_loader: - kv 10: general.base_model.0.name str = Qwen2.5 72B
llama_model_loader: - kv 11: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 12: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-72B
llama_model_loader: - kv 13: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 14: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 15: qwen2.block_count u32 = 80
llama_model_loader: - kv 16: qwen2.context_length u32 = 32768
llama_model_loader: - kv 17: qwen2.embedding_length u32 = 8192
llama_model_loader: - kv 18: qwen2.feed_forward_length u32 = 29568
llama_model_loader: - kv 19: qwen2.attention.head_count u32 = 64
llama_model_loader: - kv 20: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 21: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 22: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 23: general.file_type u32 = 15
llama_model_loader: - kv 24: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 25: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 26: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 27: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 28: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 29: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 30: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 32: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 33: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 34: general.quantization_version u32 = 2
llama_model_loader: - type f32: 401 tensors
llama_model_loader: - type q5_0: 40 tensors
llama_model_loader: - type q8_0: 40 tensors
llama_model_loader: - type q4_K: 401 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 41 tensors
time=2024-09-27T05:31:07.741Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 29568
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 72.71 B
llm_load_print_meta: model size = 44.15 GiB (5.22 BPW)
llm_load_print_meta: general.name = Qwen2.5 72B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA RTX A6000, compute capability 8.6, VMM: yes
Device 1: NVIDIA RTX A6000, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 1.27 MiB
```
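One thing the log shows: with `OLLAMA_NUM_PARALLEL:20` the KV cache is sized for 20 parallel sequences (12.5 GiB here), which helps push the 44 GiB model to a 71.1 GiB total requirement. If that inflation is the suspect, a sketch of recreating the container with fewer slots (container name, GPU flags, and volume are assumptions based on a typical docker run, not taken from this report):
```bash
# Recreate the container with fewer parallel slots so the KV cache
# shrinks; flags below mirror a typical ollama docker run and are
# assumptions.
docker rm -f ollama
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  -e OLLAMA_NUM_PARALLEL=4 --name ollama ollama/ollama
```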
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12
|
{
"login": "utopeadia",
"id": 98788152,
"node_id": "U_kgDOBeNjOA",
"avatar_url": "https://avatars.githubusercontent.com/u/98788152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/utopeadia",
"html_url": "https://github.com/utopeadia",
"followers_url": "https://api.github.com/users/utopeadia/followers",
"following_url": "https://api.github.com/users/utopeadia/following{/other_user}",
"gists_url": "https://api.github.com/users/utopeadia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/utopeadia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/utopeadia/subscriptions",
"organizations_url": "https://api.github.com/users/utopeadia/orgs",
"repos_url": "https://api.github.com/users/utopeadia/repos",
"events_url": "https://api.github.com/users/utopeadia/events{/privacy}",
"received_events_url": "https://api.github.com/users/utopeadia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6994/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2724
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2724/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2724/comments
|
https://api.github.com/repos/ollama/ollama/issues/2724/events
|
https://github.com/ollama/ollama/issues/2724
| 2,152,213,008
|
I_kwDOJ0Z1Ps6ASCoQ
| 2,724
|
Error running GEMMA:7b on Ollama via Docker
|
{
"login": "wangshuai67",
"id": 13214849,
"node_id": "MDQ6VXNlcjEzMjE0ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/13214849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangshuai67",
"html_url": "https://github.com/wangshuai67",
"followers_url": "https://api.github.com/users/wangshuai67/followers",
"following_url": "https://api.github.com/users/wangshuai67/following{/other_user}",
"gists_url": "https://api.github.com/users/wangshuai67/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangshuai67/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangshuai67/subscriptions",
"organizations_url": "https://api.github.com/users/wangshuai67/orgs",
"repos_url": "https://api.github.com/users/wangshuai67/repos",
"events_url": "https://api.github.com/users/wangshuai67/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangshuai67/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-02-24T08:38:01
| 2024-02-26T15:38:06
| 2024-02-26T15:38:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**Description:**
I encountered an error while running gemma:7b on Ollama via Docker. Whenever I attempt to run the gemma:7b model, an error occurs.
**Steps to Reproduce:**
1. Deploy Ollama on Docker.
2. Run the gemma:7b model using the appropriate command.
3. See the error message that is displayed.
**Actual Behavior:**
Running gemma:7b on Ollama via Docker crashes the server with a segmentation fault.
**Additional Information:**
- I have ensured that the Docker setup for Ollama is correct.
- Other models run successfully on Ollama using Docker.
- The error message and stack trace below provide details about the issue.

Ollama log:
```
time=2024-02-24T08:29:43.035Z level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 7.5"
[GIN] 2024/02/24 - 08:29:54 | 200 | 1.300495ms | 172.17.0.1 | GET "/api/tags"
[GIN] 2024/02/24 - 08:29:54 | 200 | 105.186µs | 172.17.0.1 | GET "/api/version"
time=2024-02-24T08:30:03.108Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-24T08:30:03.108Z level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 7.5"
time=2024-02-24T08:30:03.108Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-24T08:30:03.108Z level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 7.5"
time=2024-02-24T08:30:03.108Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-24T08:30:03.113Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2393286333/cuda_v11/libext_server.so"
time=2024-02-24T08:30:03.113Z level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: yes
ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
ggml_init_cublas: found 2 CUDA devices:
Device 0: Quadro RTX 6000, compute capability 7.5, VMM: yes
Device 1: Quadro RTX 6000, compute capability 7.5, VMM: yes
SIGSEGV: segmentation violation
PC=0x7f9a983a80bc m=22 sigcode=1
signal arrived during cgo execution
goroutine 47 [syscall]:
runtime.cgocall(0x9bc8b0, 0xc00017c6f0)
/usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc00017c6c8 sp=0xc00017c690 pc=0x409b0b
github.com/jmorganca/ollama/llm._Cfunc_dyn_llama_server_init({0x7f99f4001e10, 0x7f99ece5a610, 0x7f99ece4be50, 0x7f99ece4f0d0, 0x7f99ece5dd90, 0x7f99ece57c20, 0x7f99ece4ed50, 0x7f99ece4bed0, 0x7f99ece5e7c0, 0x7f99ece5d930, ...}, ...)
_cgo_gotypes.go:282 +0x45 fp=0xc00017c6f0 sp=0xc00017c6c8 pc=0x7c5085
github.com/jmorganca/ollama/llm.newDynExtServer.func7(0xaf2020?, 0xc?)
/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:148 +0xef fp=0xc00017c7e0 sp=0xc00017c6f0 pc=0x7c65af
github.com/jmorganca/ollama/llm.newDynExtServer({0xc0007aa000, 0x2f}, {0xc0002a42a0, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:148 +0xa45 fp=0xc00017ca80 sp=0xc00017c7e0 pc=0x7c6245
github.com/jmorganca/ollama/llm.newLlmServer({{_, _, _}, {_, _}, {_, _}}, {_, _}, {0xc0002a42a0, ...}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:158 +0x425 fp=0xc00017cc40 sp=0xc00017ca80 pc=0x7c29c5
github.com/jmorganca/ollama/llm.New({0xc0003209a8, 0x15}, {0xc0002a42a0, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:123 +0x713 fp=0xc00017cec0 sp=0xc00017cc40 pc=0x7c2333
github.com/jmorganca/ollama/server.load(0xc000318000?, 0xc000318000, {{0x0, 0x800, 0x200, 0x1, 0xffffffffffffffff, 0x0, 0x0, 0x1, ...}, ...}, ...)
/go/src/github.com/jmorganca/ollama/server/routes.go:85 +0x3a5 fp=0xc00017d040 sp=0xc00017cec0 pc=0x996c05
github.com/jmorganca/ollama/server.ChatHandler(0xc0001f8100)
/go/src/github.com/jmorganca/ollama/server/routes.go:1163 +0x60b fp=0xc00017d748 sp=0xc00017d040 pc=0x9a1c8b
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func1(0xc0001f8100)
/go/src/github.com/jmorganca/ollama/server/routes.go:938 +0x68 fp=0xc00017d780 sp=0xc00017d748 pc=0x9a08c8
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc0001f8100)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 +0x7a fp=0xc00017d7d0 sp=0xc00017d780 pc=0x977c1a
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc0001f8100)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 +0xde fp=0xc00017d980 sp=0xc00017d7d0 pc=0x976dbe
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc0000eba00, 0xc0001f8100)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 +0x65b fp=0xc00017db08 sp=0xc00017d980 pc=0x975e7b
github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc0000eba00, {0x1126bac0?, 0xc0001981c0}, 0xc0001f8200)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 +0x1dd fp=0xc00017db48 sp=0xc00017db08 pc=0x97563d
net/http.serverHandler.ServeHTTP({0x11269de0?}, {0x1126bac0?, 0xc0001981c0?}, 0x6?)
/usr/local/go/src/net/http/server.go:2938 +0x8e fp=0xc00017db78 sp=0xc00017db48 pc=0x6cee2e
net/http.(*conn).serve(0xc0002861b0, {0x1126d128, 0xc0004a6930})
/usr/local/go/src/net/http/server.go:2009 +0x5f4 fp=0xc00017dfb8 sp=0xc00017db78 pc=0x6cad14
net/http.(*Server).Serve.func3()
/usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc00017dfe0 sp=0xc00017dfb8 pc=0x6cf648
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00017dfe8 sp=0xc00017dfe0 pc=0x46e2c1
created by net/http.(*Server).Serve in goroutine 1
/usr/local/go/src/net/http/server.go:3086 +0x5cb
goroutine 1 [IO wait]:
runtime.gopark(0x480f10?, 0xc0005bb850?, 0xa0?, 0xb8?, 0x4f711d?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0005bb830 sp=0xc0005bb810 pc=0x43e7ee
runtime.netpollblock(0x46c332?, 0x4092a6?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0005bb868 sp=0xc0005bb830 pc=0x437277
internal/poll.runtime_pollWait(0x7f9a505bfe80, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0005bb888 sp=0xc0005bb868 pc=0x468a05
internal/poll.(*pollDesc).wait(0xc000482000?, 0x4?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0005bb8b0 sp=0xc0005bb888 pc=0x4efd67
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000482000)
/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac fp=0xc0005bb958 sp=0xc0005bb8b0 pc=0x4f524c
net.(*netFD).accept(0xc000482000)
/usr/local/go/src/net/fd_unix.go:172 +0x29 fp=0xc0005bba10 sp=0xc0005bb958 pc=0x56be29
net.(*TCPListener).accept(0xc00045b580)
/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e fp=0xc0005bba38 sp=0xc0005bba10 pc=0x580c3e
net.(*TCPListener).Accept(0xc00045b580)
/usr/local/go/src/net/tcpsock.go:315 +0x30 fp=0xc0005bba68 sp=0xc0005bba38 pc=0x57fdf0
net/http.(*onceCloseListener).Accept(0xc0002861b0?)
<autogenerated>:1 +0x24 fp=0xc0005bba80 sp=0xc0005bba68 pc=0x6f1bc4
net/http.(*Server).Serve(0xc000386ff0, {0x1126b8b0, 0xc00045b580})
/usr/local/go/src/net/http/server.go:3056 +0x364 fp=0xc0005bbbb0 sp=0xc0005bba80 pc=0x6cf284
github.com/jmorganca/ollama/server.Serve({0x1126b8b0, 0xc00045b580})
/go/src/github.com/jmorganca/ollama/server/routes.go:1041 +0x454 fp=0xc0005bbc98 sp=0xc0005bbbb0 pc=0x9a0d74
github.com/jmorganca/ollama/cmd.RunServer(0xc000480300?, {0x116b3820?, 0x4?, 0xad9d08?})
/go/src/github.com/jmorganca/ollama/cmd/cmd.go:705 +0x199 fp=0xc0005bbd30 sp=0xc0005bbc98 pc=0x9b3a19
github.com/spf13/cobra.(*Command).execute(0xc000421800, {0x116b3820, 0x0, 0x0})
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x87c fp=0xc0005bbe68 sp=0xc0005bbd30 pc=0x7649fc
github.com/spf13/cobra.(*Command).ExecuteC(0xc000420c00)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc0005bbf20 sp=0xc0005bbe68 pc=0x765225
github.com/spf13/cobra.(*Command).Execute(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
/go/src/github.com/jmorganca/ollama/main.go:11 +0x4d fp=0xc0005bbf40 sp=0xc0005bbf20 pc=0x9bb9cd
runtime.main()
/usr/local/go/src/runtime/proc.go:267 +0x2bb fp=0xc0005bbfe0 sp=0xc0005bbf40 pc=0x43e39b
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005bbfe8 sp=0xc0005bbfe0 pc=0x46e2c1
goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000072fa8 sp=0xc000072f88 pc=0x43e7ee
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.forcegchelper()
/usr/local/go/src/runtime/proc.go:322 +0xb3 fp=0xc000072fe0 sp=0xc000072fa8 pc=0x43e673
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000072fe8 sp=0xc000072fe0 pc=0x46e2c1
created by runtime.init.6 in goroutine 1
/usr/local/go/src/runtime/proc.go:310 +0x1a
goroutine 3 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000073778 sp=0xc000073758 pc=0x43e7ee
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.bgsweep(0x0?)
/usr/local/go/src/runtime/mgcsweep.go:321 +0xdf fp=0xc0000737c8 sp=0xc000073778 pc=0x42a73f
runtime.gcenable.func1()
/usr/local/go/src/runtime/mgc.go:200 +0x25 fp=0xc0000737e0 sp=0xc0000737c8 pc=0x41f865
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000737e8 sp=0xc0000737e0 pc=0x46e2c1
created by runtime.gcenable in goroutine 1
/usr/local/go/src/runtime/mgc.go:200 +0x66
goroutine 4 [GC scavenge wait]:
runtime.gopark(0x6ce62b?, 0x61f5f2?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000073f70 sp=0xc000073f50 pc=0x43e7ee
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.(*scavengerState).park(0x11683ba0)
/usr/local/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc000073fa0 sp=0xc000073f70 pc=0x427f69
runtime.bgscavenge(0x0?)
/usr/local/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc000073fc8 sp=0xc000073fa0 pc=0x428519
runtime.gcenable.func2()
/usr/local/go/src/runtime/mgc.go:201 +0x25 fp=0xc000073fe0 sp=0xc000073fc8 pc=0x41f805
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000073fe8 sp=0xc000073fe0 pc=0x46e2c1
created by runtime.gcenable in goroutine 1
/usr/local/go/src/runtime/mgc.go:201 +0xa5
goroutine 5 [finalizer wait]:
runtime.gopark(0xad2cc0?, 0x10043f901?, 0x0?, 0x0?, 0x4469a5?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000072628 sp=0xc000072608 pc=0x43e7ee
runtime.runfinq()
/usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc0000727e0 sp=0xc000072628 pc=0x41e8e7
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000727e8 sp=0xc0000727e0 pc=0x46e2c1
created by runtime.createfing in goroutine 1
/usr/local/go/src/runtime/mfinal.go:163 +0x3d
goroutine 6 [select, locked to thread]:
runtime.gopark(0xc0000747a8?, 0x2?, 0x89?, 0xea?, 0xc0000747a4?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000074638 sp=0xc000074618 pc=0x43e7ee
runtime.selectgo(0xc0000747a8, 0xc0000747a0, 0x0?, 0x0, 0x0?, 0x1)
/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc000074758 sp=0xc000074638 pc=0x44e325
runtime.ensureSigM.func1()
/usr/local/go/src/runtime/signal_unix.go:1014 +0x19f fp=0xc0000747e0 sp=0xc000074758 pc=0x46535f
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000747e8 sp=0xc0000747e0 pc=0x46e2c1
created by runtime.ensureSigM in goroutine 1
/usr/local/go/src/runtime/signal_unix.go:997 +0xc8
goroutine 18 [syscall]:
runtime.notetsleepg(0x0?, 0x0?)
/usr/local/go/src/runtime/lock_futex.go:236 +0x29 fp=0xc00006e7a0 sp=0xc00006e768 pc=0x411349
os/signal.signal_recv()
/usr/local/go/src/runtime/sigqueue.go:152 +0x29 fp=0xc00006e7c0 sp=0xc00006e7a0 pc=0x46ac89
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x13 fp=0xc00006e7e0 sp=0xc00006e7c0 pc=0x6f45f3
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006e7e8 sp=0xc00006e7e0 pc=0x46e2c1
created by os/signal.Notify.func1.1 in goroutine 1
/usr/local/go/src/os/signal/signal.go:151 +0x1f
goroutine 34 [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000514718 sp=0xc0005146f8 pc=0x43e7ee
runtime.chanrecv(0xc0001a9860, 0x0, 0x1)
/usr/local/go/src/runtime/chan.go:583 +0x3cd fp=0xc000514790 sp=0xc000514718 pc=0x40beed
runtime.chanrecv1(0x0?, 0x0?)
/usr/local/go/src/runtime/chan.go:442 +0x12 fp=0xc0005147b8 sp=0xc000514790 pc=0x40baf2
github.com/jmorganca/ollama/server.Serve.func2()
/go/src/github.com/jmorganca/ollama/server/routes.go:1023 +0x25 fp=0xc0005147e0 sp=0xc0005147b8 pc=0x9a0e05
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005147e8 sp=0xc0005147e0 pc=0x46e2c1
created by github.com/jmorganca/ollama/server.Serve in goroutine 1
/go/src/github.com/jmorganca/ollama/server/routes.go:1022 +0x3c7
goroutine 35 [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000514f50 sp=0xc000514f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000514fe0 sp=0xc000514f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000514fe8 sp=0xc000514fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 19 [GC worker (idle)]:
runtime.gopark(0x131e8a75048ea8?, 0x3?, 0xfd?, 0x5d?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00006ef50 sp=0xc00006ef30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00006efe0 sp=0xc00006ef50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006efe8 sp=0xc00006efe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 36 [GC worker (idle)]:
runtime.gopark(0x131e8a7504bbe6?, 0x1?, 0x3?, 0x5d?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000515750 sp=0xc000515730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005157e0 sp=0xc000515750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005157e8 sp=0xc0005157e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 37 [GC worker (idle)]:
runtime.gopark(0x116b5540?, 0x3?, 0x34?, 0xc?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000515f50 sp=0xc000515f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000515fe0 sp=0xc000515f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000515fe8 sp=0xc000515fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 38 [GC worker (idle)]:
runtime.gopark(0x131e8a7504a72a?, 0x3?, 0xf0?, 0x7f?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000516750 sp=0xc000516730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005167e0 sp=0xc000516750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005167e8 sp=0xc0005167e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 39 [GC worker (idle)]:
runtime.gopark(0x131e8a7504e443?, 0x1?, 0xa6?, 0x62?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000516f50 sp=0xc000516f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000516fe0 sp=0xc000516f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000516fe8 sp=0xc000516fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 40 [GC worker (idle)]:
runtime.gopark(0x131e8a750496c8?, 0x1?, 0x6a?, 0x5?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000517750 sp=0xc000517730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005177e0 sp=0xc000517750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005177e8 sp=0xc0005177e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 41 [GC worker (idle)]:
runtime.gopark(0x131e8a7504e2e8?, 0x3?, 0x25?, 0x28?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000517f50 sp=0xc000517f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000517fe0 sp=0xc000517f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000517fe8 sp=0xc000517fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 7 [GC worker (idle)]:
runtime.gopark(0x131e8a75049497?, 0x3?, 0xca?, 0x3e?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000074f50 sp=0xc000074f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000074fe0 sp=0xc000074f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000074fe8 sp=0xc000074fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 50 [GC worker (idle)]:
runtime.gopark(0x131e8a7504bafc?, 0x3?, 0x3d?, 0xd4?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000510750 sp=0xc000510730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005107e0 sp=0xc000510750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005107e8 sp=0xc0005107e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 42 [GC worker (idle)]:
runtime.gopark(0x131e8a7504e40b?, 0x1?, 0x9f?, 0x6d?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000528750 sp=0xc000528730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005287e0 sp=0xc000528750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005287e8 sp=0xc0005287e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 51 [GC worker (idle)]:
runtime.gopark(0x131e8a750510da?, 0x3?, 0xf?, 0xf0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000510f50 sp=0xc000510f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000510fe0 sp=0xc000510f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000510fe8 sp=0xc000510fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 8 [GC worker (idle)]:
runtime.gopark(0x131e8a75050f1a?, 0x3?, 0x91?, 0x60?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000075750 sp=0xc000075730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0000757e0 sp=0xc000075750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000757e8 sp=0xc0000757e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 9 [GC worker (idle)]:
runtime.gopark(0x131e8a75847d46?, 0x1?, 0xdd?, 0x25?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000075f50 sp=0xc000075f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000075fe0 sp=0xc000075f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000075fe8 sp=0xc000075fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 43 [GC worker (idle)]:
runtime.gopark(0x131e8a75050e92?, 0x1?, 0xd9?, 0x54?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000528f50 sp=0xc000528f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000528fe0 sp=0xc000528f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000528fe8 sp=0xc000528fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 52 [GC worker (idle)]:
runtime.gopark(0xc000039728?, 0x1?, 0x81?, 0xc6?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000511750 sp=0xc000511730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005117e0 sp=0xc000511750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005117e8 sp=0xc0005117e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 10 [GC worker (idle)]:
runtime.gopark(0x131e8a7504becd?, 0x1?, 0x45?, 0x82?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000524750 sp=0xc000524730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005247e0 sp=0xc000524750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005247e8 sp=0xc0005247e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 44 [GC worker (idle)]:
runtime.gopark(0x131e8a7504c4e5?, 0x1?, 0x60?, 0x3c?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000529750 sp=0xc000529730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005297e0 sp=0xc000529750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005297e8 sp=0xc0005297e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 53 [GC worker (idle)]:
runtime.gopark(0x116b5540?, 0x1?, 0x5f?, 0x3c?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000511f50 sp=0xc000511f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000511fe0 sp=0xc000511f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000511fe8 sp=0xc000511fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 11 [GC worker (idle)]:
runtime.gopark(0x131e8a7504c1dd?, 0x3?, 0x28?, 0x63?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000524f50 sp=0xc000524f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000524fe0 sp=0xc000524f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000524fe8 sp=0xc000524fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 48 [IO wait]:
runtime.gopark(0x0?, 0xb?, 0x0?, 0x0?, 0xe?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00052a5a0 sp=0xc00052a580 pc=0x43e7ee
runtime.netpollblock(0x47f078?, 0x4092a6?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc00052a5d8 sp=0xc00052a5a0 pc=0x437277
internal/poll.runtime_pollWait(0x7f9a505bfc90, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc00052a5f8 sp=0xc00052a5d8 pc=0x468a05
internal/poll.(*pollDesc).wait(0xc0001a2280?, 0xc0000b5781?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00052a620 sp=0xc00052a5f8 pc=0x4efd67
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0001a2280, {0xc0000b5781, 0x1, 0x1})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc00052a6b8 sp=0xc00052a620 pc=0x4f105a
net.(*netFD).Read(0xc0001a2280, {0xc0000b5781?, 0x0?, 0x0?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc00052a700 sp=0xc00052a6b8 pc=0x569e05
net.(*conn).Read(0xc0007ae030, {0xc0000b5781?, 0x0?, 0x0?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc00052a748 sp=0xc00052a700 pc=0x5780a5
net.(*TCPConn).Read(0x0?, {0xc0000b5781?, 0x0?, 0x0?})
<autogenerated>:1 +0x25 fp=0xc00052a778 sp=0xc00052a748 pc=0x589fa5
net/http.(*connReader).backgroundRead(0xc0000b5770)
/usr/local/go/src/net/http/server.go:683 +0x37 fp=0xc00052a7c8 sp=0xc00052a778 pc=0x6c4b97
net/http.(*connReader).startBackgroundRead.func2()
/usr/local/go/src/net/http/server.go:679 +0x25 fp=0xc00052a7e0 sp=0xc00052a7c8 pc=0x6c4ac5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00052a7e8 sp=0xc00052a7e0 pc=0x46e2c1
created by net/http.(*connReader).startBackgroundRead in goroutine 47
/usr/local/go/src/net/http/server.go:679 +0xba
rax 0x0
rbx 0x7f9a057fd430
rcx 0x6d
rdx 0x7f99f4200490
rdi 0x0
rsi 0x0
rbp 0x7f9a057fd3f0
rsp 0x7f9a057fd258
r8 0x7f993c8ebeb0
r9 0x7f993c8ebed8
r10 0x7f9a98205b40
r11 0x7f9a983a4800
r12 0x0
r13 0x7f99f41fe748
r14 0x7f99f4200490
r15 0x0
rip 0x7f9a983a80bc
rflags 0x10283
cs 0x33
fs 0x0
gs 0x0
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2724/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/2724/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7725
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7725/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7725/comments
|
https://api.github.com/repos/ollama/ollama/issues/7725/events
|
https://github.com/ollama/ollama/issues/7725
| 2,668,264,963
|
I_kwDOJ0Z1Ps6fCn4D
| 7,725
|
How to check the actual location where the model file is saved, and the directory queried by 'ollama list'
|
{
"login": "supersaiyan2019",
"id": 130198547,
"node_id": "U_kgDOB8KsEw",
"avatar_url": "https://avatars.githubusercontent.com/u/130198547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/supersaiyan2019",
"html_url": "https://github.com/supersaiyan2019",
"followers_url": "https://api.github.com/users/supersaiyan2019/followers",
"following_url": "https://api.github.com/users/supersaiyan2019/following{/other_user}",
"gists_url": "https://api.github.com/users/supersaiyan2019/gists{/gist_id}",
"starred_url": "https://api.github.com/users/supersaiyan2019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/supersaiyan2019/subscriptions",
"organizations_url": "https://api.github.com/users/supersaiyan2019/orgs",
"repos_url": "https://api.github.com/users/supersaiyan2019/repos",
"events_url": "https://api.github.com/users/supersaiyan2019/events{/privacy}",
"received_events_url": "https://api.github.com/users/supersaiyan2019/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677675697,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgU-sQ",
"url": "https://api.github.com/repos/ollama/ollama/labels/wsl",
"name": "wsl",
"color": "7E0821",
"default": false,
"description": "Issues using WSL"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-18T11:36:36
| 2024-11-18T12:35:16
| 2024-11-18T12:33:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I encountered an error while using the new model minicpm-v (#6751), and the issue persists...
Since installing minicpm-v, my ollama server version has stayed at 0.3.6. My problem from #6751 has never been solved. I have completely deleted ollama, restarted Windows, and reinstalled ollama. As soon as minicpm-v shows up in the list again, my server version reverts to 0.3.6, the OLLAMA_MODELS setting stops taking effect, and I don't know where the new model was downloaded.
Here's what I painfully tried...
ollama -v
ollama version is 0.3.6
Warning: client version is 0.4.2
In #6751 my client version was 0.3.10, and it is now 0.4.2, but the server version is always 0.3.6...
ollama rm minicpm-v:latest
Error: unable to stop existing running model "minicpm-v:latest": llama runner process has terminated: GGML_ASSERT(new_clip->has_llava_projector) failed
I want to remove it, but there is no way.
ollama run minicpm-v:latest
Error: llama runner process has terminated: GGML_ASSERT(new_clip->has_llava_projector) failed
Obviously run won’t work either.

The directory set by OLLAMA_MODELS contains no minicpm-v files.

The default user/.ollama directory has no models directory.

But 'ollama list' still finds it; the model can only be created, not deleted or used.
What I want to do most now is delete these three damn models so they no longer show up in the list. I can list them on this machine's command line, but when calling from clients, even the list request reports an error. Please tell me how to find their location and delete them permanently, or how to clean up the whole environment when uninstalling ollama, so that I can be rid of this pain.
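Not an official answer, but one way to see which directory the server is actually reading is to walk the manifests tree yourself. Below is a minimal Go sketch, assuming the documented on-disk layout of `<models>/manifests/...` and the default of `<home>/.ollama/models` when OLLAMA_MODELS is unset:
```go
// find-models.go: a hypothetical helper that walks an Ollama models
// directory and prints every manifest it finds, so you can see which
// directory `ollama list` is actually reading.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumption: OLLAMA_MODELS, if set, points at the models directory;
	// otherwise Ollama defaults to <home>/.ollama/models.
	dir := os.Getenv("OLLAMA_MODELS")
	if dir == "" {
		home, _ := os.UserHomeDir()
		dir = filepath.Join(home, ".ollama", "models")
	}
	manifests := filepath.Join(dir, "manifests")
	err := filepath.Walk(manifests, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if !info.IsDir() {
			// Each manifest file path encodes registry/library/model/tag.
			fmt.Println(path)
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "walk failed:", err)
	}
}
```
Deleting a model by hand then amounts to removing its manifest file; the server removes blobs that are no longer referenced on the next start unless OLLAMA_NOPRUNE is set.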
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.2
|
{
"login": "supersaiyan2019",
"id": 130198547,
"node_id": "U_kgDOB8KsEw",
"avatar_url": "https://avatars.githubusercontent.com/u/130198547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/supersaiyan2019",
"html_url": "https://github.com/supersaiyan2019",
"followers_url": "https://api.github.com/users/supersaiyan2019/followers",
"following_url": "https://api.github.com/users/supersaiyan2019/following{/other_user}",
"gists_url": "https://api.github.com/users/supersaiyan2019/gists{/gist_id}",
"starred_url": "https://api.github.com/users/supersaiyan2019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/supersaiyan2019/subscriptions",
"organizations_url": "https://api.github.com/users/supersaiyan2019/orgs",
"repos_url": "https://api.github.com/users/supersaiyan2019/repos",
"events_url": "https://api.github.com/users/supersaiyan2019/events{/privacy}",
"received_events_url": "https://api.github.com/users/supersaiyan2019/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7725/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2179
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2179/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2179/comments
|
https://api.github.com/repos/ollama/ollama/issues/2179/events
|
https://github.com/ollama/ollama/pull/2179
| 2,099,212,292
|
PR_kwDOJ0Z1Ps5lAMwY
| 2,179
|
add `--upgrade-all` flag to refresh any stale models
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 7
| 2024-01-24T22:22:22
| 2024-04-16T22:58:11
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2179",
"html_url": "https://github.com/ollama/ollama/pull/2179",
"diff_url": "https://github.com/ollama/ollama/pull/2179.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2179.patch",
"merged_at": null
}
|
This change allows you to run `ollama pull --upgrade-all`, which will check each of your local models and upgrade any that are out of date. It uses ETags to check whether there is a newer manifest, and then pulls the model if it has been updated.
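A rough sketch of the freshness check described above, assuming a conditional HEAD request against the registry's manifest endpoint (the URL layout and header handling are illustrative, not the PR's actual code):
```go
// etag-check.go: a rough sketch of the staleness check the PR describes,
// not the PR's actual implementation.
package main

import (
	"fmt"
	"net/http"
)

// isStale reports whether the remote manifest's ETag differs from the
// one recorded when the model was last pulled.
func isStale(manifestURL, cachedETag string) (bool, error) {
	req, err := http.NewRequest(http.MethodHead, manifestURL, nil)
	if err != nil {
		return false, err
	}
	// If the server still has the same manifest, it answers 304.
	req.Header.Set("If-None-Match", cachedETag)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode != http.StatusNotModified, nil
}

func main() {
	// Hypothetical model and ETag, purely for illustration.
	url := "https://registry.ollama.ai/v2/library/llama2/manifests/latest"
	stale, err := isStale(url, `"deadbeef"`)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("needs pull:", stale)
}
```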
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2179/reactions",
"total_count": 23,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 23,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2179/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4171
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4171/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4171/comments
|
https://api.github.com/repos/ollama/ollama/issues/4171/events
|
https://github.com/ollama/ollama/issues/4171
| 2,279,561,818
|
I_kwDOJ0Z1Ps6H31pa
| 4,171
|
Inconsistent or unresponsive responses in ollama v0.1.33 using the llava model
|
{
"login": "iwannabewater",
"id": 82285305,
"node_id": "MDQ6VXNlcjgyMjg1MzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/82285305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iwannabewater",
"html_url": "https://github.com/iwannabewater",
"followers_url": "https://api.github.com/users/iwannabewater/followers",
"following_url": "https://api.github.com/users/iwannabewater/following{/other_user}",
"gists_url": "https://api.github.com/users/iwannabewater/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iwannabewater/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iwannabewater/subscriptions",
"organizations_url": "https://api.github.com/users/iwannabewater/orgs",
"repos_url": "https://api.github.com/users/iwannabewater/repos",
"events_url": "https://api.github.com/users/iwannabewater/events{/privacy}",
"received_events_url": "https://api.github.com/users/iwannabewater/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-05-05T14:33:16
| 2024-05-05T15:16:14
| 2024-05-05T15:16:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**Environment:**
Operating System: Ubuntu 22.04
Hardware: NVIDIA RTX 4090 GPU and Intel Xeon Gold 6326 CPU
ollama Version: v0.1.33
Model Used: llava:34b-v1.6-q4_0
**Description:**
I am experiencing issues with the llava model in ollama v0.1.33, where it fails to respond appropriately to queries or provides random and unrelated answers. This problem occurs when attempting to analyze images. For example, after running the model and querying about an image (car.jpg), the expected behavior is a detailed description relevant to the image content. However, the model either does not respond or describes content that does not match the image provided.
**Steps to Reproduce:**
Start the model using the command: ollama run llava:34b-v1.6-q4_0
Input commands to describe an image, for example, >>> describe it. the path is: car.jpg
Observe the lack of appropriate response or incorrect descriptions.
**Expected Behavior:**
The model should consistently provide accurate and relevant responses to the image content queries.
**Actual Behavior:**
Responses are either missing, delayed, or incorrect, significantly hindering project progress where image analysis is crucial.
**Additional Context:**
This behavior has been consistently reproducible, impacting our ability to efficiently use the model for critical tasks. Any insights or fixes would be greatly appreciated.
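If the CLI isn't picking up the image path from the prompt text, one way to rule out prompt-parsing issues is to pass the image explicitly through the `images` field of the documented `/api/generate` endpoint. A minimal Go sketch, assuming a local server on the default port:
```go
// llava-image.go: sends the image through the REST API's "images"
// field instead of mentioning the file path in the prompt.
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	raw, err := os.ReadFile("car.jpg") // path from the report above
	if err != nil {
		panic(err)
	}
	body, _ := json.Marshal(map[string]any{
		"model":  "llava:34b-v1.6-q4_0",
		"prompt": "Describe this image in detail.",
		"images": []string{base64.StdEncoding.EncodeToString(raw)},
		"stream": false,
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```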
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
v0.1.33
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4171/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4171/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5022
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5022/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5022/comments
|
https://api.github.com/repos/ollama/ollama/issues/5022/events
|
https://github.com/ollama/ollama/issues/5022
| 2,351,247,042
|
I_kwDOJ0Z1Ps6MJS7C
| 5,022
|
GPU VRAM estimate not accounting for flash attention
|
{
"login": "theasp",
"id": 7775024,
"node_id": "MDQ6VXNlcjc3NzUwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7775024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theasp",
"html_url": "https://github.com/theasp",
"followers_url": "https://api.github.com/users/theasp/followers",
"following_url": "https://api.github.com/users/theasp/following{/other_user}",
"gists_url": "https://api.github.com/users/theasp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theasp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theasp/subscriptions",
"organizations_url": "https://api.github.com/users/theasp/orgs",
"repos_url": "https://api.github.com/users/theasp/repos",
"events_url": "https://api.github.com/users/theasp/events{/privacy}",
"received_events_url": "https://api.github.com/users/theasp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2024-06-13T14:04:05
| 2024-10-18T09:42:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I'm using a q6_K quant of codestral-22b with an 18k context and flash attention enabled. I'm trying to configure a larger context, but I always have VRAM left over. It appears that the estimate does not account for the use of flash attention, as I still have 2882 MiB free (see the sketch after the logs below).
```
NAME ID SIZE PROCESSOR UNTIL
DEFAULT/codestral-22b:latest cd78ecba62ae 25 GB 100% GPU Forever
```
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.02 Driver Version: 555.42.02 CUDA Version: 12.5 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3090 Off | 00000000:01:00.0 Off | N/A |
| 0% 40C P8 34W / 420W | 21694MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 3659350 C ...unners/cuda_v11/ollama_llama_server 21684MiB |
+-----------------------------------------------------------------------------------------+
```
```
ollama-1 | time=2024-06-13T13:02:05.501Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=57 memory.available="23.3 GiB" memory.required.full="23.3 GiB" memory.required.partial="23.3 GiB" memory.required.kv="3.9 GiB" memory.weights.total="16.8 GiB" memory.weights.repeating="16.7 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="1.8 GiB" memory.graph.partial="1.8 GiB"
ollama-1 | time=2024-06-13T13:02:05.501Z level=INFO source=server.go:341 msg="starting llama server" cmd="/tmp/ollama151144087/runners/cuda_v11/ollama_llama_server --model
/root/.ollama/models/blobs/sha256-83d371fdab7d62c12eb780a034bf9b5ea89403e4d69e46d332d9bdaeff765c31 --ctx-size 18432 --batch-size 512 --embedding --log-disable --n-gpu-layers 57 --flash-attn --parallel 1 --port 35209"
[...]
time=2024-06-13T13:02:05.953Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 56 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 57/57 layers to GPU
llm_load_tensors: CPU buffer size = 157.50 MiB
llm_load_tensors: CUDA0 buffer size = 17248.90 MiB
time=2024-06-13T13:02:06.656Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 18432
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 1
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 4032.00 MiB
llama_new_context_with_model: KV self size = 4032.00 MiB, K (f16): 2016.00 MiB, V (f16): 2016.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.15 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 130.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 48.01 MiB
llama_new_context_with_model: graph nodes = 1575
llama_new_context_with_model: graph splits = 2
```
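For reference, the KV-cache term in the estimate can be reproduced by hand, so the slack appears to come from the graph buffer instead: the estimate reserves memory.graph.full="1.8 GiB" while the flash-attention run actually allocates only a 130 MiB compute buffer. A back-of-envelope Go sketch, assuming codestral-22b's published GQA shape (8 KV heads × 128 head dim):
```go
// kv-estimate.go: back-of-envelope check of the f16 KV-cache figure in
// the log above. The GQA dimensions (8 KV heads x 128 head dim) are an
// assumption based on codestral-22b's published architecture.
package main

import "fmt"

func main() {
	const (
		nCtx    = 18432    // --ctx-size from the server command line
		nLayer  = 56       // repeating layers offloaded to the GPU
		nEmbdKV = 8 * 128  // kv heads * head dim
		f16     = 2        // bytes per element
	)
	// K and V caches each hold nCtx * nEmbdKV f16 values per layer.
	var total int64 = 2 * nCtx * nLayer * nEmbdKV * f16
	fmt.Printf("KV cache: %.2f MiB\n", float64(total)/(1024*1024))
	// Prints 4032.00 MiB, matching "KV self size = 4032.00 MiB".
}
```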
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.43
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5022/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5022/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1373
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1373/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1373/comments
|
https://api.github.com/repos/ollama/ollama/issues/1373/events
|
https://github.com/ollama/ollama/issues/1373
| 2,024,194,473
|
I_kwDOJ0Z1Ps54psGp
| 1,373
|
Configuring/building from git cloned repo does not produce an ollama executable.
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-04T15:59:25
| 2023-12-04T16:31:24
| 2023-12-04T16:31:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Following the instructions in the README.md file,
go generate ./...
go build .
does not seem to leave an ollama executable in the folder.
Am I missing something? How does one build it and then install it?
From `go build .` I get the output below. Nothing has been changed in the code.
../go/pkg/mod/github.com/gin-contrib/cors@v1.4.0/config.go:7:2: //go:build comment without // +build comment
../go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:23:2: //go:build comment without // +build comment
../go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/binding/form_mapping.go:15:2: //go:build comment without // +build comment
../go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/binding/form_mapping.go:16:2: //go:build comment without // +build comment
../go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:24:2: //go:build comment without // +build comment
cmd/cmd.go:34:2: //go:build comment without // +build comment
../go/pkg/mod/golang.org/x/crypto@v0.14.0/ssh/kex.go:19:2: //go:build comment without // +build comment
server/images.go:22:2: //go:build comment without // +build comment
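These errors typically indicate a Go toolchain older than the one the module requires: before Go 1.18, files were expected to carry the legacy `// +build` line alongside the newer `//go:build` form, e.g.:
```go
//go:build linux
// +build linux

// Both constraint forms together satisfy pre-1.18 toolchains (which
// still expect the legacy "+build" line) as well as newer ones.
package main

func main() {}
```
Upgrading to the Go version listed in `go.mod` should make them disappear.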
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1373/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/999
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/999/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/999/comments
|
https://api.github.com/repos/ollama/ollama/issues/999/events
|
https://github.com/ollama/ollama/pull/999
| 1,977,324,402
|
PR_kwDOJ0Z1Ps5emHHy
| 999
|
add hass-ollama-conversation to community integrations
|
{
"login": "ej52",
"id": 6298706,
"node_id": "MDQ6VXNlcjYyOTg3MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6298706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ej52",
"html_url": "https://github.com/ej52",
"followers_url": "https://api.github.com/users/ej52/followers",
"following_url": "https://api.github.com/users/ej52/following{/other_user}",
"gists_url": "https://api.github.com/users/ej52/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ej52/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ej52/subscriptions",
"organizations_url": "https://api.github.com/users/ej52/orgs",
"repos_url": "https://api.github.com/users/ej52/repos",
"events_url": "https://api.github.com/users/ej52/events{/privacy}",
"received_events_url": "https://api.github.com/users/ej52/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-04T13:01:46
| 2023-11-06T18:50:35
| 2023-11-06T18:50:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/999",
"html_url": "https://github.com/ollama/ollama/pull/999",
"diff_url": "https://github.com/ollama/ollama/pull/999.diff",
"patch_url": "https://github.com/ollama/ollama/pull/999.patch",
"merged_at": "2023-11-06T18:50:35"
}
|
Add custom home assistant integration [hass-ollama-conversation](https://github.com/ej52/hass-ollama-conversation)
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/999/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7423
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7423/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7423/comments
|
https://api.github.com/repos/ollama/ollama/issues/7423/events
|
https://github.com/ollama/ollama/issues/7423
| 2,624,652,456
|
I_kwDOJ0Z1Ps6ccQSo
| 7,423
|
"model requires more system memory" When Running in Docker Container and Making Continue Plugin Request from Inside Intellij
|
{
"login": "nathan-hook",
"id": 10638625,
"node_id": "MDQ6VXNlcjEwNjM4NjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/10638625?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathan-hook",
"html_url": "https://github.com/nathan-hook",
"followers_url": "https://api.github.com/users/nathan-hook/followers",
"following_url": "https://api.github.com/users/nathan-hook/following{/other_user}",
"gists_url": "https://api.github.com/users/nathan-hook/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nathan-hook/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nathan-hook/subscriptions",
"organizations_url": "https://api.github.com/users/nathan-hook/orgs",
"repos_url": "https://api.github.com/users/nathan-hook/repos",
"events_url": "https://api.github.com/users/nathan-hook/events{/privacy}",
"received_events_url": "https://api.github.com/users/nathan-hook/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-10-30T16:26:08
| 2024-12-03T16:50:35
| 2024-12-02T14:47:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I hope that this is a PEBCAK issue and that there is a quick environment setting, but with my searching I couldn't find one.
## TL;DR
When using the [Continue Plugin](https://plugins.jetbrains.com/plugin/22707-continue) in my Intellij and then configuring it to talk to my local Docker containerized Ollama instance, I get the following error from the plugin:
```
HTTP 500 Internal Server Error from http://127.0.0.1:11434/api/chat {"error":"model requires more system memory (10.1 GiB) than is available (4.1 GiB)"}
```
## Longer Version
I am running Ollama in a Docker container along with Open WebUI via docker-compose on my Apple M2 Pro with 32 GB of memory:
```
services:
ollama:
volumes:
- ollama:/root/.ollama
container_name: ollama
pull_policy: always
ports:
- 11434:11434
environment:
- OLLAMA_DEBUG=1
tty: true
restart: unless-stopped
# image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}
image: ollama/ollama:latest
open-webui:
build:
context: .
args:
OLLAMA_BASE_URL: '/ollama'
dockerfile: Dockerfile
# image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
image: ghcr.io/open-webui/open-webui:latest
container_name: open-webui
volumes:
- open-webui:/app/backend/data
depends_on:
- ollama
ports:
# - ${OPEN_WEBUI_PORT-3000}:8080
- 3000:8080
environment:
- 'OLLAMA_BASE_URL=http://ollama:11434'
# - 'WEBUI_SECRET_KEY='
extra_hosts:
- host.docker.internal:host-gateway
restart: unless-stopped
volumes:
ollama: {}
open-webui: {}
```
In general, these instances have been able to handle most requests that I've made to them. I have asked general chat questions and then some [Fabric AI](https://github.com/danielmiessler/fabric) queries and Ollama/Open WebUI seemed to do just fine.
When making the requests from Intellij with the Continue plugin, memory suddenly became a problem.
Here is the exact error message from the Continue plugin:
```
HTTP 500 Internal Server Error from http://127.0.0.1:11434/api/chat {"error":"model requires more system memory (10.1 GiB) than is available (4.1 GiB)"}
```
Then here are the logs from Ollama with the DEBUG=1 environment variable set:
```
2024-10-30 10:01:33 2024/10/30 16:01:33 routes.go:1158: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.049Z level=INFO source=images.go:754 msg="total blobs: 10"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.050Z level=INFO source=images.go:761 msg="total unused blobs removed: 0"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.050Z level=INFO source=routes.go:1205 msg="Listening on [::]:11434 (version 0.3.14)"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 cpu]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.052Z level=DEBUG source=gpu.go:94 msg="searching for GPU discovery libraries for NVIDIA"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.052Z level=DEBUG source=gpu.go:505 msg="Searching for GPU library" name=libcuda.so*
2024-10-30 10:01:33 time=2024-10-30T16:01:33.052Z level=DEBUG source=gpu.go:528 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:562 msg="discovered GPU libraries" paths=[]
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:505 msg="Searching for GPU library" name=libcudart.so*
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:528 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:562 msg="discovered GPU libraries" paths="[/usr/lib/ollama/libcudart.so.12.4.99 /usr/lib/ollama/libcudart.so.11.3.109]"
2024-10-30 10:01:33 cudaSetDevice err: 35
2024-10-30 10:01:33 time=2024-10-30T16:01:33.056Z level=DEBUG source=gpu.go:578 msg="Unable to load cudart library /usr/lib/ollama/libcudart.so.12.4.99: your nvidia driver is too old or missing. If you have a CUDA GPU please upgrade to run ollama"
2024-10-30 10:01:33 cudaSetDevice err: 35
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=DEBUG source=gpu.go:578 msg="Unable to load cudart library /usr/lib/ollama/libcudart.so.11.3.109: your nvidia driver is too old or missing. If you have a CUDA GPU please upgrade to run ollama"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=DEBUG source=amd_linux.go:416 msg="amdgpu driver not detected /sys/module/amdgpu"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=INFO source=gpu.go:384 msg="no compatible GPUs were discovered"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant="no vector extensions" compute="" driver=0.0 name="" total="23.4 GiB" available="4.6 GiB"
2024-10-30 10:01:46 [GIN] 2024/10/30 - 16:01:46 | 200 | 3.489458ms | 172.18.0.3 | GET "/api/tags"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.061Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="23.4 GiB" before.free="4.6 GiB" before.free_swap="6.9 MiB" now.total="23.4 GiB" now.free="4.1 GiB" now.free_swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.061Z level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x7f22c0 gpu_count=1
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=sched.go:211 msg="cpu mode with first model, loading"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="23.4 GiB" before.free="4.1 GiB" before.free_swap="6.9 MiB" now.total="23.4 GiB" now.free="4.1 GiB" now.free_swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=INFO source=server.go:105 msg="system memory" total="23.4 GiB" free="4.1 GiB" free_swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=memory.go:103 msg=evaluating library=cpu gpu_count=1 available="[4.1 GiB]"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.070Z level=WARN source=server.go:137 msg="model request too large for system" requested="10.1 GiB" available=4443041792 total="23.4 GiB" free="4.1 GiB" swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.070Z level=INFO source=sched.go:428 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 error="model requires more system memory (10.1 GiB) than is available (4.1 GiB)"
2024-10-30 10:02:05 [GIN] 2024/10/30 - 16:02:05 | 500 | 18.648125ms | 192.168.65.1 | POST "/api/chat"
2024-10-30 10:06:17 [GIN] 2024/10/30 - 16:06:17 | 200 | 181.5µs | 127.0.0.1 | GET "/api/version"
2024-10-30 10:34:21 [GIN] 2024/10/30 - 16:34:21 | 200 | 232.25µs | 127.0.0.1 | HEAD "/"
2024-10-30 10:34:21 [GIN] 2024/10/30 - 16:34:21 | 200 | 281.833µs | 127.0.0.1 | GET "/api/ps"
2024-10-30 10:47:05 [GIN] 2024/10/30 - 16:47:05 | 200 | 1.833166ms | 172.18.0.3 | GET "/api/tags"
2024-10-30 10:47:05 [GIN] 2024/10/30 - 16:47:05 | 200 | 77.917µs | 172.18.0.3 | GET "/api/version"
2024-10-30 10:47:16 [GIN] 2024/10/30 - 16:47:16 | 200 | 61.084µs | 172.18.0.3 | GET "/api/version"
2024-10-30 10:47:22 [GIN] 2024/10/30 - 16:47:22 | 200 | 223.334µs | 172.18.0.3 | GET "/api/version"
```
Here are my `docker stats`:
```
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
2bef887cf4a9 open-webui 0.41% 698.3MiB / 23.44GiB 2.91% 28.4kB / 20.1kB 348MB / 41.5MB 22
29e804e5b9e1 ollama 0.00% 22.76MiB / 23.44GiB 0.09% 9.32kB / 8.49kB 15.2MB / 0B 12
```
My Docker Desktop has the following resource settings:
CPU Limit: 8
Memory Limit: 24 GB
Swap: 4 GB
Memory Statistics from my Mac:
Physical Memory: 32 GB
Memory Used: 27.20 GB
Cached Files: 4.75 GB
Swap Used: 2.07 GB
Models:
Llama3.1:latest
Mistral:7b
Any friendly direction on how to debug this issue or how to change some environment variables (in docker-compose) to just make this issue go away would be greatly appreciated.
FWIW, I am not interested in performance. I am just futzing around with integrating my IDE with a local LLM. And at the end of the day, I just want to see it work...
Thank you for all your hard work. Please let me know what comments, questions, or concerns you have.
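For reference, a minimal compose override sketch (the service name and values are assumptions; on Docker Desktop the hard memory ceiling is the VM size set under Settings → Resources, and compose-level limits can only stay below it):
```yaml
# docker-compose.override.yml -- sketch only; "ollama" service name assumed
services:
  ollama:
    environment:
      - OLLAMA_DEBUG=1        # verbose server logs while debugging
    deploy:
      resources:
        limits:
          memory: 20g         # must stay below the Docker Desktop VM limit
```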
### OS
macOS
### GPU
Other
### CPU
Apple
### Ollama version
ollama version is 0.3.14
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7423/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2131
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2131/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2131/comments
|
https://api.github.com/repos/ollama/ollama/issues/2131/events
|
https://github.com/ollama/ollama/pull/2131
| 2,092,841,329
|
PR_kwDOJ0Z1Ps5kqiMH
| 2,131
|
Probe GPUs before backend init
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-21T23:59:49
| 2024-01-22T00:13:51
| 2024-01-22T00:13:47
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2131",
"html_url": "https://github.com/ollama/ollama/pull/2131",
"diff_url": "https://github.com/ollama/ollama/pull/2131.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2131.patch",
"merged_at": "2024-01-22T00:13:47"
}
|
Detect potential error scenarios so we can fall back to CPU mode without hitting asserts.
This won't fix the underlying errors we're seeing in #1940 and #1877, but it should hopefully allow us to detect the non-working scenario and fall back to CPU. We still need to understand why `cudaGetDevice` is failing on these systems (maybe incompatible cuda libs or drivers?)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2131/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1415
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1415/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1415/comments
|
https://api.github.com/repos/ollama/ollama/issues/1415/events
|
https://github.com/ollama/ollama/issues/1415
| 2,030,673,808
|
I_kwDOJ0Z1Ps55CZ-Q
| 1,415
|
Override SYSTEM parameter by commandline
|
{
"login": "marco-trovato",
"id": 18162107,
"node_id": "MDQ6VXNlcjE4MTYyMTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/18162107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marco-trovato",
"html_url": "https://github.com/marco-trovato",
"followers_url": "https://api.github.com/users/marco-trovato/followers",
"following_url": "https://api.github.com/users/marco-trovato/following{/other_user}",
"gists_url": "https://api.github.com/users/marco-trovato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marco-trovato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marco-trovato/subscriptions",
"organizations_url": "https://api.github.com/users/marco-trovato/orgs",
"repos_url": "https://api.github.com/users/marco-trovato/repos",
"events_url": "https://api.github.com/users/marco-trovato/events{/privacy}",
"received_events_url": "https://api.github.com/users/marco-trovato/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 4
| 2023-12-07T12:37:53
| 2023-12-15T19:06:25
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
According to the [documentation](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md), the only way to change the SYSTEM prompt is to create a new model from a Modelfile that uses an already-downloaded model as its base:
`ollama create choose-a-model-name -f <location of the file, e.g. ./Modelfile>`
But this will copy and duplicate the model file (often bigger than 20 GB).
Using oTerm, however, it is possible to change the SYSTEM prompt; please refer to this screenshot for visual reference:

REQUEST:
**Please add a `--system` command-line flag to force the system prompt; example usage:**
ollama run codeup:13b-llama2-chat-q4_K_M --verbose **--system** "Roleplay as Matrix movie operator before answering the question." "Write Python code to loop for 1 to 10"
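A minimal Modelfile sketch of the current workaround, using the tag from the example above (how much disk the derived model actually consumes may depend on the Ollama version):
```
FROM codeup:13b-llama2-chat-q4_K_M
SYSTEM "Roleplay as Matrix movie operator before answering the question."
```
Then `ollama create matrix-codeup -f Modelfile` builds the derived model; the name `matrix-codeup` is just a placeholder.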
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1415/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2045
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2045/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2045/comments
|
https://api.github.com/repos/ollama/ollama/issues/2045/events
|
https://github.com/ollama/ollama/pull/2045
| 2,087,605,927
|
PR_kwDOJ0Z1Ps5kY3RI
| 2,045
|
docker-compose: added initial compose yaml
|
{
"login": "stevenbecht",
"id": 9442836,
"node_id": "MDQ6VXNlcjk0NDI4MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9442836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevenbecht",
"html_url": "https://github.com/stevenbecht",
"followers_url": "https://api.github.com/users/stevenbecht/followers",
"following_url": "https://api.github.com/users/stevenbecht/following{/other_user}",
"gists_url": "https://api.github.com/users/stevenbecht/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevenbecht/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevenbecht/subscriptions",
"organizations_url": "https://api.github.com/users/stevenbecht/orgs",
"repos_url": "https://api.github.com/users/stevenbecht/repos",
"events_url": "https://api.github.com/users/stevenbecht/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevenbecht/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-18T06:27:29
| 2024-02-21T00:34:03
| 2024-02-21T00:34:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2045",
"html_url": "https://github.com/ollama/ollama/pull/2045",
"diff_url": "https://github.com/ollama/ollama/pull/2045.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2045.patch",
"merged_at": null
}
|
Created initial docker-compose.yaml based on jamesbraza:docker-compose (#1379). We can use bash sockets to test whether the server is listening, as sketched below.
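The compose file itself is not included here, but "bash sockets" presumably refers to bash's /dev/tcp pseudo-device; a healthcheck sketch under that assumption (11434 is the default Ollama port, and bash must exist in the image):
```yaml
healthcheck:
  test: ["CMD-SHELL", "bash -c 'exec 3<>/dev/tcp/127.0.0.1/11434'"]
  interval: 10s
  timeout: 2s
  retries: 5
```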
|
{
"login": "stevenbecht",
"id": 9442836,
"node_id": "MDQ6VXNlcjk0NDI4MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9442836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevenbecht",
"html_url": "https://github.com/stevenbecht",
"followers_url": "https://api.github.com/users/stevenbecht/followers",
"following_url": "https://api.github.com/users/stevenbecht/following{/other_user}",
"gists_url": "https://api.github.com/users/stevenbecht/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevenbecht/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevenbecht/subscriptions",
"organizations_url": "https://api.github.com/users/stevenbecht/orgs",
"repos_url": "https://api.github.com/users/stevenbecht/repos",
"events_url": "https://api.github.com/users/stevenbecht/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevenbecht/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2045/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6072
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6072/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6072/comments
|
https://api.github.com/repos/ollama/ollama/issues/6072/events
|
https://github.com/ollama/ollama/issues/6072
| 2,437,660,658
|
I_kwDOJ0Z1Ps6RS7_y
| 6,072
|
Unable to get Ollama and OpenwebUI working at all
|
{
"login": "nicholhai",
"id": 96297412,
"node_id": "U_kgDOBb1hxA",
"avatar_url": "https://avatars.githubusercontent.com/u/96297412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicholhai",
"html_url": "https://github.com/nicholhai",
"followers_url": "https://api.github.com/users/nicholhai/followers",
"following_url": "https://api.github.com/users/nicholhai/following{/other_user}",
"gists_url": "https://api.github.com/users/nicholhai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicholhai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicholhai/subscriptions",
"organizations_url": "https://api.github.com/users/nicholhai/orgs",
"repos_url": "https://api.github.com/users/nicholhai/repos",
"events_url": "https://api.github.com/users/nicholhai/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicholhai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-30T11:58:52
| 2024-10-09T18:25:19
| 2024-09-04T01:57:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello All,
Does anyone have instructions on getting Ollama and WebUI working on a tower computer with the following specs: Intel Core i7-13700F 2.1GHz, GeForce RTX 4060 Ti 64GB, 64GB DDR5. **I tried all the following on Ubuntu Server 24.04 OS but can install any OS necessary**
I have it running perfectly on my Mac Studio, but I can't replicate it on this standalone machine.
### Here is what I have done so far
**Followed instructions from:** https://docs.openwebui.com/getting-started/
I tried all the different methods, both manually and with Docker, with every option for each. No luck.
**I had two issues open:**
https://github.com/ollama/ollama/issues/5892
https://github.com/ollama/ollama/issues/5925
### OS
_No response_
### GPU
Nvidia, Intel
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6072/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5759
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5759/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5759/comments
|
https://api.github.com/repos/ollama/ollama/issues/5759/events
|
https://github.com/ollama/ollama/issues/5759
| 2,414,944,510
|
I_kwDOJ0Z1Ps6P8SD-
| 5,759
|
service hang after some requests to /api/embeddings
|
{
"login": "JerryKwan",
"id": 990113,
"node_id": "MDQ6VXNlcjk5MDExMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/990113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JerryKwan",
"html_url": "https://github.com/JerryKwan",
"followers_url": "https://api.github.com/users/JerryKwan/followers",
"following_url": "https://api.github.com/users/JerryKwan/following{/other_user}",
"gists_url": "https://api.github.com/users/JerryKwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JerryKwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JerryKwan/subscriptions",
"organizations_url": "https://api.github.com/users/JerryKwan/orgs",
"repos_url": "https://api.github.com/users/JerryKwan/repos",
"events_url": "https://api.github.com/users/JerryKwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/JerryKwan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-07-18T01:14:49
| 2024-10-24T03:03:34
| 2024-10-24T03:03:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The service seems to hang after some requests to /api/embeddings, and needs a restart to recover.
Here are some logs:
```
[GIN] 2024/07/18 - 00:52:55 | 200 | 2.824880868s | 10.255.56.113 | POST "/api/embeddings"
time=2024-07-18T00:52:55.388Z level=INFO source=routes.go:298 msg="embedding generation failed: do embedding request: Post \"http://127.0.0.1:35303/embedding\": context canceled"
[GIN] 2024/07/18 - 00:52:55 | 500 | 257.27018ms | 10.255.56.113 | POST "/api/embeddings"
cuda driver library failed to get device context 800time=2024-07-18T00:57:55.395Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:55.649Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:55.898Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:56.148Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:56.399Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:56.649Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:56.899Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:57.148Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:57.399Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:57.649Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:57.898Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:58.148Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:58.398Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:58.648Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:58.899Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:59.148Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:59.398Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:59.649Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:57:59.898Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2024-07-18T00:58:00.149Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
time=2024-07-18T00:58:00.396Z level=WARN source=sched.go:634 msg="gpu VRAM usage didn't recover within timeout" seconds=5.006997071 model=/root/.ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6
cuda driver library failed to get device context 800time=2024-07-18T00:58:00.398Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
time=2024-07-18T00:58:00.646Z level=WARN source=sched.go:634 msg="gpu VRAM usage didn't recover within timeout" seconds=5.257512846 model=/root/.ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6
cuda driver library failed to get device context 800time=2024-07-18T00:58:00.648Z level=WARN source=gpu.go:399 msg="error looking up nvidia GPU memory"
time=2024-07-18T00:58:00.895Z level=WARN source=sched.go:634 msg="gpu VRAM usage didn't recover within timeout" seconds=5.506938263 model=/root/.ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6
```
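For context, the endpoint in question takes requests of this shape; a reproduction sketch with a placeholder model name:
```bash
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "some text to embed"}'
```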
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.5
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5759/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/5759/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4675
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4675/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4675/comments
|
https://api.github.com/repos/ollama/ollama/issues/4675/events
|
https://github.com/ollama/ollama/issues/4675
| 2,320,328,981
|
I_kwDOJ0Z1Ps6KTWkV
| 4,675
|
phi3: Error: llama runner process has terminated: exit status 0xc0000409
|
{
"login": "FreemanFeng",
"id": 1662126,
"node_id": "MDQ6VXNlcjE2NjIxMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1662126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FreemanFeng",
"html_url": "https://github.com/FreemanFeng",
"followers_url": "https://api.github.com/users/FreemanFeng/followers",
"following_url": "https://api.github.com/users/FreemanFeng/following{/other_user}",
"gists_url": "https://api.github.com/users/FreemanFeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FreemanFeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FreemanFeng/subscriptions",
"organizations_url": "https://api.github.com/users/FreemanFeng/orgs",
"repos_url": "https://api.github.com/users/FreemanFeng/repos",
"events_url": "https://api.github.com/users/FreemanFeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/FreemanFeng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-05-28T07:25:09
| 2024-06-09T17:14:00
| 2024-06-09T17:14:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama run phi3:medium-128k
ollama run phi3:3.8-mini-128k-instruct-q4_0
Both of the models above trigger the issue:
Error: llama runner process has terminated: exit status 0xc0000409
### OS
Windows
### GPU
Other
### CPU
Intel
### Ollama version
0.1.38
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4675/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4675/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5305
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5305/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5305/comments
|
https://api.github.com/repos/ollama/ollama/issues/5305/events
|
https://github.com/ollama/ollama/issues/5305
| 2,375,863,845
|
I_kwDOJ0Z1Ps6NnM4l
| 5,305
|
Application should skip the CLI tool install page during first run if they have already been installed. (macOS)
|
{
"login": "seanchristians",
"id": 25487785,
"node_id": "MDQ6VXNlcjI1NDg3Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/25487785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seanchristians",
"html_url": "https://github.com/seanchristians",
"followers_url": "https://api.github.com/users/seanchristians/followers",
"following_url": "https://api.github.com/users/seanchristians/following{/other_user}",
"gists_url": "https://api.github.com/users/seanchristians/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seanchristians/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seanchristians/subscriptions",
"organizations_url": "https://api.github.com/users/seanchristians/orgs",
"repos_url": "https://api.github.com/users/seanchristians/repos",
"events_url": "https://api.github.com/users/seanchristians/events{/privacy}",
"received_events_url": "https://api.github.com/users/seanchristians/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2024-06-26T17:04:01
| 2024-09-06T17:55:21
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm deploying Ollama for some of the users in my organization who do not have local administrator rights. I wrote a script to symlink the ollama executable to /usr/local/bin/ollama for the user during install.
However, when they start the app, it still asks them to install the command line tools, which they cannot do because they are not admins.
Is there a way to bypass this step during first run?
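For reference, the deployment-time symlink looks something like this (the bundle path is an assumption based on the standard macOS app layout):
```bash
# run by the install script, before the user's first launch
ln -sf /Applications/Ollama.app/Contents/Resources/ollama /usr/local/bin/ollama
```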
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.46
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5305/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7022
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7022/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7022/comments
|
https://api.github.com/repos/ollama/ollama/issues/7022/events
|
https://github.com/ollama/ollama/issues/7022
| 2,554,344,085
|
I_kwDOJ0Z1Ps6YQDKV
| 7,022
|
Can we have a native integrated gpu support ?
|
{
"login": "user7z",
"id": 161214583,
"node_id": "U_kgDOCZvwdw",
"avatar_url": "https://avatars.githubusercontent.com/u/161214583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/user7z",
"html_url": "https://github.com/user7z",
"followers_url": "https://api.github.com/users/user7z/followers",
"following_url": "https://api.github.com/users/user7z/following{/other_user}",
"gists_url": "https://api.github.com/users/user7z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/user7z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/user7z/subscriptions",
"organizations_url": "https://api.github.com/users/user7z/orgs",
"repos_url": "https://api.github.com/users/user7z/repos",
"events_url": "https://api.github.com/users/user7z/events{/privacy}",
"received_events_url": "https://api.github.com/users/user7z/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-28T15:43:18
| 2024-09-29T01:02:29
| 2024-09-28T22:42:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be great to have native Ollama support for iGPUs. For laptop use it would free the CPU threads for other tasks; the iGPU is that little device we rarely make use of. Performance aside, one would keep the CPU available for other work. llama.cpp & oneAPI is not the solution in my opinion, especially for iGPUs.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7022/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8438
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8438/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8438/comments
|
https://api.github.com/repos/ollama/ollama/issues/8438/events
|
https://github.com/ollama/ollama/pull/8438
| 2,789,135,889
|
PR_kwDOJ0Z1Ps6Hz_Xm
| 8,438
|
docs: fixed path to examples
|
{
"login": "Gloryjaw",
"id": 108608120,
"node_id": "U_kgDOBnk6eA",
"avatar_url": "https://avatars.githubusercontent.com/u/108608120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gloryjaw",
"html_url": "https://github.com/Gloryjaw",
"followers_url": "https://api.github.com/users/Gloryjaw/followers",
"following_url": "https://api.github.com/users/Gloryjaw/following{/other_user}",
"gists_url": "https://api.github.com/users/Gloryjaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gloryjaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gloryjaw/subscriptions",
"organizations_url": "https://api.github.com/users/Gloryjaw/orgs",
"repos_url": "https://api.github.com/users/Gloryjaw/repos",
"events_url": "https://api.github.com/users/Gloryjaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gloryjaw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2025-01-15T08:25:19
| 2025-01-15T19:49:12
| 2025-01-15T19:49:12
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8438",
"html_url": "https://github.com/ollama/ollama/pull/8438",
"diff_url": "https://github.com/ollama/ollama/pull/8438.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8438.patch",
"merged_at": "2025-01-15T19:49:12"
}
|
Fixed the path, which pointed at an examples folder that doesn't exist, to point at examples.md.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8438/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7918
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7918/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7918/comments
|
https://api.github.com/repos/ollama/ollama/issues/7918/events
|
https://github.com/ollama/ollama/issues/7918
| 2,715,200,067
|
I_kwDOJ0Z1Ps6h1qpD
| 7,918
|
Request to add semikong-8b to ollama
|
{
"login": "luoLojic",
"id": 153160666,
"node_id": "U_kgDOCSEL2g",
"avatar_url": "https://avatars.githubusercontent.com/u/153160666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luoLojic",
"html_url": "https://github.com/luoLojic",
"followers_url": "https://api.github.com/users/luoLojic/followers",
"following_url": "https://api.github.com/users/luoLojic/following{/other_user}",
"gists_url": "https://api.github.com/users/luoLojic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luoLojic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luoLojic/subscriptions",
"organizations_url": "https://api.github.com/users/luoLojic/orgs",
"repos_url": "https://api.github.com/users/luoLojic/repos",
"events_url": "https://api.github.com/users/luoLojic/events{/privacy}",
"received_events_url": "https://api.github.com/users/luoLojic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-12-03T14:33:50
| 2024-12-14T15:39:10
| 2024-12-14T15:39:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I would like to deploy the semikong-8b model locally. Semikong is a large model fine-tuned from Llama, focused on the semiconductor domain. You can find the model on Hugging Face at [https://huggingface.co/pentagoniac/SEMIKONG-8b-GPTQ](https://huggingface.co/pentagoniac/SEMIKONG-8b-GPTQ) and the GitHub repository at [https://github.com/aitomatic/semikong](https://github.com/aitomatic/semikong). The model is in the .safetensors format. I attempted to convert it using llama.cpp, but encountered the following error: `ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.g_idx'`. I have not found a suitable solution for this issue. I would love to see this model added to Ollama. Thanks for the developers' hard work.
My English is not very good, and these sentences were generated using a translation tool.
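For context, the `g_idx` tensor in that error is GPTQ quantization metadata, which llama.cpp's GGUF converter has no mapping for; the usual route is converting a full-precision checkpoint, if one is published. A sketch with placeholder paths (script name per current llama.cpp):
```bash
python convert_hf_to_gguf.py /path/to/SEMIKONG-8b --outfile semikong-8b.gguf
ollama create semikong-8b -f Modelfile   # Modelfile contains: FROM ./semikong-8b.gguf
```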
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7918/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/5897
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5897/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5897/comments
|
https://api.github.com/repos/ollama/ollama/issues/5897/events
|
https://github.com/ollama/ollama/issues/5897
| 2,426,329,835
|
I_kwDOJ0Z1Ps6Qntrr
| 5,897
|
Error: llama3.1 runner process has terminated: signal: aborted
|
{
"login": "harnalashok",
"id": 47495816,
"node_id": "MDQ6VXNlcjQ3NDk1ODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/47495816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harnalashok",
"html_url": "https://github.com/harnalashok",
"followers_url": "https://api.github.com/users/harnalashok/followers",
"following_url": "https://api.github.com/users/harnalashok/following{/other_user}",
"gists_url": "https://api.github.com/users/harnalashok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harnalashok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harnalashok/subscriptions",
"organizations_url": "https://api.github.com/users/harnalashok/orgs",
"repos_url": "https://api.github.com/users/harnalashok/repos",
"events_url": "https://api.github.com/users/harnalashok/events{/privacy}",
"received_events_url": "https://api.github.com/users/harnalashok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-07-24T00:08:01
| 2024-07-24T19:59:25
| 2024-07-24T19:59:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have downloaded llama3.1:8b using Ollama. I am getting the following error while running llama3.1; llama3 runs fine on the same system:
Error: llama runner process has terminated: signal: aborted
### OS
Windows 11 wsl2 Ubuntu
### GPU
GeForce RTX 4070
### CPU
_No response_
### Ollama version
ollama version is 0.1.38
Here are the server logs:
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 10: llama.attention.head_count u32 = 32
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 11: llama.attention.head_count_kv u32 = 8
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 12: llama.rope.freq_base f32 = 500000.000000
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 13: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 14: general.file_type u32 = 2
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 15: llama.vocab_size u32 = 128256
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 16: llama.rope.dimension_count u32 = 128
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 18: tokenizer.ggml.pre str = smaug-bpe
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", ">
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, >
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", ".>
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 128000
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 128001
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 24: tokenizer.chat_template str = {% set loop_messages = messa>
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - kv 25: general.quantization_version u32 = 2
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - type f32: 65 tensors
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - type q4_0: 225 tensors
Jul 24 07:14:08 ashok ollama[158]: llama_model_loader: - type q6_K: 1 tensors
Jul 24 07:14:08 ashok ollama[158]: time=2024-07-24T07:14:08.359+05:30 level=INFO source=server.go:540 msg="waiting for server to become available" status=">
Jul 24 07:14:08 ashok ollama[158]: llama_model_load: error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'smaug-bpe'
Jul 24 07:14:08 ashok ollama[158]: llama_load_model_from_file: exception loading model
Jul 24 07:14:08 ashok ollama[158]: terminate called after throwing an instance of 'std::runtime_error'
Jul 24 07:14:08 ashok ollama[158]: what(): error loading model vocabulary: unknown pre-tokenizer type: 'smaug-bpe'
Jul 24 07:14:08 ashok ollama[158]: time=2024-07-24T07:14:08.610+05:30 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner >
Jul 24 07:14:08 ashok ollama[158]: [GIN] 2024/07/24 - 07:14:08 | 500 | 4.333690088s | 127.0.0.1 | POST "/api/chat"
Jul 24 07:14:15 ashok ollama[158]: time=2024-07-24T07:14:15.636+05:30 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" sec>
Jul 24 07:14:19 ashok ollama[158]: time=2024-07-24T07:14:19.624+05:30 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" sec>
Jul 24 07:14:23 ashok ollama[158]: time=2024-07-24T07:14:23.388+05:30 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" sec>
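The `unknown pre-tokenizer type: 'smaug-bpe'` line indicates the bundled llama.cpp predates llama3.1's tokenizer support; upgrading Ollama is the usual fix. A sketch for Linux/WSL2:
```bash
curl -fsSL https://ollama.com/install.sh | sh   # upgrade in place
ollama run llama3.1:8b
```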
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5897/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1627
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1627/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1627/comments
|
https://api.github.com/repos/ollama/ollama/issues/1627/events
|
https://github.com/ollama/ollama/issues/1627
| 2,050,301,561
|
I_kwDOJ0Z1Ps56NR55
| 1,627
|
Can't run dolphin-mixtral, llama runner process has terminated
|
{
"login": "mbruhler",
"id": 21124163,
"node_id": "MDQ6VXNlcjIxMTI0MTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/21124163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbruhler",
"html_url": "https://github.com/mbruhler",
"followers_url": "https://api.github.com/users/mbruhler/followers",
"following_url": "https://api.github.com/users/mbruhler/following{/other_user}",
"gists_url": "https://api.github.com/users/mbruhler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbruhler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbruhler/subscriptions",
"organizations_url": "https://api.github.com/users/mbruhler/orgs",
"repos_url": "https://api.github.com/users/mbruhler/repos",
"events_url": "https://api.github.com/users/mbruhler/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbruhler/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-20T10:47:26
| 2024-01-08T21:42:04
| 2024-01-08T21:42:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I have trouble running dolphin-mixtral using Ollama.
When I type `ollama run dolphin-mixtral`, the message "llama runner process has terminated" appears.
This is the log:
```
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: KV self size = 512.00 MiB, K (f16): 256.00 MiB, V (f16): 256.00 MiB
llama_build_graph: non-view tensors processed: 1124/1124
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/var/folders/tw/8r367x6x1t12cqcm0w9fhcgh0000gn/T/ollama2368397414/llama.cpp/gguf/build/metal/bin/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M1 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 22906.50 MB
ggml_metal_init: maxTransferRate = built-in GPU
llama_new_context_with_model: compute buffer total size = 319.22 MiB
llama_new_context_with_model: max tensor size = 102.55 MiB
ggml_metal_add_buffer: allocated 'data ' buffer, size = 16384.00 MiB, offs = 0
ggml_metal_add_buffer: allocated 'data ' buffer, size = 8935.19 MiB, offs = 17072324608, (25320.81 / 21845.34)ggml_metal_add_buffer: warning: current allocated size is greater than the recommended max working set size
ggml_metal_add_buffer: allocated 'kv ' buffer, size = 512.03 MiB, (25832.84 / 21845.34)ggml_metal_add_buffer: warning: current allocated size is greater than the recommended max working set size
ggml_metal_add_buffer: allocated 'alloc ' buffer, size = 316.05 MiB, (26148.89 / 21845.34)ggml_metal_add_buffer: warning: current allocated size is greater than the recommended max working set size
ggml_metal_graph_compute: command buffer 4 failed with status 5
GGML_ASSERT: /Users/jmorgan/workspace/ollama/llm/llama.cpp/gguf/ggml-metal.m:2353: false
2023/12/20 11:45:01 llama.go:451: signal: abort trap
2023/12/20 11:45:01 llama.go:459: error starting llama runner: llama runner process has terminated
2023/12/20 11:45:01 llama.go:525: llama runner stopped successfully
[GIN] 2023/12/20 - 11:45:01 | 500 | 25.934369375s | 127.0.0.1 | POST "/api/generate"
```
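For what it's worth, the log itself shows the cause: about 26,149 MiB allocated against a recommended Metal working-set size of 21,845 MiB, so this quantization does not fit the M1 Pro's GPU memory budget. A smaller quantization is the usual workaround (the exact tag is an assumption about what the library offers):
```bash
ollama run dolphin-mixtral:8x7b-v2.5-q3_K_S
```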
I would greatly appreciate any help! :)
Thanks
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1627/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1629
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1629/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1629/comments
|
https://api.github.com/repos/ollama/ollama/issues/1629/events
|
https://github.com/ollama/ollama/issues/1629
| 2,050,479,541
|
I_kwDOJ0Z1Ps56N9W1
| 1,629
|
[Bug] Allocation problems when trying to use phi model
|
{
"login": "valentimarco",
"id": 26926690,
"node_id": "MDQ6VXNlcjI2OTI2Njkw",
"avatar_url": "https://avatars.githubusercontent.com/u/26926690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/valentimarco",
"html_url": "https://github.com/valentimarco",
"followers_url": "https://api.github.com/users/valentimarco/followers",
"following_url": "https://api.github.com/users/valentimarco/following{/other_user}",
"gists_url": "https://api.github.com/users/valentimarco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/valentimarco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/valentimarco/subscriptions",
"organizations_url": "https://api.github.com/users/valentimarco/orgs",
"repos_url": "https://api.github.com/users/valentimarco/repos",
"events_url": "https://api.github.com/users/valentimarco/events{/privacy}",
"received_events_url": "https://api.github.com/users/valentimarco/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-12-20T12:43:11
| 2024-01-12T05:56:55
| 2024-01-12T05:56:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I saw the new phi model on the registry and wanted to try it on my little server. The specs are below:
- R5 2600
- 32 GB RAM
- 128 GB SATA SSD
- NVIDIA GTX 960 4 GB (a special version from MSI)
- ollama:latest Docker image
I previously used Ollama with llama2:7b, which was slow (of course, the VRAM was at its limit and only 20/33 layers were offloaded), but I thought "maybe with the new phi model I can run this". So I pulled phi:latest and got a core dumped error...
Now the VRAM requirement is 2258.20 MiB, which I know I can fit, but when I try to run inference I get this error:
```bash
map[frequency_penalty:0 image_data:[] main_gpu:0 mirostat:0 mirostat_eta:0.1 mirostat_tau:5 n_keep:0 n_predict:-1 penalize_nl:true presence_penalty:0 prompt:System: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
User: why is the sky blue?
Assistant: repeat_last_n:64 repeat_penalty:1.1 seed:-1 stop:[<|endoftext|> User: Assistant: System:] stream:true temperature:0.8 tfs_z:1 top_k:40 top_p:0.9 typical_p:1]
cuBLAS error 15 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8458
current device: 0
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8458: !"cuBLAS error"
2023/12/20 12:32:01 llama.go:455: signal: aborted (core dumped)
```
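Decoding that status code: 15 maps to `CUBLAS_STATUS_NOT_SUPPORTED` in the CUDA headers, which would suggest an unsupported kernel/precision path on my compute-capability 5.2 card rather than running out of memory. A minimal lookup sketch (Go; the values are copied from `cublas_api.h`):

```go
package main

import "fmt"

// cublasStatus_t values as defined in the CUDA toolkit's cublas_api.h.
var cublasStatus = map[int]string{
	0:  "CUBLAS_STATUS_SUCCESS",
	1:  "CUBLAS_STATUS_NOT_INITIALIZED",
	3:  "CUBLAS_STATUS_ALLOC_FAILED",
	7:  "CUBLAS_STATUS_INVALID_VALUE",
	8:  "CUBLAS_STATUS_ARCH_MISMATCH",
	11: "CUBLAS_STATUS_MAPPING_ERROR",
	13: "CUBLAS_STATUS_EXECUTION_FAILED",
	14: "CUBLAS_STATUS_INTERNAL_ERROR",
	15: "CUBLAS_STATUS_NOT_SUPPORTED",
	16: "CUBLAS_STATUS_LICENSE_ERROR",
}

func main() {
	// The status hit in the trace above on the GTX 960.
	fmt.Println("cuBLAS error 15 =", cublasStatus[15])
}
```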
I also have the same issue on my PC (R5 5600X, 16 GB, 3070 Ti) when I try to run inference with mistral:latest (which I used before with zero problems).
I will put the full stack trace here:
```bash
2023/12/20 12:31:46 routes.go:895: Listening on [::]:11434 (version 0.1.17)
2023/12/20 12:31:53 llama.go:300: 4035 MB VRAM available, loading up to 63 GPU layers
2023/12/20 12:31:53 llama.go:440: starting llama runner
{{false 2048 512 0 -1 0 false true false false true false true 0 0 0} 0 -1 -1 40 0.9 1 1 64 0.8 1.1 0 0 0 5 0.1 true [<|endoftext|> User: Assistant: System:]} 0 0
[--model /root/.ollama/models/blobs/sha256:bd608f9545597ea3278b78038943059d1c29c62f3ca02c86523014f3a8c7a7f1 --ctx-size 2048 --batch-size 512 --n-gpu-layers 63 --embedding]
2023/12/20 12:31:53 llama.go:498: waiting for llama runner to start responding
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce GTX 960, compute capability 5.2
INFO build info | timestamp=1703075513 function=main line=2667 build=468 commit=a7aee47
INFO system info | timestamp=1703075513 function=main line=2670 n_threads=6 n_threads_batch=-1 total_threads=12 system_info=AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
llama_model_loader: loaded meta data with 20 key-value pairs and 325 tensors from /root/.ollama/models/blobs/sha256:bd608f9545597ea3278b78038943059d1c29c62f3ca02c86523014f3a8c7a7f1 (version GGUF V3 (latest))
llama_model_loader: - tensor 0: token_embd.weight q4_0 [ 2560, 51200, 1, 1 ]
llama_model_loader: - tensor 1: blk.0.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 2: blk.0.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 3: blk.0.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 4: blk.0.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 5: blk.0.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 6: blk.0.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 7: blk.0.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 8: blk.0.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 9: blk.0.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 10: blk.0.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 11: blk.1.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 12: blk.1.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 13: blk.1.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 14: blk.1.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 15: blk.1.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 16: blk.1.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 17: blk.1.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 18: blk.1.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 19: blk.1.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 20: blk.1.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 21: blk.10.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 22: blk.10.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 23: blk.10.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 24: blk.10.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 25: blk.10.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 26: blk.10.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 27: blk.10.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 28: blk.10.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 29: blk.10.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 30: blk.10.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 31: blk.11.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 32: blk.11.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 33: blk.11.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 34: blk.11.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 35: blk.11.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 36: blk.11.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 37: blk.11.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 38: blk.11.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 39: blk.11.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 40: blk.11.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 41: blk.12.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 42: blk.12.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 43: blk.12.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 44: blk.12.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 45: blk.12.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 46: blk.12.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 47: blk.12.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 48: blk.12.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 49: blk.12.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 50: blk.12.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 51: blk.13.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 52: blk.13.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 53: blk.13.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 54: blk.13.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 55: blk.13.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 56: blk.13.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 57: blk.13.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 58: blk.13.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 59: blk.13.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 60: blk.13.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 61: blk.14.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 62: blk.14.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 63: blk.14.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 64: blk.14.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 65: blk.14.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 66: blk.14.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 67: blk.14.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 68: blk.14.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 69: blk.14.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 70: blk.14.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 71: blk.15.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 72: blk.15.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 73: blk.15.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 74: blk.15.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 75: blk.15.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 76: blk.15.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 77: blk.15.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 78: blk.15.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 79: blk.15.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 80: blk.15.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 81: blk.16.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 82: blk.16.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 83: blk.16.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 84: blk.16.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 85: blk.16.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 86: blk.16.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 87: blk.16.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 88: blk.16.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 89: blk.16.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 90: blk.16.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 91: blk.17.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 92: blk.17.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 93: blk.17.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 94: blk.17.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 95: blk.17.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 96: blk.17.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 97: blk.17.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 98: blk.17.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 99: blk.17.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 100: blk.17.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 101: blk.18.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 102: blk.18.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 103: blk.18.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 104: blk.18.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 105: blk.18.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 106: blk.18.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 107: blk.18.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 108: blk.18.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 109: blk.18.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 110: blk.18.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 111: blk.19.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 112: blk.19.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 113: blk.19.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 114: blk.19.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 115: blk.19.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 116: blk.19.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 117: blk.19.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 118: blk.19.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 119: blk.19.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 120: blk.19.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 121: blk.2.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 122: blk.2.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 123: blk.2.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 124: blk.2.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 125: blk.2.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 126: blk.2.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 127: blk.2.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 128: blk.2.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 129: blk.2.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 130: blk.2.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 131: blk.20.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 132: blk.20.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 133: blk.20.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 134: blk.20.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 135: blk.20.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 136: blk.20.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 137: blk.20.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 138: blk.20.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 139: blk.20.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 140: blk.20.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 141: blk.21.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 142: blk.21.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 143: blk.21.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 144: blk.21.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 145: blk.21.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 146: blk.21.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 147: blk.21.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 148: blk.21.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 149: blk.21.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 150: blk.21.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 151: blk.22.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 152: blk.22.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 153: blk.22.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 154: blk.22.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 155: blk.22.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 156: blk.22.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 157: blk.22.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 158: blk.22.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 159: blk.22.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 160: blk.22.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 161: blk.23.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 162: blk.23.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 163: blk.23.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 164: blk.23.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 165: blk.23.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 166: blk.23.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 167: blk.23.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 168: blk.23.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 169: blk.23.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 170: blk.23.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 171: blk.24.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 172: blk.24.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 173: blk.24.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 174: blk.24.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 175: blk.24.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 176: blk.24.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 177: blk.24.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 178: blk.24.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 179: blk.24.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 180: blk.24.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 181: blk.25.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 182: blk.25.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 183: blk.25.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 184: blk.25.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 185: blk.25.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 186: blk.25.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 187: blk.25.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 188: blk.25.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 189: blk.25.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 190: blk.25.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 191: blk.26.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 192: blk.26.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 193: blk.26.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 194: blk.26.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 195: blk.26.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 196: blk.26.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 197: blk.26.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 198: blk.26.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 199: blk.26.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 200: blk.26.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 201: blk.27.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 202: blk.27.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 203: blk.27.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 204: blk.27.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 205: blk.27.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 206: blk.27.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 207: blk.27.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 208: blk.27.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 209: blk.27.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 210: blk.27.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 211: blk.28.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 212: blk.28.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 213: blk.28.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 214: blk.28.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 215: blk.28.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 216: blk.28.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 217: blk.28.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 218: blk.28.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 219: blk.28.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 220: blk.28.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 221: blk.29.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 222: blk.29.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 223: blk.29.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 224: blk.29.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 225: blk.29.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 226: blk.29.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 227: blk.29.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 228: blk.29.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 229: blk.29.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 230: blk.29.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 231: blk.3.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 232: blk.3.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 233: blk.3.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 234: blk.3.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 235: blk.3.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 236: blk.3.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 237: blk.3.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 238: blk.3.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 239: blk.3.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 240: blk.3.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 241: blk.30.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 242: blk.30.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 243: blk.4.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 244: blk.4.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 245: blk.4.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 246: blk.4.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 247: blk.4.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 248: blk.4.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 249: blk.4.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 250: blk.4.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 251: blk.4.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 252: blk.4.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 253: blk.5.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 254: blk.5.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 255: blk.5.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 256: blk.5.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 257: blk.5.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 258: blk.5.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 259: blk.5.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 260: blk.5.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 261: blk.5.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 262: blk.5.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 263: blk.6.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 264: blk.6.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 265: blk.6.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 266: blk.6.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 267: blk.6.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 268: blk.6.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 269: blk.6.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 270: blk.6.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 271: blk.6.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 272: blk.6.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 273: blk.7.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 274: blk.7.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 275: blk.7.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 276: blk.7.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 277: blk.7.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 278: blk.7.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 279: blk.7.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 280: blk.7.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 281: blk.7.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 282: blk.7.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 283: blk.8.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 284: blk.8.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 285: blk.8.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 286: blk.8.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 287: blk.8.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 288: blk.8.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 289: blk.8.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 290: blk.8.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 291: blk.8.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 292: blk.8.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 293: blk.9.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 294: blk.9.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 295: blk.9.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 296: blk.9.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 297: blk.9.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 298: blk.9.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 299: blk.9.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 300: blk.9.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 301: blk.9.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 302: blk.9.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 303: output.bias f32 [ 51200, 1, 1, 1 ]
llama_model_loader: - tensor 304: output.weight q6_K [ 2560, 51200, 1, 1 ]
llama_model_loader: - tensor 305: output_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 306: output_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 307: blk.30.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 308: blk.30.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 309: blk.30.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 310: blk.30.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 311: blk.30.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 312: blk.30.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 313: blk.30.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 314: blk.30.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: - tensor 315: blk.31.attn_norm.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 316: blk.31.attn_norm.weight f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 317: blk.31.attn_qkv.bias f32 [ 7680, 1, 1, 1 ]
llama_model_loader: - tensor 318: blk.31.attn_qkv.weight q4_0 [ 2560, 7680, 1, 1 ]
llama_model_loader: - tensor 319: blk.31.attn_output.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 320: blk.31.attn_output.weight q4_0 [ 2560, 2560, 1, 1 ]
llama_model_loader: - tensor 321: blk.31.ffn_up.bias f32 [ 10240, 1, 1, 1 ]
llama_model_loader: - tensor 322: blk.31.ffn_up.weight q4_0 [ 2560, 10240, 1, 1 ]
llama_model_loader: - tensor 323: blk.31.ffn_down.bias f32 [ 2560, 1, 1, 1 ]
llama_model_loader: - tensor 324: blk.31.ffn_down.weight q4_0 [ 10240, 2560, 1, 1 ]
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = phi2
llama_model_loader: - kv 1: general.name str = Phi2
llama_model_loader: - kv 2: phi2.context_length u32 = 2048
llama_model_loader: - kv 3: phi2.embedding_length u32 = 2560
llama_model_loader: - kv 4: phi2.feed_forward_length u32 = 10240
llama_model_loader: - kv 5: phi2.block_count u32 = 32
llama_model_loader: - kv 6: phi2.attention.head_count u32 = 32
llama_model_loader: - kv 7: phi2.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: phi2.attention.layer_norm_epsilon f32 = 0.000010
llama_model_loader: - kv 9: phi2.rope.dimension_count u32 = 32
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 12: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,51200] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,51200] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,50000] = ["Ġ t", "Ġ a", "h e", "i n", "r e",...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 50256
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 50256
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 50256
llama_model_loader: - kv 19: general.quantization_version u32 = 2
llama_model_loader: - type f32: 195 tensors
llama_model_loader: - type q4_0: 129 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: mismatch in special tokens definition ( 910/51200 vs 944/51200 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = phi2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 51200
llm_load_print_meta: n_merges = 50000
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_embd = 2560
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 32
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: f_norm_eps = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 10240
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 3B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 2.78 B
llm_load_print_meta: model size = 1.49 GiB (4.61 BPW)
llm_load_print_meta: general.name = Phi2
llm_load_print_meta: BOS token = 50256 '<|endoftext|>'
llm_load_print_meta: EOS token = 50256 '<|endoftext|>'
llm_load_print_meta: UNK token = 50256 '<|endoftext|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_tensors: ggml ctx size = 0.12 MiB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 70.44 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: VRAM used: 1456.19 MiB
...........................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: VRAM kv self = 640.00 MB
llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB
llama_build_graph: non-view tensors processed: 774/774
llama_new_context_with_model: compute buffer total size = 165.19 MiB
llama_new_context_with_model: VRAM scratch buffer: 162.00 MiB
llama_new_context_with_model: total VRAM used: 2258.20 MiB (model: 1456.19 MiB, context: 802.00 MiB)
INFO HTTP server listening | timestamp=1703075515 function=main line=3097 port=64867 hostname=127.0.0.1
INFO request | timestamp=1703075515 function=log_server_request line=2608 remote_addr=127.0.0.1 remote_port=46464 status=200 method=HEAD path=/ params={}
2023/12/20 12:31:55 llama.go:512: llama runner started in 2.200799 seconds
2023/12/20 12:31:55 llama.go:581: loaded 0 images
map[frequency_penalty:0 image_data:[] main_gpu:0 mirostat:0 mirostat_eta:0.1 mirostat_tau:5 n_keep:0 n_predict:-1 penalize_nl:true presence_penalty:0 prompt:System: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
User: why is the sky blue?
Assistant: repeat_last_n:64 repeat_penalty:1.1 seed:-1 stop:[<|endoftext|> User: Assistant: System:] stream:true temperature:0.8 tfs_z:1 top_k:40 top_p:0.9 typical_p:1]
cuBLAS error 15 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8458
current device: 0
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8458: !"cuBLAS error"
2023/12/20 12:32:01 llama.go:455: signal: aborted (core dumped)
[GIN] 2023/12/20 - 12:32:01 | 500 | 10.306732132s | 192.168.1.120 | POST "/api/chat"
2023/12/20 12:32:01 llama.go:529: llama runner stopped successfully
2023/12/20 12:36:55 llama.go:529: llama runner stopped successfully
```
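Reading the trace back, the allocation figures confirm the model does fit on the 4 GB card, so this doesn't look like simple VRAM exhaustion. A quick reconciliation (a sketch; all numbers are taken from the log above):

```go
package main

import "fmt"

func main() {
	// VRAM figures (MiB) reported in the trace above.
	model := 1456.19    // llm_load_tensors: VRAM used
	kv := 640.00        // llama_kv_cache_init: VRAM kv self
	scratch := 162.00   // llama_new_context_with_model: VRAM scratch buffer
	available := 4035.0 // llama.go:300: "4035 MB VRAM available"

	// Sums to the reported 2258.20 MiB total, to rounding.
	total := model + kv + scratch
	fmt.Printf("total %.2f MiB of %.0f MiB available (headroom %.2f MiB)\n",
		total, available, available-total)
}
```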
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1629/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4328
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4328/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4328/comments
|
https://api.github.com/repos/ollama/ollama/issues/4328/events
|
https://github.com/ollama/ollama/pull/4328
| 2,290,455,248
|
PR_kwDOJ0Z1Ps5vIy6D
| 4,328
|
count memory up to NumGPU if set by user
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-10T21:51:32
| 2024-05-14T20:47:45
| 2024-05-14T20:47:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4328",
"html_url": "https://github.com/ollama/ollama/pull/4328",
"diff_url": "https://github.com/ollama/ollama/pull/4328.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4328.patch",
"merged_at": "2024-05-14T20:47:45"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4328/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6112
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6112/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6112/comments
|
https://api.github.com/repos/ollama/ollama/issues/6112/events
|
https://github.com/ollama/ollama/pull/6112
| 2,441,451,506
|
PR_kwDOJ0Z1Ps53EQnT
| 6,112
|
Add Braina AI as an Ollama Desktop GUI
|
{
"login": "wallacelance",
"id": 177184683,
"node_id": "U_kgDOCo-fqw",
"avatar_url": "https://avatars.githubusercontent.com/u/177184683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wallacelance",
"html_url": "https://github.com/wallacelance",
"followers_url": "https://api.github.com/users/wallacelance/followers",
"following_url": "https://api.github.com/users/wallacelance/following{/other_user}",
"gists_url": "https://api.github.com/users/wallacelance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wallacelance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wallacelance/subscriptions",
"organizations_url": "https://api.github.com/users/wallacelance/orgs",
"repos_url": "https://api.github.com/users/wallacelance/repos",
"events_url": "https://api.github.com/users/wallacelance/events{/privacy}",
"received_events_url": "https://api.github.com/users/wallacelance/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-08-01T04:38:53
| 2024-09-06T02:39:19
| 2024-09-06T02:22:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6112",
"html_url": "https://github.com/ollama/ollama/pull/6112",
"diff_url": "https://github.com/ollama/ollama/pull/6112.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6112.patch",
"merged_at": null
}
|
### Overview
[Braina](https://www.brainasoft.com/braina/) supports Ollama natively on Windows. It automatically synchronizes with Ollama's model list and lets users access advanced features such as voice (both speech-to-text and text-to-speech), web search, file and webpage attachments, custom prompts, etc.
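For reviewers curious what "synchronizes with Ollama's model list" involves: a client can poll Ollama's `GET /api/tags` endpoint, which returns the locally available models. A minimal sketch of such a sync (assuming the default `localhost:11434` address; `tagsResponse` is a hypothetical name used only for this example):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagsResponse mirrors the shape of Ollama's /api/tags payload.
type tagsResponse struct {
	Models []struct {
		Name string `json:"name"`
	} `json:"models"`
}

func main() {
	resp, err := http.Get("http://localhost:11434/api/tags")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tags tagsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tags); err != nil {
		panic(err)
	}
	// Print each locally installed model the GUI would list.
	for _, m := range tags.Models {
		fmt.Println(m.Name)
	}
}
```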
### Screenshots
**Main Chat UI**
<img width="657" alt="LLM_Movies_results_with_web_access_Ollama_UI" src="https://github.com/user-attachments/assets/53e22fab-3977-4aa0-bdd3-9834de97f2d1">
**Model Management**
<img width="515" alt="manage-language-models" src="https://github.com/user-attachments/assets/059f48f5-eeaf-4dd8-9b56-2c096be9859b">
|
{
"login": "wallacelance",
"id": 177184683,
"node_id": "U_kgDOCo-fqw",
"avatar_url": "https://avatars.githubusercontent.com/u/177184683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wallacelance",
"html_url": "https://github.com/wallacelance",
"followers_url": "https://api.github.com/users/wallacelance/followers",
"following_url": "https://api.github.com/users/wallacelance/following{/other_user}",
"gists_url": "https://api.github.com/users/wallacelance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wallacelance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wallacelance/subscriptions",
"organizations_url": "https://api.github.com/users/wallacelance/orgs",
"repos_url": "https://api.github.com/users/wallacelance/repos",
"events_url": "https://api.github.com/users/wallacelance/events{/privacy}",
"received_events_url": "https://api.github.com/users/wallacelance/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6112/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6112/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/144
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/144/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/144/comments
|
https://api.github.com/repos/ollama/ollama/issues/144/events
|
https://github.com/ollama/ollama/pull/144
| 1,814,557,715
|
PR_kwDOJ0Z1Ps5WCFQ7
| 144
|
remove unused code
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-20T18:18:26
| 2023-07-24T19:30:57
| 2023-07-20T18:57:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/144",
"html_url": "https://github.com/ollama/ollama/pull/144",
"diff_url": "https://github.com/ollama/ollama/pull/144.diff",
"patch_url": "https://github.com/ollama/ollama/pull/144.patch",
"merged_at": "2023-07-20T18:57:30"
}
|
cleaning up some unused code I noticed
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/144/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8424
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8424/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8424/comments
|
https://api.github.com/repos/ollama/ollama/issues/8424/events
|
https://github.com/ollama/ollama/issues/8424
| 2,787,551,152
|
I_kwDOJ0Z1Ps6mJqew
| 8,424
|
requesting support for new model: MiniCPM-o-2_6
|
{
"login": "utopeadia",
"id": 98788152,
"node_id": "U_kgDOBeNjOA",
"avatar_url": "https://avatars.githubusercontent.com/u/98788152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/utopeadia",
"html_url": "https://github.com/utopeadia",
"followers_url": "https://api.github.com/users/utopeadia/followers",
"following_url": "https://api.github.com/users/utopeadia/following{/other_user}",
"gists_url": "https://api.github.com/users/utopeadia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/utopeadia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/utopeadia/subscriptions",
"organizations_url": "https://api.github.com/users/utopeadia/orgs",
"repos_url": "https://api.github.com/users/utopeadia/repos",
"events_url": "https://api.github.com/users/utopeadia/events{/privacy}",
"received_events_url": "https://api.github.com/users/utopeadia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 6
| 2025-01-14T15:47:24
| 2025-01-20T12:12:03
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Model URL: https://huggingface.co/openbmb/MiniCPM-o-2_6
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8424/reactions",
"total_count": 34,
"+1": 24,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 3,
"eyes": 4
}
|
https://api.github.com/repos/ollama/ollama/issues/8424/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8071
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8071/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8071/comments
|
https://api.github.com/repos/ollama/ollama/issues/8071/events
|
https://github.com/ollama/ollama/pull/8071
| 2,736,185,344
|
PR_kwDOJ0Z1Ps6FCKJr
| 8,071
|
llama: parse JSON schema using nlohmann::ordered_json
|
{
"login": "iscy",
"id": 294710,
"node_id": "MDQ6VXNlcjI5NDcxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/294710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iscy",
"html_url": "https://github.com/iscy",
"followers_url": "https://api.github.com/users/iscy/followers",
"following_url": "https://api.github.com/users/iscy/following{/other_user}",
"gists_url": "https://api.github.com/users/iscy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iscy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iscy/subscriptions",
"organizations_url": "https://api.github.com/users/iscy/orgs",
"repos_url": "https://api.github.com/users/iscy/repos",
"events_url": "https://api.github.com/users/iscy/events{/privacy}",
"received_events_url": "https://api.github.com/users/iscy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-12-12T15:11:18
| 2024-12-12T17:57:29
| 2024-12-12T17:57:29
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8071",
"html_url": "https://github.com/ollama/ollama/pull/8071",
"diff_url": "https://github.com/ollama/ollama/pull/8071.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8071.patch",
"merged_at": "2024-12-12T17:57:29"
}
|
PR #8002 handled the JSON within Go to keep the schema as-is, without affecting the order of the properties. However, the C++ wrapper parsed it with `nlohmann::json` instead of `nlohmann::ordered_json`. This PR simply switches the parser to the ordered variant so that property order is preserved.
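The hazard is easy to reproduce in any language whose default JSON object is unordered; Go's `encoding/json`, for instance, sorts map keys on output much like `nlohmann::json` stores object keys sorted. A small sketch of why the ordered variant matters:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A schema where the declared property order matters downstream.
	schema := []byte(`{"name":{"type":"string"},"age":{"type":"integer"}}`)

	// Round-tripping through a map loses the order: Go sorts map keys
	// on output, just as nlohmann::json keeps object keys sorted.
	var m map[string]any
	if err := json.Unmarshal(schema, &m); err != nil {
		panic(err)
	}
	reordered, _ := json.Marshal(m)
	fmt.Println(string(reordered)) // "age" now precedes "name"

	// Preserving declaration order (what ordered_json does on the C++
	// side) means carrying the document through in its original order.
	fmt.Println(string(schema))
}
```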
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8071/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4799
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4799/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4799/comments
|
https://api.github.com/repos/ollama/ollama/issues/4799/events
|
https://github.com/ollama/ollama/issues/4799
| 2,331,629,024
|
I_kwDOJ0Z1Ps6K-dXg
| 4,799
|
ollama (commit d4a8610): running deepseek-v2:16b fails with "Error: llama runner process has terminated: signal: aborted (core dumped)"
|
{
"login": "zhqfdn",
"id": 25156863,
"node_id": "MDQ6VXNlcjI1MTU2ODYz",
"avatar_url": "https://avatars.githubusercontent.com/u/25156863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhqfdn",
"html_url": "https://github.com/zhqfdn",
"followers_url": "https://api.github.com/users/zhqfdn/followers",
"following_url": "https://api.github.com/users/zhqfdn/following{/other_user}",
"gists_url": "https://api.github.com/users/zhqfdn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhqfdn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhqfdn/subscriptions",
"organizations_url": "https://api.github.com/users/zhqfdn/orgs",
"repos_url": "https://api.github.com/users/zhqfdn/repos",
"events_url": "https://api.github.com/users/zhqfdn/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhqfdn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 22
| 2024-06-03T16:54:31
| 2024-06-18T23:31:00
| 2024-06-18T23:31:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Jun 04 00:46:12 localhost.localdomain ollama[114642]: llama_model_loader: - type f32: 108 tensors
Jun 04 00:46:12 localhost.localdomain ollama[114642]: llama_model_loader: - type q4_0: 268 tensors
Jun 04 00:46:12 localhost.localdomain ollama[114642]: llama_model_loader: - type q6_K: 1 tensors
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_vocab: special tokens cache size = 2400
Jun 04 00:46:13 localhost.localdomain ollama[114642]: time=2024-06-04T00:46:13.129+08:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_vocab: token to piece cache size = 1.3318 MB
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: format = GGUF V3 (latest)
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: arch = deepseek2
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: vocab type = BPE
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_vocab = 102400
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_merges = 99757
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_ctx_train = 163840
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_embd = 2048
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_head = 16
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_head_kv = 16
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_layer = 27
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_rot = 64
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_embd_head_k = 192
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_embd_head_v = 128
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_gqa = 1
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_embd_k_gqa = 3072
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_embd_v_gqa = 2048
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: f_norm_eps = 0.0e+00
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: f_logit_scale = 0.0e+00
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_ff = 10944
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_expert = 64
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_expert_used = 6
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: causal attn = 1
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: pooling type = 0
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: rope type = 0
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: rope scaling = yarn
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: freq_base_train = 10000.0
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: freq_scale_train = 0.025
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_yarn_orig_ctx = 4096
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: rope_finetuned = unknown
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: ssm_d_conv = 0
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: ssm_d_inner = 0
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: ssm_d_state = 0
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: ssm_dt_rank = 0
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: model type = 16B
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: model ftype = Q4_0
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: model params = 15.71 B
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: model size = 8.29 GiB (4.53 BPW)
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: general.name = DeepSeek-V2-Lite-Chat
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: BOS token = 100000 '<|begin▁of▁sentence|>'
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: EOS token = 100001 '<|end▁of▁sentence|>'
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: PAD token = 100001 '<|end▁of▁sentence|>'
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: LF token = 126 'Ä'
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_layer_dense_lead = 1
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_lora_q = 0
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_lora_kv = 512
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_ff_exp = 1408
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: n_expert_shared = 2
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: expert_weights_scale = 1.0
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_print_meta: rope_yarn_log_mul = 0.0707
Jun 04 00:46:13 localhost.localdomain ollama[114642]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
Jun 04 00:46:13 localhost.localdomain ollama[114642]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Jun 04 00:46:13 localhost.localdomain ollama[114642]: ggml_cuda_init: found 1 CUDA devices:
Jun 04 00:46:13 localhost.localdomain ollama[114642]: Device 0: Tesla T4, compute capability 7.5, VMM: yes
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_tensors: ggml ctx size = 0.35 MiB
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_tensors: offloading 27 repeating layers to GPU
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_tensors: offloading non-repeating layers to GPU
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_tensors: offloaded 28/28 layers to GPU
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_tensors: CPU buffer size = 112.50 MiB
Jun 04 00:46:13 localhost.localdomain ollama[114642]: llm_load_tensors: CUDA0 buffer size = 8376.27 MiB
Jun 04 00:46:15 localhost.localdomain ollama[114642]: llama_new_context_with_model: n_ctx = 20480
Jun 04 00:46:15 localhost.localdomain ollama[114642]: llama_new_context_with_model: n_batch = 512
Jun 04 00:46:15 localhost.localdomain ollama[114642]: llama_new_context_with_model: n_ubatch = 512
Jun 04 00:46:15 localhost.localdomain ollama[114642]: llama_new_context_with_model: flash_attn = 1
Jun 04 00:46:15 localhost.localdomain ollama[114642]: llama_new_context_with_model: freq_base = 10000.0
Jun 04 00:46:15 localhost.localdomain ollama[114642]: llama_new_context_with_model: freq_scale = 0.025
Jun 04 00:46:15 localhost.localdomain ollama[114642]: llama_kv_cache_init: CUDA0 KV buffer size = 5400.00 MiB
Jun 04 00:46:15 localhost.localdomain ollama[114642]: llama_new_context_with_model: KV self size = 5400.00 MiB, K (f16): 3240.00 MiB, V (f16): 2160.00 MiB
Jun 04 00:46:15 localhost.localdomain ollama[114642]: llama_new_context_with_model: CUDA_Host output buffer size = 3.98 MiB
Jun 04 00:46:15 localhost.localdomain ollama[114642]: GGML_ASSERT: /home/tools/ollama/llm/llama.cpp/ggml.c:5714: ggml_nelements(a) == ne0*ne1
Jun 04 00:46:15 localhost.localdomain systemd-coredump[114889]: [🡕] Process 114887 (ollama_llama_se) of user 996 dumped core.
Jun 04 00:46:15 localhost.localdomain systemd-coredump[114896]: [🡕] Process 114812 (ollama_llama_se) of user 996 dumped core.
Jun 04 00:46:15 localhost.localdomain ollama[114642]: time=2024-06-04T00:46:15.954+08:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
Jun 04 00:46:16 localhost.localdomain ollama[114642]: time=2024-06-04T00:46:16.205+08:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) "
Jun 04 00:46:16 localhost.localdomain ollama[114642]: [GIN] 2024/06/04 - 00:46:16 | 500 | 5.505031311s | 10.10.11.11 | POST "/api/chat"
Jun 04 00:46:21 localhost.localdomain ollama[114642]: time=2024-06-04T00:46:21.896+08:00 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.6909691559999995
Jun 04 00:46:23 localhost.localdomain ollama[114642]: time=2024-06-04T00:46:23.229+08:00 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=7.023871504
Jun 04 00:46:24 localhost.localdomain ollama[114642]: time=2024-06-04T00:46:24.409+08:00 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=8.204406674
-------------------------------------------------------------
localhost.localdomain Tue Jun 4 00:48:30 2024 550.54.15
[0] Tesla T4 | 43°C, 0 % | 2642 / 15360 MB | ollama/114698(2640M)
[1] Tesla T4 | 34°C, 0 % | 2 / 15360 MB |
[2] Tesla T4 | 32°C, 0 % | 2 / 15360 MB |
[3] Tesla T4 | 33°C, 0 % | 2 / 15360 MB |
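The log shows the assert fires during context setup with flash_attn = 1. As a diagnostic sketch (assuming the OLLAMA_FLASH_ATTENTION toggle present in server builds of this era gates that path), one can retry with flash attention disabled:
```bash
# Assumption: OLLAMA_FLASH_ATTENTION=0 keeps the runner off the
# flash-attention code path taken before the GGML_ASSERT in ggml.c fires.
sudo systemctl stop ollama
OLLAMA_FLASH_ATTENTION=0 ollama serve &
ollama run deepseek-v2:16b "hello"
```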
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
d4a8610
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4799/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3568
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3568/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3568/comments
|
https://api.github.com/repos/ollama/ollama/issues/3568/events
|
https://github.com/ollama/ollama/issues/3568
| 2,234,544,997
|
I_kwDOJ0Z1Ps6FMHNl
| 3,568
|
ollama crashed at 0.1.31 - CUDA out of memory
|
{
"login": "abnormalboy",
"id": 77949946,
"node_id": "MDQ6VXNlcjc3OTQ5OTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/77949946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abnormalboy",
"html_url": "https://github.com/abnormalboy",
"followers_url": "https://api.github.com/users/abnormalboy/followers",
"following_url": "https://api.github.com/users/abnormalboy/following{/other_user}",
"gists_url": "https://api.github.com/users/abnormalboy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abnormalboy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abnormalboy/subscriptions",
"organizations_url": "https://api.github.com/users/abnormalboy/orgs",
"repos_url": "https://api.github.com/users/abnormalboy/repos",
"events_url": "https://api.github.com/users/abnormalboy/events{/privacy}",
"received_events_url": "https://api.github.com/users/abnormalboy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-04-10T00:53:35
| 2024-05-05T00:26:21
| 2024-05-05T00:26:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I use LangChain in Python, Ollama crashes. The model I use is "gemma:7b"; when I use "llama2:7b", Ollama works normally. Is my memory not sufficient? My GPU has 8 GB of memory.
```python
from langchain.llms.ollama import Ollama
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.vectorstores import DocArrayInMemorySearch
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_community.embeddings import OllamaEmbeddings
import asyncio

async def main():
    # The two seed texts mean "return in a parseable JSON format" and
    # 'e.g. {"hello":"world"}'.
    vectorstore = DocArrayInMemorySearch.from_texts(
        ["以可解析的json格式返回", "如{\"hello\":\"world\"}"],
        embedding=OllamaEmbeddings(),
    )
    retriever = vectorstore.as_retriever()
    template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
    prompt = ChatPromptTemplate.from_template(template)
    model = Ollama(model="gemma")
    output_parser = StrOutputParser()
    setup_and_retrieval = RunnableParallel(
        {"context": retriever, "question": RunnablePassthrough()}
    )
    chain = setup_and_retrieval | prompt | model | output_parser
    print(chain.input_schema.schema())
    chunks = []
    # The prompt means "return an apple".
    async for chunk in chain.astream("返回一个苹果"):
        chunks.append(chunk)
        print(chunk, end="", flush=True)

asyncio.run(main())
```
Ollama server log:
```
time=2024-04-10T08:55:13.247+08:00 level=WARN source=server.go:113 msg="server crash 59 - exit code 3221226505 - respawning"
```
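gemma:7b at the default Q4 quantization weighs roughly 5 GB, so on an 8 GB card the weights plus the KV cache plus the embedding model can exceed VRAM. A minimal sketch (assuming the standard /api/generate endpoint and the num_gpu option) that offloads only part of the model to the GPU:
```bash
# num_gpu = number of layers to keep in VRAM; the remainder runs on CPU.
# The value 20 is an assumption to tune for an 8 GB card.
curl http://localhost:11434/api/generate -d '{
  "model": "gemma:7b",
  "prompt": "hello",
  "stream": false,
  "options": { "num_gpu": 20 }
}'
```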
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.31
### GPU
Nvidia
### GPU info
<img width="635" alt="Snipaste_2024-04-10_08-50-17" src="https://github.com/ollama/ollama/assets/77949946/0d6a607e-c8ab-4db8-8b01-1c0e9738647b">
### CPU
Intel
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3568/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3047
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3047/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3047/comments
|
https://api.github.com/repos/ollama/ollama/issues/3047/events
|
https://github.com/ollama/ollama/issues/3047
| 2,177,892,096
|
I_kwDOJ0Z1Ps6Bz_8A
| 3,047
|
Ollama logging for ConnectionResetError
|
{
"login": "Bardo-Konrad",
"id": 1641761,
"node_id": "MDQ6VXNlcjE2NDE3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1641761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bardo-Konrad",
"html_url": "https://github.com/Bardo-Konrad",
"followers_url": "https://api.github.com/users/Bardo-Konrad/followers",
"following_url": "https://api.github.com/users/Bardo-Konrad/following{/other_user}",
"gists_url": "https://api.github.com/users/Bardo-Konrad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bardo-Konrad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bardo-Konrad/subscriptions",
"organizations_url": "https://api.github.com/users/Bardo-Konrad/orgs",
"repos_url": "https://api.github.com/users/Bardo-Konrad/repos",
"events_url": "https://api.github.com/users/Bardo-Konrad/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bardo-Konrad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-03-10T20:30:30
| 2024-03-12T07:21:34
| 2024-03-12T07:21:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I access Ollama using the Python library.
It communicates well, but after some exchanges I always get the following error (the German WinError 10054 message translates to "An existing connection was closed by the remote host"). It seems that I need to reset Ollama via Python, or maybe the context length is exceeded; how do I figure it out?
```
Traceback (most recent call last):
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 1386, in getresponse
response.begin()
File "c:\Lib\http\client.py", line 325, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 286, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\socket.py", line 706, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
ConnectionResetError: [WinError 10054] Eine vorhandene Verbindung wurde vom Remotehost geschlossen
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 799, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\util\retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\packages\six.py", line 769, in reraise
raise value.with_traceback(tb)
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 1386, in getresponse
response.begin()
File "c:\Lib\http\client.py", line 325, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 286, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\socket.py", line 706, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(10054, 'Eine vorhandene Verbindung wurde vom Remotehost geschlossen', None, 10054, None))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 157, in _process_emb_response
res = requests.post(
^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'Eine vorhandene Verbindung wurde vom Remotehost geschlossen', None, 10054, None))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\test.py", line 123, in <module>
rag(ds("documents"), "")
File "E:\test.py", line 93, in rag
result = chain.invoke(aufgabe).replace("\n"," ").replace("\r"," ").replace(" "," ")
^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\runnables\base.py", line 2075, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\runnables\base.py", line 2712, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\runnables\base.py", line 2712, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "c:\Lib\concurrent\futures\_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "c:\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\retrievers.py", line 141, in invoke
return self.get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\retrievers.py", line 244, in get_relevant_documents
raise e
File "c:\Lib\site-packages\langchain_core\retrievers.py", line 237, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\vectorstores.py", line 674, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\vectorstores\chroma.py", line 348, in similarity_search
docs_and_scores = self.similarity_search_with_score(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\vectorstores\chroma.py", line 437, in similarity_search_with_score
query_embedding = self._embedding_function.embed_query(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 217, in embed_query
embedding = self._embed([instruction_pair])[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 192, in _embed
return [self._process_emb_response(prompt) for prompt in iter_]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 192, in <listcomp>
return [self._process_emb_response(prompt) for prompt in iter_]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 163, in _process_emb_response
raise ValueError(f"Error raised by inference endpoint: {e}")
ValueError: Error raised by inference endpoint: ('Connection aborted.', ConnectionResetError(10054, 'Eine vorhandene Verbindung wurde vom Remotehost geschlossen', None, 10054, None))
```
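The failure surfaces inside LangChain's OllamaEmbeddings call, so a useful first step is to take LangChain out of the loop. A minimal sketch, assuming the /api/embeddings endpoint of that era:
```bash
# If this request also resets the connection, the Ollama runner itself is
# crashing (check the server log); if it succeeds, the fault is in the
# client stack or in request size/ordering.
curl http://localhost:11434/api/embeddings -d '{
  "model": "llama2",
  "prompt": "test sentence for embedding"
}'
```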
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3047/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5337
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5337/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5337/comments
|
https://api.github.com/repos/ollama/ollama/issues/5337/events
|
https://github.com/ollama/ollama/issues/5337
| 2,378,812,185
|
I_kwDOJ0Z1Ps6NycsZ
| 5,337
|
How can I set the parameter "num_return_sequences" to get multiple answers within one prompt?
|
{
"login": "superjessie",
"id": 29222783,
"node_id": "MDQ6VXNlcjI5MjIyNzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/29222783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/superjessie",
"html_url": "https://github.com/superjessie",
"followers_url": "https://api.github.com/users/superjessie/followers",
"following_url": "https://api.github.com/users/superjessie/following{/other_user}",
"gists_url": "https://api.github.com/users/superjessie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/superjessie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/superjessie/subscriptions",
"organizations_url": "https://api.github.com/users/superjessie/orgs",
"repos_url": "https://api.github.com/users/superjessie/repos",
"events_url": "https://api.github.com/users/superjessie/events{/privacy}",
"received_events_url": "https://api.github.com/users/superjessie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-06-27T18:16:58
| 2024-06-29T13:46:21
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The parameter "num_return_sequences" is a parameter of Hugging Face's model.generate(), but I could not figure out how to set it when using Ollama to run LLMs.
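Ollama's API has no direct equivalent of num_return_sequences; a minimal sketch that approximates it by issuing n independent sampled requests (assuming the standard /api/generate endpoint):
```bash
# With a non-zero temperature, each request samples a different sequence,
# which is what num_return_sequences returns in a single call.
for i in 1 2 3; do
  curl -s http://localhost:11434/api/generate -d '{
    "model": "llama3",
    "prompt": "Name a color.",
    "stream": false,
    "options": { "temperature": 0.9 }
  }'
  echo
done
```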
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5337/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7862
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7862/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7862/comments
|
https://api.github.com/repos/ollama/ollama/issues/7862/events
|
https://github.com/ollama/ollama/issues/7862
| 2,698,619,827
|
I_kwDOJ0Z1Ps6g2auz
| 7,862
|
no ssh key found
|
{
"login": "14919598",
"id": 185652779,
"node_id": "U_kgDOCxDWKw",
"avatar_url": "https://avatars.githubusercontent.com/u/185652779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/14919598",
"html_url": "https://github.com/14919598",
"followers_url": "https://api.github.com/users/14919598/followers",
"following_url": "https://api.github.com/users/14919598/following{/other_user}",
"gists_url": "https://api.github.com/users/14919598/gists{/gist_id}",
"starred_url": "https://api.github.com/users/14919598/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/14919598/subscriptions",
"organizations_url": "https://api.github.com/users/14919598/orgs",
"repos_url": "https://api.github.com/users/14919598/repos",
"events_url": "https://api.github.com/users/14919598/events{/privacy}",
"received_events_url": "https://api.github.com/users/14919598/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-27T13:48:07
| 2024-12-14T15:34:37
| 2024-12-14T15:34:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am trying to pull and run the tulu 8b model, but it fails with:
```
pulling manifest
Error: pull model manifest: ssh: no key found
```
I don't know what's wrong.

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.5
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7862/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5793
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5793/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5793/comments
|
https://api.github.com/repos/ollama/ollama/issues/5793/events
|
https://github.com/ollama/ollama/issues/5793
| 2,418,538,861
|
I_kwDOJ0Z1Ps6QJ_lt
| 5,793
|
ollama 0.2.7 function call error "llama3 does not support tools"
|
{
"login": "liseri",
"id": 2908276,
"node_id": "MDQ6VXNlcjI5MDgyNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2908276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liseri",
"html_url": "https://github.com/liseri",
"followers_url": "https://api.github.com/users/liseri/followers",
"following_url": "https://api.github.com/users/liseri/following{/other_user}",
"gists_url": "https://api.github.com/users/liseri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liseri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liseri/subscriptions",
"organizations_url": "https://api.github.com/users/liseri/orgs",
"repos_url": "https://api.github.com/users/liseri/repos",
"events_url": "https://api.github.com/users/liseri/events{/privacy}",
"received_events_url": "https://api.github.com/users/liseri/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-07-19T10:05:52
| 2024-10-07T07:06:49
| 2024-07-22T01:41:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama 0.2.7 function call error "llama3 does not support tools"
1. Install Ollama 0.2.7 in Docker
2. `ollama pull llama3`
3. Run the following curl request:
```
curl --location --request POST 'http://localhost:11434/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data-raw '{
"messages": [
{
"content": "what'\''s the weather of beijing",
"role": "user"
}
],
"model": "llama3",
"n": 1,
"temperature": 0.9,
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "get weather of location or city",
"parameters": {
"type": "object",
"properties": {
"query": {
"location": "location or city",
"type": "string"
}
},
"required": [
"query"
]
}
}
}
]
}'
```
4. The error returned:
```
{
"error": {
"message": "llama3 does not support tools",
"type": "api_error",
"param": null,
"code": null
}
}
```
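The error is accurate for this setup: tool calling needs both a server release that implements it and a model whose chat template declares tools, and the original llama3 template does not. A sketch of the same request against a tool-enabled model, assuming a later Ollama release (0.3.0 or newer) and llama3.1:
```bash
ollama pull llama3.1
curl http://localhost:11434/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "what is the weather in beijing"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "get weather of location or city",
        "parameters": {
          "type": "object",
          "properties": {
            "query": {"type": "string", "description": "location or city"}
          },
          "required": ["query"]
        }
      }
    }]
  }'
```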
### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.7
|
{
"login": "liseri",
"id": 2908276,
"node_id": "MDQ6VXNlcjI5MDgyNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2908276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liseri",
"html_url": "https://github.com/liseri",
"followers_url": "https://api.github.com/users/liseri/followers",
"following_url": "https://api.github.com/users/liseri/following{/other_user}",
"gists_url": "https://api.github.com/users/liseri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liseri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liseri/subscriptions",
"organizations_url": "https://api.github.com/users/liseri/orgs",
"repos_url": "https://api.github.com/users/liseri/repos",
"events_url": "https://api.github.com/users/liseri/events{/privacy}",
"received_events_url": "https://api.github.com/users/liseri/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5793/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7861
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7861/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7861/comments
|
https://api.github.com/repos/ollama/ollama/issues/7861/events
|
https://github.com/ollama/ollama/issues/7861
| 2,698,335,036
|
I_kwDOJ0Z1Ps6g1VM8
| 7,861
|
Support AMD 780m
|
{
"login": "fce2",
"id": 16529960,
"node_id": "MDQ6VXNlcjE2NTI5OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/16529960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fce2",
"html_url": "https://github.com/fce2",
"followers_url": "https://api.github.com/users/fce2/followers",
"following_url": "https://api.github.com/users/fce2/following{/other_user}",
"gists_url": "https://api.github.com/users/fce2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fce2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fce2/subscriptions",
"organizations_url": "https://api.github.com/users/fce2/orgs",
"repos_url": "https://api.github.com/users/fce2/repos",
"events_url": "https://api.github.com/users/fce2/events{/privacy}",
"received_events_url": "https://api.github.com/users/fce2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 3
| 2024-11-27T12:06:28
| 2024-12-04T02:02:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please consider adding better AMD support (e.g. the Ryzen 7 7840U with its integrated Radeon 780M).
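The 780M reports itself as gfx1103, which the official ROCm builds skip, so Ollama falls back to CPU. A common community workaround on Linux (an assumption that it applies to this setup) is to spoof the nearest supported target:
```bash
# gfx1103 (780M) is close enough to gfx1100 that the override usually works.
export HSA_OVERRIDE_GFX_VERSION=11.0.0
ollama serve
```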
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7861/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7861/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7149
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7149/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7149/comments
|
https://api.github.com/repos/ollama/ollama/issues/7149/events
|
https://github.com/ollama/ollama/pull/7149
| 2,576,011,282
|
PR_kwDOJ0Z1Ps5-Felo
| 7,149
|
Create ezaii.go
|
{
"login": "sahandmohammadrehzaii",
"id": 139042771,
"node_id": "U_kgDOCEmf0w",
"avatar_url": "https://avatars.githubusercontent.com/u/139042771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sahandmohammadrehzaii",
"html_url": "https://github.com/sahandmohammadrehzaii",
"followers_url": "https://api.github.com/users/sahandmohammadrehzaii/followers",
"following_url": "https://api.github.com/users/sahandmohammadrehzaii/following{/other_user}",
"gists_url": "https://api.github.com/users/sahandmohammadrehzaii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sahandmohammadrehzaii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sahandmohammadrehzaii/subscriptions",
"organizations_url": "https://api.github.com/users/sahandmohammadrehzaii/orgs",
"repos_url": "https://api.github.com/users/sahandmohammadrehzaii/repos",
"events_url": "https://api.github.com/users/sahandmohammadrehzaii/events{/privacy}",
"received_events_url": "https://api.github.com/users/sahandmohammadrehzaii/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-10-09T13:52:13
| 2024-10-09T18:18:52
| 2024-10-09T18:18:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7149",
"html_url": "https://github.com/ollama/ollama/pull/7149",
"diff_url": "https://github.com/ollama/ollama/pull/7149.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7149.patch",
"merged_at": null
}
| null |
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7149/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2961
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2961/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2961/comments
|
https://api.github.com/repos/ollama/ollama/issues/2961/events
|
https://github.com/ollama/ollama/pull/2961
| 2,172,443,038
|
PR_kwDOJ0Z1Ps5o49sr
| 2,961
|
cmd: document environment variables for serve command
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-06T21:17:52
| 2024-03-06T21:48:47
| 2024-03-06T21:48:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2961",
"html_url": "https://github.com/ollama/ollama/pull/2961",
"diff_url": "https://github.com/ollama/ollama/pull/2961.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2961.patch",
"merged_at": "2024-03-06T21:48:46"
}
|
Updates #2944
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2961/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2940
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2940/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2940/comments
|
https://api.github.com/repos/ollama/ollama/issues/2940/events
|
https://github.com/ollama/ollama/issues/2940
| 2,169,811,531
|
I_kwDOJ0Z1Ps6BVLJL
| 2,940
|
OLLAMA_MODELS env variable in bashrc doesnt work
|
{
"login": "harsham05",
"id": 8755540,
"node_id": "MDQ6VXNlcjg3NTU1NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8755540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harsham05",
"html_url": "https://github.com/harsham05",
"followers_url": "https://api.github.com/users/harsham05/followers",
"following_url": "https://api.github.com/users/harsham05/following{/other_user}",
"gists_url": "https://api.github.com/users/harsham05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harsham05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harsham05/subscriptions",
"organizations_url": "https://api.github.com/users/harsham05/orgs",
"repos_url": "https://api.github.com/users/harsham05/repos",
"events_url": "https://api.github.com/users/harsham05/events{/privacy}",
"received_events_url": "https://api.github.com/users/harsham05/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-03-05T17:47:27
| 2024-07-05T11:52:51
| 2024-03-12T01:22:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I've added the following to my .bashrc, but Ollama doesn't seem to store models there.
`export OLLAMA_MODELS=/path/to/models/`
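On Linux the installer runs Ollama as a systemd service, and a service never sources an interactive shell's .bashrc, so the variable has to be set on the unit itself. A sketch assuming the stock ollama.service:
```bash
# Open a drop-in override for the service and add the variable under [Service]:
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_MODELS=/path/to/models/"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```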
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2940/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1387
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1387/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1387/comments
|
https://api.github.com/repos/ollama/ollama/issues/1387/events
|
https://github.com/ollama/ollama/issues/1387
| 2,025,182,162
|
I_kwDOJ0Z1Ps54tdPS
| 1,387
|
ollama push {model} - 401 Couldn't Authorize
|
{
"login": "josiahbryan",
"id": 4821548,
"node_id": "MDQ6VXNlcjQ4MjE1NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4821548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josiahbryan",
"html_url": "https://github.com/josiahbryan",
"followers_url": "https://api.github.com/users/josiahbryan/followers",
"following_url": "https://api.github.com/users/josiahbryan/following{/other_user}",
"gists_url": "https://api.github.com/users/josiahbryan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josiahbryan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josiahbryan/subscriptions",
"organizations_url": "https://api.github.com/users/josiahbryan/orgs",
"repos_url": "https://api.github.com/users/josiahbryan/repos",
"events_url": "https://api.github.com/users/josiahbryan/events{/privacy}",
"received_events_url": "https://api.github.com/users/josiahbryan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-05T03:02:41
| 2023-12-05T19:30:33
| 2023-12-05T19:30:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Even after adding the contents of `~/.ollama/id_ed25519.pub` to the SSH keys section of my Ollama `josiahbryan` account, I still got:
```
% ollama push josiahbryan/dragon-mistral-7b-v0-q4
retrieving manifest
Error: on pull registry responded with code 401: {"message":"Couldn't authorize"}
```
Suggestions?
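The registry authorizes a push against the public key the CLI signs with, so a 401 after pasting a key usually means the pasted text does not match the key actually in use. A quick sketch to re-check, assuming the default key location:
```bash
# Print the key the client signs with and compare it character-for-character
# with the key registered on ollama.com; re-add it if they differ.
cat ~/.ollama/id_ed25519.pub
```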
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1387/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1387/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/910
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/910/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/910/comments
|
https://api.github.com/repos/ollama/ollama/issues/910/events
|
https://github.com/ollama/ollama/issues/910
| 1,962,673,276
|
I_kwDOJ0Z1Ps50_AR8
| 910
|
invalid URL escape
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2023-10-26T04:18:34
| 2023-10-26T19:24:13
| 2023-10-26T19:24:13
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Need to escape:
```
OLLAMA_HOST=https://redacted.fly.dev/ ollama run llama2:13b
Error: parse "https://redacted.fly.dev%2F:11434": invalid URL escape "%2F"
```
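Until the parsing is fixed, a minimal workaround is to drop the trailing slash so nothing gets percent-encoded into the host (a sketch, assuming the same redacted host):
```
OLLAMA_HOST=https://redacted.fly.dev ollama run llama2:13b
```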
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/910/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/910/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4327
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4327/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4327/comments
|
https://api.github.com/repos/ollama/ollama/issues/4327/events
|
https://github.com/ollama/ollama/pull/4327
| 2,290,454,200
|
PR_kwDOJ0Z1Ps5vIyrx
| 4,327
|
Ollama `ps` command for showing currently loaded models
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-10T21:50:12
| 2024-05-14T00:17:37
| 2024-05-14T00:17:36
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4327",
"html_url": "https://github.com/ollama/ollama/pull/4327",
"diff_url": "https://github.com/ollama/ollama/pull/4327.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4327.patch",
"merged_at": "2024-05-14T00:17:36"
}
|
This change adds a rudimentary `ps` command which makes use of the new scheduler changes in the server. The UX depends on whether you're using the CPU, the GPU, or a hybrid of both, and looks like:
```
NAME ID SIZE PROCESSOR UNTIL
mistral:latest 61e88e884507 5.4 GB 100% GPU 28 seconds from now
NAME ID SIZE PROCESSOR UNTIL
mistral:latest 61e88e884507 5.4 GB 48%/52% CPU/GPU 28 seconds from now
NAME ID SIZE PROCESSOR UNTIL
mistral:latest 61e88e884507 5.4 GB 100% CPU 28 seconds from now
```
Additionally, there is a new `--keepalive` flag in the REPL which can be used to set how long you want the model to stay resident in memory after inference has finished. It takes a duration string (e.g. `3m30s`); however, we could switch this to also accept integers, similar to the API.
This also introduces a new `/api/ps` endpoint which returns back a response similar to the `/api/tags` endpoint albeit with additional information. The size of the running model *will not* match the amount reported from the `/api/tags` endpoint for a given model since it can take additional memory when loaded onto the GPU or as a hybrid.
Partially addresses #3902
Fixes #4013
Replaces #2359
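For reference, the new endpoint can be exercised directly from the shell once a model is loaded (a minimal sketch):
```
# list currently loaded models and when they will unload
curl http://localhost:11434/api/ps
```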
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4327/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4327/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6026
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6026/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6026/comments
|
https://api.github.com/repos/ollama/ollama/issues/6026/events
|
https://github.com/ollama/ollama/issues/6026
| 2,433,920,762
|
I_kwDOJ0Z1Ps6REq76
| 6,026
|
The 1k context limit in Open-WebUI request is causing low-quality responses.
|
{
"login": "anrgct",
"id": 16172523,
"node_id": "MDQ6VXNlcjE2MTcyNTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/16172523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anrgct",
"html_url": "https://github.com/anrgct",
"followers_url": "https://api.github.com/users/anrgct/followers",
"following_url": "https://api.github.com/users/anrgct/following{/other_user}",
"gists_url": "https://api.github.com/users/anrgct/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anrgct/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anrgct/subscriptions",
"organizations_url": "https://api.github.com/users/anrgct/orgs",
"repos_url": "https://api.github.com/users/anrgct/repos",
"events_url": "https://api.github.com/users/anrgct/events{/privacy}",
"received_events_url": "https://api.github.com/users/anrgct/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 13
| 2024-07-28T12:57:42
| 2024-08-10T15:38:43
| 2024-08-10T15:38:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using open-webui, I've noticed that long contextual messages sent to ollama consistently result in poor responses. After investigating the issue, it appears that the `/api/chat` and `/v1/chat/completions` endpoints are defaulting to a 1k context limit. This means that when the content exceeds this length, the system automatically discards the earlier portions, leading to subpar answers. What follows is the captured network request data for open-webui version 0.3.8.
```
curl 'http://localhost:11434/api/chat' \
-X POST \
-H 'Host: localhost:11434' \
-H 'Accept: */*' \
-H 'User-Agent: Python/3.11 aiohttp/3.9.5' \
-H 'Content-Type: text/plain; charset=utf-8' \
--data-raw '{"model": "qwen1_5-4b-chat-q4_k_m", "messages": [{"role": "user", "content": "<long context>"}], "options": {}, "stream": true}'
```
Based on the final response, we can observe that the prompt_eval_count is 1026, which indicates that only approximately 1,000 tokens of context were processed.
```
{"role":"assistant","content":""},"done_reason":"stop","done":true,"total_duration":9987503333,"load_duration":28999667,"prompt_eval_count":1026,"prompt_eval_duration":1896469000,"eval_count":238,"eval_duration":8059779000}
```
I'm uncertain whether I should submit an issue about this bug to the open-webui repository.
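As a workaround until the client sends a larger window, `num_ctx` can be set explicitly in the request `options` (a sketch based on the captured request above; the value 8192 is an arbitrary choice):
```
curl 'http://localhost:11434/api/chat' \
  -H 'Content-Type: application/json' \
  --data-raw '{"model": "qwen1_5-4b-chat-q4_k_m", "messages": [{"role": "user", "content": "<long context>"}], "options": {"num_ctx": 8192}, "stream": true}'
```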
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.0
|
{
"login": "anrgct",
"id": 16172523,
"node_id": "MDQ6VXNlcjE2MTcyNTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/16172523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anrgct",
"html_url": "https://github.com/anrgct",
"followers_url": "https://api.github.com/users/anrgct/followers",
"following_url": "https://api.github.com/users/anrgct/following{/other_user}",
"gists_url": "https://api.github.com/users/anrgct/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anrgct/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anrgct/subscriptions",
"organizations_url": "https://api.github.com/users/anrgct/orgs",
"repos_url": "https://api.github.com/users/anrgct/repos",
"events_url": "https://api.github.com/users/anrgct/events{/privacy}",
"received_events_url": "https://api.github.com/users/anrgct/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6026/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6019
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6019/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6019/comments
|
https://api.github.com/repos/ollama/ollama/issues/6019/events
|
https://github.com/ollama/ollama/pull/6019
| 2,433,619,028
|
PR_kwDOJ0Z1Ps52ptV-
| 6,019
|
Update README.md / Added my mobile app to the list
|
{
"login": "Calvicii",
"id": 80085756,
"node_id": "MDQ6VXNlcjgwMDg1NzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/80085756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Calvicii",
"html_url": "https://github.com/Calvicii",
"followers_url": "https://api.github.com/users/Calvicii/followers",
"following_url": "https://api.github.com/users/Calvicii/following{/other_user}",
"gists_url": "https://api.github.com/users/Calvicii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Calvicii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Calvicii/subscriptions",
"organizations_url": "https://api.github.com/users/Calvicii/orgs",
"repos_url": "https://api.github.com/users/Calvicii/repos",
"events_url": "https://api.github.com/users/Calvicii/events{/privacy}",
"received_events_url": "https://api.github.com/users/Calvicii/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-27T20:38:37
| 2024-07-27T20:39:34
| 2024-07-27T20:39:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6019",
"html_url": "https://github.com/ollama/ollama/pull/6019",
"diff_url": "https://github.com/ollama/ollama/pull/6019.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6019.patch",
"merged_at": null
}
|
I have this functional app that acts as a client for Ollama.
|
{
"login": "Calvicii",
"id": 80085756,
"node_id": "MDQ6VXNlcjgwMDg1NzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/80085756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Calvicii",
"html_url": "https://github.com/Calvicii",
"followers_url": "https://api.github.com/users/Calvicii/followers",
"following_url": "https://api.github.com/users/Calvicii/following{/other_user}",
"gists_url": "https://api.github.com/users/Calvicii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Calvicii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Calvicii/subscriptions",
"organizations_url": "https://api.github.com/users/Calvicii/orgs",
"repos_url": "https://api.github.com/users/Calvicii/repos",
"events_url": "https://api.github.com/users/Calvicii/events{/privacy}",
"received_events_url": "https://api.github.com/users/Calvicii/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6019/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3882
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3882/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3882/comments
|
https://api.github.com/repos/ollama/ollama/issues/3882/events
|
https://github.com/ollama/ollama/pull/3882
| 2,261,738,601
|
PR_kwDOJ0Z1Ps5toGtE
| 3,882
|
AMD gfx patch rev is hex
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-24T16:44:29
| 2024-04-24T18:07:52
| 2024-04-24T18:07:49
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3882",
"html_url": "https://github.com/ollama/ollama/pull/3882",
"diff_url": "https://github.com/ollama/ollama/pull/3882.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3882.patch",
"merged_at": "2024-04-24T18:07:49"
}
|
Correctly handle gfx90a discovery
Fixes #3809
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3882/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8003
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8003/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8003/comments
|
https://api.github.com/repos/ollama/ollama/issues/8003/events
|
https://github.com/ollama/ollama/issues/8003
| 2,725,718,623
|
I_kwDOJ0Z1Ps6idypf
| 8,003
|
Allow for forcing an order of properties in structured JSON response
|
{
"login": "scd31",
"id": 57571338,
"node_id": "MDQ6VXNlcjU3NTcxMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/57571338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scd31",
"html_url": "https://github.com/scd31",
"followers_url": "https://api.github.com/users/scd31/followers",
"following_url": "https://api.github.com/users/scd31/following{/other_user}",
"gists_url": "https://api.github.com/users/scd31/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scd31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scd31/subscriptions",
"organizations_url": "https://api.github.com/users/scd31/orgs",
"repos_url": "https://api.github.com/users/scd31/repos",
"events_url": "https://api.github.com/users/scd31/events{/privacy}",
"received_events_url": "https://api.github.com/users/scd31/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-12-09T01:28:04
| 2024-12-09T12:04:51
| 2024-12-09T12:04:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When having an LLM respond with JSON I often do something along the lines of `{reasoning: "...", actual_property_i_care_about: "..."}`. The idea is that the `reasoning` property isn't used on my end but gives the LLM the ability to think first, like with CoT. Of course, this requires the LLM to populate the `reasoning` term before the other terms.
With the new structured JSON responses it would be nice if there were a way to enforce this. Even explicitly telling the LLM to put the properties in a specific order is ignored.
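For illustration, this is the kind of structured-output request involved (a sketch using the `format` JSON-schema field; the model name is just an example, and as noted above the declared property order is not currently enforced during generation):
```
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "Think first, then answer."}],
  "format": {
    "type": "object",
    "properties": {
      "reasoning": {"type": "string"},
      "actual_property_i_care_about": {"type": "string"}
    },
    "required": ["reasoning", "actual_property_i_care_about"]
  },
  "stream": false
}'
```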
|
{
"login": "scd31",
"id": 57571338,
"node_id": "MDQ6VXNlcjU3NTcxMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/57571338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scd31",
"html_url": "https://github.com/scd31",
"followers_url": "https://api.github.com/users/scd31/followers",
"following_url": "https://api.github.com/users/scd31/following{/other_user}",
"gists_url": "https://api.github.com/users/scd31/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scd31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scd31/subscriptions",
"organizations_url": "https://api.github.com/users/scd31/orgs",
"repos_url": "https://api.github.com/users/scd31/repos",
"events_url": "https://api.github.com/users/scd31/events{/privacy}",
"received_events_url": "https://api.github.com/users/scd31/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8003/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5949
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5949/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5949/comments
|
https://api.github.com/repos/ollama/ollama/issues/5949/events
|
https://github.com/ollama/ollama/issues/5949
| 2,429,779,374
|
I_kwDOJ0Z1Ps6Q032u
| 5,949
|
Out of Memory Error when using Meta-Llama-3.1-8B-Instruct-Q8_0.gguf model with Ollama ROCm with num_ctx=120000
|
{
"login": "renbuarl",
"id": 176577927,
"node_id": "U_kgDOCoZdhw",
"avatar_url": "https://avatars.githubusercontent.com/u/176577927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renbuarl",
"html_url": "https://github.com/renbuarl",
"followers_url": "https://api.github.com/users/renbuarl/followers",
"following_url": "https://api.github.com/users/renbuarl/following{/other_user}",
"gists_url": "https://api.github.com/users/renbuarl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/renbuarl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renbuarl/subscriptions",
"organizations_url": "https://api.github.com/users/renbuarl/orgs",
"repos_url": "https://api.github.com/users/renbuarl/repos",
"events_url": "https://api.github.com/users/renbuarl/events{/privacy}",
"received_events_url": "https://api.github.com/users/renbuarl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 15
| 2024-07-25T11:58:25
| 2024-10-17T17:37:30
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
OS: Linux 6.5.0-44-generic #44~22.04.1-Ubuntu
GPU:
AMD Radeon RX 7900 XTX (24 GiB VRAM)
AMD Radeon RX 7900 XTX (24 GiB VRAM)
AMD Radeon RX 7900 XTX (24 GiB VRAM)
Ollama version: 0.2.8
ROCm module version: 6.7.0
amdgpu-install_6.1.60103-1_all.deb
Model: Meta-Llama-3.1-8B-Instruct-Q8_0
While testing the Meta-Llama-3.1-8B-Instruct-Q8_0.gguf model, I encountered an out-of-memory error well before reaching the model's maximum context size of 128k. The model crashes after processing approximately 28,000 tokens, regardless of whether I use one GPU with 24 GB of memory (num_ctx = 30,000) or three GPUs with a combined 72 GB (num_ctx = 120,000).
Error:
```
Jul 25 12:39:17 ailab ollama[683]: CUDA error: out of memory
Jul 25 12:39:17 ailab ollama[683]: current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:291
Jul 25 12:39:17 ailab ollama[683]: ggml_cuda_device_malloc(&ptr, look_ahead_size, device)
Jul 25 12:39:17 ailab ollama[683]: GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:101: !"CUDA error"
```
There might be similar issues, but out of memory errors with multiple GPUs have not been reported yet.
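For a rough sense of scale, a back-of-envelope KV-cache estimate (a sketch assuming Llama-3.1-8B's 32 layers and 8 KV heads × 128 head dim = 1024 kv_dim at f16; compute/scratch buffers are not included):
```
# 2 (K and V) x layers x context x kv_dim x bytes per element (f16)
echo "$(( 2 * 32 * 120000 * 1024 * 2 / 1024 / 1024 / 1024 )) GiB"   # ~14 GiB
```
That would fit comfortably in 72 GiB, which suggests the failing allocation is more likely a per-GPU compute buffer than the KV cache itself.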
### OS
Linux
### GPU
AMD
### CPU
Intel
### Ollama version
0.2.8
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5949/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5949/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1980
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1980/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1980/comments
|
https://api.github.com/repos/ollama/ollama/issues/1980/events
|
https://github.com/ollama/ollama/issues/1980
| 2,080,495,348
|
I_kwDOJ0Z1Ps58Adb0
| 1,980
|
Make update script skip execution if current version is latest (improvement)
|
{
"login": "atassis",
"id": 5769345,
"node_id": "MDQ6VXNlcjU3NjkzNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5769345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atassis",
"html_url": "https://github.com/atassis",
"followers_url": "https://api.github.com/users/atassis/followers",
"following_url": "https://api.github.com/users/atassis/following{/other_user}",
"gists_url": "https://api.github.com/users/atassis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atassis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atassis/subscriptions",
"organizations_url": "https://api.github.com/users/atassis/orgs",
"repos_url": "https://api.github.com/users/atassis/repos",
"events_url": "https://api.github.com/users/atassis/events{/privacy}",
"received_events_url": "https://api.github.com/users/atassis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-01-13T22:19:56
| 2024-07-24T21:48:58
| 2024-07-24T21:48:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
You could put a `version` file in the root of the download directory and have the install script check whether the currently installed ollama has an identical version, for example.
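A minimal sketch of the idea against the GitHub releases API (hypothetical snippet; the install script would need its own equivalent):
```
#!/bin/sh
# skip the download when the installed version already matches the latest release
installed=$(ollama -v 2>/dev/null | grep -o '[0-9][0-9.]*' | head -n1)
latest=$(curl -fsSL https://api.github.com/repos/ollama/ollama/releases/latest \
  | grep -o '"tag_name": *"v[0-9.]*"' | grep -o '[0-9][0-9.]*' | head -n1)
if [ -n "$installed" ] && [ "$installed" = "$latest" ]; then
  echo "ollama $installed is already the latest version; skipping download"
  exit 0
fi
```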
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1980/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3212
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3212/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3212/comments
|
https://api.github.com/repos/ollama/ollama/issues/3212/events
|
https://github.com/ollama/ollama/issues/3212
| 2,191,147,373
|
I_kwDOJ0Z1Ps6CmkFt
| 3,212
|
ollama pull modelName Error
|
{
"login": "ZPLSSSTD",
"id": 21329959,
"node_id": "MDQ6VXNlcjIxMzI5OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/21329959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZPLSSSTD",
"html_url": "https://github.com/ZPLSSSTD",
"followers_url": "https://api.github.com/users/ZPLSSSTD/followers",
"following_url": "https://api.github.com/users/ZPLSSSTD/following{/other_user}",
"gists_url": "https://api.github.com/users/ZPLSSSTD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZPLSSSTD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZPLSSSTD/subscriptions",
"organizations_url": "https://api.github.com/users/ZPLSSSTD/orgs",
"repos_url": "https://api.github.com/users/ZPLSSSTD/repos",
"events_url": "https://api.github.com/users/ZPLSSSTD/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZPLSSSTD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-03-18T03:24:55
| 2024-03-28T20:52:24
| 2024-03-28T20:52:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I once accidentally (but successfully) pulled a 7B model. Afterwards, I executed the command `ollama pull llama2`, but there was an error. The error message is as follows:
```
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=qKzQl7GvJl7HVA-mW_-3Ow&scope=repository%!A(MISSING)library%!F(MISSING)llama2%!A(MISSING)pull&service=ollama.com&ts=1710731810": read tcp 192.168.4.190:13291->34.120.132.20:443: wsarecv: An existing connection was forcibly closed by the remote host.
```
I searched for this issue, but the answers I found did not solve my problem. I have tried restarting my computer and changing my WiFi, but it still doesn't work.
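If an intermediary is resetting the connection, routing the pull through a proxy sometimes helps (a sketch; the proxy address is hypothetical, and on Windows the variable must be set in the environment of the Ollama server before it starts):
```
# hypothetical proxy; must be visible to the server process
HTTPS_PROXY=http://127.0.0.1:7890 ollama serve
# then, from another shell:
ollama pull llama2
```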
### What did you expect to see?
I hope to be able to install the model normally
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.29
### GPU
Nvidia
### GPU info
Nvidia Geforce RTX 2060
### CPU
AMD
### Other software
_No response_
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3212/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/3298
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3298/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3298/comments
|
https://api.github.com/repos/ollama/ollama/issues/3298/events
|
https://github.com/ollama/ollama/issues/3298
| 2,203,068,766
|
I_kwDOJ0Z1Ps6DUCle
| 3,298
|
Vision with llava-1.6-7B is unusable via CLI
|
{
"login": "olafgeibig",
"id": 295644,
"node_id": "MDQ6VXNlcjI5NTY0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/295644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olafgeibig",
"html_url": "https://github.com/olafgeibig",
"followers_url": "https://api.github.com/users/olafgeibig/followers",
"following_url": "https://api.github.com/users/olafgeibig/following{/other_user}",
"gists_url": "https://api.github.com/users/olafgeibig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olafgeibig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olafgeibig/subscriptions",
"organizations_url": "https://api.github.com/users/olafgeibig/orgs",
"repos_url": "https://api.github.com/users/olafgeibig/repos",
"events_url": "https://api.github.com/users/olafgeibig/events{/privacy}",
"received_events_url": "https://api.github.com/users/olafgeibig/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 21
| 2024-03-22T18:24:39
| 2024-10-11T19:45:57
| 2024-05-10T23:22:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The image recognition is very poor. It can't describe the picture properly, and it also can't extract text. It seems to process a heavily downscaled image: it complains about the text being too small, and it makes assumptions about image elements that seem plausible but aren't true. It hallucinates a lot and makes things up.
Basically this model is very capable, but with Ollama it is unusable. I tried different tags: llava:7b-v1.6-mistral-q5_K_M and llava:7b from the ollama library. Additionally, I downloaded a Q5_K_M quant of the model from HF and imported it into my local ollama with a self-created modelfile. The problem was always the same.
I used this image, a screenshot from a PDF manual:

Prompt:
> extract the text underneath the image: /Users/olaf/Desktop/Screenshot\ 2024-03-22\ at\ 18.37.08.png
> Added image '/Users/olaf/Desktop/Screenshot 2024-03-22 at 18.37.08.png'
Ollama response:
> The image shows a digital advertisement or informational graphic, but it is too small and low resolution to provide a clear transcription of the text. The visible content includes a large purple logo in the upper right corner with the name "SWITCH" below it, and several smaller graphics or logos aligned vertically along the top and bottom edges of the image, which may be related to the product or service advertised on this webpage.
>
> Please note that due to the low resolution and size of the text in the image, it's difficult to provide an exact transcription of what is written.
It did not even try to extract the text; instead it described the image, and the description is complete nonsense.
Converting the image to JPG improved the recognition, but the model still made things up and the text extraction is only partially correct:
> The text underneath the image reads:
>
> "Click on any package result to see above. Otherwise the top row will not be visible. A popup will appear and display the packages in the next step.
>
> The first group of buttons is for the product options. The second set of buttons are for different shipping options. The third set ofbuttons allows you to change your email if needed.
>
> Please note that these buttons might not be active on this page but they will be when you click through from a live site."
Using a python script as described in https://ollama.com/blog/vision-models does not improve anything.
Log output after script execution with the png file:
```
time=2024-03-22T20:03:14.404+01:00 level=INFO source=dyn_ext_server.go:172 msg="loaded 1 images"
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":637,"tid":"0x172f5f000","timestamp":1711134194}
{"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":637,"tid":"0x172f5f000","timestamp":1711134194}
encode_image_with_clip: image embedding created: 576 tokens
encode_image_with_clip: image encoded in 658.77 ms by CLIP ( 1.14 ms per image patch)
{"function":"print_timings","level":"INFO","line":264,"msg":"prompt eval time = 6374.43 ms / 1 tokens ( 6374.43 ms per token, 0.16 tokens per second)","n_prompt_tokens_processed":1,"n_tokens_second":0.15687684555803996,"slot_id":0,"t_prompt_processing":6374.427,"t_token":6374.427,"task_id":637,"tid":"0x172f5f000","timestamp":1711134212}
{"function":"print_timings","level":"INFO","line":278,"msg":"generation eval time = 11258.49 ms / 321 runs ( 35.07 ms per token, 28.51 tokens per second)","n_decoded":321,"n_tokens_second":28.511811350934032,"slot_id":0,"t_token":35.073183800623056,"t_token_generation":11258.492,"task_id":637,"tid":"0x172f5f000","timestamp":1711134212}
{"function":"print_timings","level":"INFO","line":287,"msg":" total time = 17632.92 ms","slot_id":0,"t_prompt_processing":6374.427,"t_token_generation":11258.492,"t_total":17632.919,"task_id":637,"tid":"0x172f5f000","timestamp":1711134212}
{"function":"update_slots","level":"INFO","line":1660,"msg":"slot released","n_cache_tokens":322,"n_ctx":2048,"n_past":907,"n_system_tokens":0,"slot_id":0,"task_id":637,"tid":"0x172f5f000","timestamp":1711134212,"truncated":false}
[GIN] 2024/03/22 - 20:03:32 | 200 | 17.686484875s | 127.0.0.1 | POST "/api/chat"
```
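For reference, the PNG-to-JPG conversion that partially helped can be reproduced with ImageMagick before handing the file to the CLI (a sketch with hypothetical file names):
```
convert 'Screenshot 2024-03-22 at 18.37.08.png' -quality 95 screenshot.jpg
ollama run llava:7b "extract the text underneath the image: ./screenshot.jpg"
```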
### What did you expect to see?
I compared it with the results from this HF space running the same model in FP16 but most probably with pytorch and transformers: https://huggingface.co/spaces/merve/llava-next
prompt:
> extract the text underneath the image
response:
> The text underneath the image is as follows:
>
> "Click on any package results in additional information popping up, as seen above. Otherwise, the top row of text buttons represents view filters for seeing and sorting packs differently. The first group of buttons offers to filter packages by their source, either by showing only those by Bitwig, only those by Artists, or only those from Partners (like sound design companies, etc.). Or simply turn off this filter to see all packages
This is a good answer. It is exactly the text beneath the image in the image.
### Steps to reproduce
1. ollama run llava:7b
2. Use the uploaded picture with the prompt from above
### Are there any recent changes that introduced the issue?
_No response_
### OS
macOS
### Architecture
arm64
### Platform
_No response_
### Ollama version
0.1.29
### GPU
Apple
### GPU info
M1 Pro
### CPU
Apple
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3298/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3298/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8254
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8254/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8254/comments
|
https://api.github.com/repos/ollama/ollama/issues/8254/events
|
https://github.com/ollama/ollama/issues/8254
| 2,760,469,386
|
I_kwDOJ0Z1Ps6kiWuK
| 8,254
|
ollama not use GPU: when using NVIDIA GPU, it detected amdgpu driver and then use CPU to compute
|
{
"login": "Roc136",
"id": 57868577,
"node_id": "MDQ6VXNlcjU3ODY4NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/57868577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Roc136",
"html_url": "https://github.com/Roc136",
"followers_url": "https://api.github.com/users/Roc136/followers",
"following_url": "https://api.github.com/users/Roc136/following{/other_user}",
"gists_url": "https://api.github.com/users/Roc136/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Roc136/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Roc136/subscriptions",
"organizations_url": "https://api.github.com/users/Roc136/orgs",
"repos_url": "https://api.github.com/users/Roc136/repos",
"events_url": "https://api.github.com/users/Roc136/events{/privacy}",
"received_events_url": "https://api.github.com/users/Roc136/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-12-27T06:35:01
| 2024-12-28T12:10:22
| 2024-12-28T12:10:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm running Ollama on a device with an NVIDIA A100 80G GPU and an Intel(R) Xeon(R) Gold 5320 CPU. I built Ollama using the command `make CUSTOM_CPU_FLAGS=""`, started it with `ollama serve`, and ran `ollama run llama2` to load the Llama 2 model.
Problem:
Ollama is running on the CPU instead of the GPU.
I checked the logs by setting `OLLAMA_DEBUG=1` and found the following lines:
```
time=2024-12-27T13:46:02.212 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
```
It seems that Ollama is checking for the AMD driver. Is that expected, and why can't it use the GPU?
some info about GPU
```
$ nvidia-smi
Fri Dec 27 13:53:29 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.216.03 Driver Version: 535.216.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA A100 80GB PCIe Off | 00000000:00:0A.0 Off | 0 |
| N/A 50C P0 71W / 300W | 3MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
Full logs you may need:
```
2024/12/27 13:46:01 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-12-27T13:46:01.756 level=INFO source=images.go:757 msg="total blobs: 11"
time=2024-12-27T13:46:01.757 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-12-27T13:46:01.757 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4-11-g023e4bc)"
time=2024-12-27T13:46:01.758 level=DEBUG source=common.go:85 msg="no dynamic runners detected, using only built-in"
time=2024-12-27T13:46:01.758 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners=[cpu]
time=2024-12-27T13:46:01.758 level=DEBUG source=routes.go:1340 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-12-27T13:46:01.758 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-12-27T13:46:01.758 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2024-12-27T13:46:01.762 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-12-27T13:46:01.762 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so*
time=2024-12-27T13:46:01.762 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[libcuda.so* /libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-12-27T13:46:01.766 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03
dlsym: cuInit - 0x7fa8f88134b0
dlsym: cuDriverGetVersion - 0x7fa8f88134d0
dlsym: cuDeviceGetCount - 0x7fa8f8813510
dlsym: cuDeviceGet - 0x7fa8f88134f0
dlsym: cuDeviceGetAttribute - 0x7fa8f88135f0
dlsym: cuDeviceGetUuid - 0x7fa8f8813550
dlsym: cuDeviceGetName - 0x7fa8f8813530
dlsym: cuCtxCreate_v3 - 0x7fa8f881b1b0
dlsym: cuMemGetInfo_v2 - 0x7fa8f8826680
dlsym: cuCtxDestroy - 0x7fa8f8875680
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-12-27T13:46:01.781 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03
[GPU-f6468958-9e61-d8c3-ca75-787ea65d2617] CUDA totalMem 81050 mb
[GPU-f6468958-9e61-d8c3-ca75-787ea65d2617] CUDA freeMem 80627 mb
[GPU-f6468958-9e61-d8c3-ca75-787ea65d2617] Compute Capability 8.0
time=2024-12-27T13:46:02.212 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-12-27T13:46:02.212 level=INFO source=types.go:131 msg="inference compute" id=GPU-f6468958-9e61-d8c3-ca75-787ea65d2617 library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A100 80GB PCIe" total="79.2 GiB" available="78.7 GiB"
[GIN] 2024/12/27 - 13:46:40 | 200 | 112.806µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/12/27 - 13:46:40 | 200 | 10.851251ms | 127.0.0.1 | POST "/api/show"
time=2024-12-27T13:46:40.950 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="94.2 GiB" before.free="88.9 GiB" before.free_swap="119.2 GiB" now.total="94.2 GiB" now.free="88.8 GiB" now.free_swap="119.2 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03
dlsym: cuInit - 0x7fa8f88134b0
dlsym: cuDriverGetVersion - 0x7fa8f88134d0
dlsym: cuDeviceGetCount - 0x7fa8f8813510
dlsym: cuDeviceGet - 0x7fa8f88134f0
dlsym: cuDeviceGetAttribute - 0x7fa8f88135f0
dlsym: cuDeviceGetUuid - 0x7fa8f8813550
dlsym: cuDeviceGetName - 0x7fa8f8813530
dlsym: cuCtxCreate_v3 - 0x7fa8f881b1b0
dlsym: cuMemGetInfo_v2 - 0x7fa8f8826680
dlsym: cuCtxDestroy - 0x7fa8f8875680
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-12-27T13:46:41.140 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-f6468958-9e61-d8c3-ca75-787ea65d2617 name="NVIDIA A100 80GB PCIe" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="422.9 MiB"
releasing cuda driver library
time=2024-12-27T13:46:41.140 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x55ab4f7b39a0 gpu_count=1
time=2024-12-27T13:46:41.156 level=DEBUG source=sched.go:224 msg="loading first model" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
time=2024-12-27T13:46:41.156 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
time=2024-12-27T13:46:41.156 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 gpu=GPU-f6468958-9e61-d8c3-ca75-787ea65d2617 parallel=4 available=84544258048 required="8.7 GiB"
time=2024-12-27T13:46:41.157 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="94.2 GiB" before.free="88.8 GiB" before.free_swap="119.2 GiB" now.total="94.2 GiB" now.free="88.8 GiB" now.free_swap="119.2 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03
dlsym: cuInit - 0x7fa8f88134b0
dlsym: cuDriverGetVersion - 0x7fa8f88134d0
dlsym: cuDeviceGetCount - 0x7fa8f8813510
dlsym: cuDeviceGet - 0x7fa8f88134f0
dlsym: cuDeviceGetAttribute - 0x7fa8f88135f0
dlsym: cuDeviceGetUuid - 0x7fa8f8813550
dlsym: cuDeviceGetName - 0x7fa8f8813530
dlsym: cuCtxCreate_v3 - 0x7fa8f881b1b0
dlsym: cuMemGetInfo_v2 - 0x7fa8f8826680
dlsym: cuCtxDestroy - 0x7fa8f8875680
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-12-27T13:46:41.342 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-f6468958-9e61-d8c3-ca75-787ea65d2617 name="NVIDIA A100 80GB PCIe" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="422.9 MiB"
releasing cuda driver library
time=2024-12-27T13:46:41.342 level=INFO source=server.go:104 msg="system memory" total="94.2 GiB" free="88.8 GiB" free_swap="119.2 GiB"
time=2024-12-27T13:46:41.342 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
time=2024-12-27T13:46:41.343 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[78.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.7 GiB" memory.required.partial="8.7 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[8.7 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="7.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="681.0 MiB"
time=2024-12-27T13:46:41.344 level=DEBUG source=gpu.go:714 msg="no filter required for library cpu"
time=2024-12-27T13:46:41.344 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --verbose --threads 64 --parallel 4 --port 44134"
time=2024-12-27T13:46:41.344 level=DEBUG source=server.go:393 msg=subprocess environment="[PATH=/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=.:/usr/bin]"
time=2024-12-27T13:46:41.345 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-12-27T13:46:41.345 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2024-12-27T13:46:41.345 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2024-12-27T13:46:41.364 level=INFO source=runner.go:938 msg="starting go runner"
time=2024-12-27T13:46:41.364 level=INFO source=runner.go:939 msg=system info="CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=64
time=2024-12-27T13:46:41.364 level=INFO source=runner.go:997 msg="Server listening on 127.0.0.1:44134"
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: control token: 2 '</s>' is not marked as EOG
llm_load_vocab: control token: 1 '<s>' is not marked as EOG
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOG token = 2 '</s>'
llm_load_print_meta: max token length = 48
llm_load_tensors: tensor 'token_embd.weight' (q4_0) (and 290 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
time=2024-12-27T13:46:41.597 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: CPU_Mapped model buffer size = 3647.87 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (4096) -- the full capacity of the model will not be utilized
time=2024-12-27T13:46:41.847 level=DEBUG source=server.go:600 msg="model load progress 1.00"
time=2024-12-27T13:46:42.098 level=DEBUG source=server.go:603 msg="model load completed, waiting for server to become available" status="llm server loading model"
llama_kv_cache_init: CPU KV buffer size = 4096.00 MiB
llama_new_context_with_model: KV self size = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.55 MiB
llama_new_context_with_model: CPU compute buffer size = 560.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
time=2024-12-27T13:46:44.356 level=INFO source=server.go:594 msg="llama runner started in 3.01 seconds"
time=2024-12-27T13:46:44.356 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
[GIN] 2024/12/27 - 13:46:44 | 200 | 3.416664942s | 127.0.0.1 | POST "/api/generate"
time=2024-12-27T13:46:44.357 level=DEBUG source=sched.go:466 msg="context for request finished"
time=2024-12-27T13:46:44.357 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 duration=5m0s
time=2024-12-27T13:46:44.357 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 refCount=0
```
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4-11-g023e4bc
|
{
"login": "Roc136",
"id": 57868577,
"node_id": "MDQ6VXNlcjU3ODY4NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/57868577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Roc136",
"html_url": "https://github.com/Roc136",
"followers_url": "https://api.github.com/users/Roc136/followers",
"following_url": "https://api.github.com/users/Roc136/following{/other_user}",
"gists_url": "https://api.github.com/users/Roc136/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Roc136/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Roc136/subscriptions",
"organizations_url": "https://api.github.com/users/Roc136/orgs",
"repos_url": "https://api.github.com/users/Roc136/repos",
"events_url": "https://api.github.com/users/Roc136/events{/privacy}",
"received_events_url": "https://api.github.com/users/Roc136/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8254/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4403
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4403/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4403/comments
|
https://api.github.com/repos/ollama/ollama/issues/4403/events
|
https://github.com/ollama/ollama/issues/4403
| 2,292,851,077
|
I_kwDOJ0Z1Ps6IqiGF
| 4,403
|
Why, when running inference on Qwen1.5-32B-Chat-GPTQ-Int4 with the same script (Qwen1.5/examples/web_demo.py), does the 4090 24G respond 5x faster than the V100 32G? What is the reason: is it a GPU performance issue, or is there some configuration in the code that hasn't been enabled, preventing the V100's compute capability from being fully utilized?
|
{
"login": "lbl1120",
"id": 152936427,
"node_id": "U_kgDOCR2f6w",
"avatar_url": "https://avatars.githubusercontent.com/u/152936427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lbl1120",
"html_url": "https://github.com/lbl1120",
"followers_url": "https://api.github.com/users/lbl1120/followers",
"following_url": "https://api.github.com/users/lbl1120/following{/other_user}",
"gists_url": "https://api.github.com/users/lbl1120/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lbl1120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lbl1120/subscriptions",
"organizations_url": "https://api.github.com/users/lbl1120/orgs",
"repos_url": "https://api.github.com/users/lbl1120/repos",
"events_url": "https://api.github.com/users/lbl1120/events{/privacy}",
"received_events_url": "https://api.github.com/users/lbl1120/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-13T13:34:08
| 2024-05-13T17:34:08
| 2024-05-13T17:34:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Why, when running inference on Qwen1.5-32B-Chat-GPTQ-Int4 with the same script (Qwen1.5/examples/web_demo.py), does the 4090 24G respond 5x faster than the V100 32G? What is the reason: is it a GPU performance issue, or is there some configuration in the code that hasn't been enabled, preventing the V100's compute capability from being fully utilized?
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4403/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3724
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3724/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3724/comments
|
https://api.github.com/repos/ollama/ollama/issues/3724/events
|
https://github.com/ollama/ollama/pull/3724
| 2,249,756,328
|
PR_kwDOJ0Z1Ps5tADl6
| 3,724
|
types/model: accept former `:` as a separator in digest
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-18T05:07:15
| 2024-04-18T21:17:47
| 2024-04-18T21:17:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3724",
"html_url": "https://github.com/ollama/ollama/pull/3724",
"diff_url": "https://github.com/ollama/ollama/pull/3724.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3724.patch",
"merged_at": "2024-04-18T21:17:46"
}
|
This also converges the old sep `:` to the new sep `-`.
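A minimal sketch of the separator convergence described here, assuming a plain string rewrite; the real types/model digest parsing is more involved:
```go
// A minimal sketch of the separator convergence, assuming a plain string
// rewrite; the real types/model digest parsing is more involved.
package main

import (
	"fmt"
	"strings"
)

// normalizeDigest rewrites the former "type:hex" form to "type-hex",
// leaving already-normalized digests untouched.
func normalizeDigest(d string) string {
	return strings.Replace(d, ":", "-", 1)
}

func main() {
	fmt.Println(normalizeDigest("sha256:b6f248eff2d0")) // sha256-b6f248eff2d0
	fmt.Println(normalizeDigest("sha256-b6f248eff2d0")) // unchanged
}
```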
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3724/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4504
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4504/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4504/comments
|
https://api.github.com/repos/ollama/ollama/issues/4504/events
|
https://github.com/ollama/ollama/issues/4504
| 2,303,534,980
|
I_kwDOJ0Z1Ps6JTSeE
| 4,504
|
On https://www.ollama.com/library, add a sort filter by model strengths
|
{
"login": "arjunkrishna",
"id": 5271912,
"node_id": "MDQ6VXNlcjUyNzE5MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5271912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arjunkrishna",
"html_url": "https://github.com/arjunkrishna",
"followers_url": "https://api.github.com/users/arjunkrishna/followers",
"following_url": "https://api.github.com/users/arjunkrishna/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunkrishna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arjunkrishna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunkrishna/subscriptions",
"organizations_url": "https://api.github.com/users/arjunkrishna/orgs",
"repos_url": "https://api.github.com/users/arjunkrishna/repos",
"events_url": "https://api.github.com/users/arjunkrishna/events{/privacy}",
"received_events_url": "https://api.github.com/users/arjunkrishna/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-17T20:29:10
| 2024-05-17T20:29:10
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
On https://www.ollama.com/library, it would be great to have some additional categories where you can sort models by their strengths, based on various benchmarks. That way, novices like me can figure out which models are good at what right from the ollama webpage.
Thanks,
Arjun
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4504/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4504/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5192
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5192/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5192/comments
|
https://api.github.com/repos/ollama/ollama/issues/5192/events
|
https://github.com/ollama/ollama/pull/5192
| 2,364,863,406
|
PR_kwDOJ0Z1Ps5zGKOF
| 5,192
|
handle asymmetric embedding KVs
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-20T16:47:12
| 2024-06-20T17:46:25
| 2024-06-20T17:46:24
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5192",
"html_url": "https://github.com/ollama/ollama/pull/5192",
"diff_url": "https://github.com/ollama/ollama/pull/5192.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5192.patch",
"merged_at": "2024-06-20T17:46:24"
}
|
KV size assumed symmetric K and V embedding sizes, which isn't always the case, e.g. deepseek v2.
Smoke-tested memory usage against llama2, llama3, gemma, phi3, qwen2, and deepseek v2.
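For illustration, a rough per-token KV-cache estimate that sizes K and V independently rather than assuming they match; the names echo llama.cpp's log fields, not ollama's actual code:
```go
// Illustrative only: per-token KV cache bytes with K and V sized
// independently, as the asymmetric case (e.g. deepseek v2) requires.
// Names echo llama.cpp's log fields; this is not ollama's actual code.
package main

import "fmt"

func kvBytesPerToken(nLayer, nEmbdKGQA, nEmbdVGQA, bytesPerElem int) int {
	// Each layer stores one K slab and one V slab per token; summing the
	// two embedding widths handles symmetric and asymmetric models alike.
	return nLayer * (nEmbdKGQA + nEmbdVGQA) * bytesPerElem
}

func main() {
	// llama2-7B shapes: 32 layers, symmetric 4096/4096, f16 (2 bytes).
	perToken := kvBytesPerToken(32, 4096, 4096, 2)
	fmt.Println(perToken)                    // 524288 bytes per token
	fmt.Println(perToken * 8192 / (1 << 20)) // 4096 MiB at an 8192 context
}
```
As a sanity check, llama2-7B shapes at an 8192-token context give exactly the familiar 4096 MiB f16 KV buffer.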
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5192/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5838
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5838/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5838/comments
|
https://api.github.com/repos/ollama/ollama/issues/5838/events
|
https://github.com/ollama/ollama/issues/5838
| 2,421,764,768
|
I_kwDOJ0Z1Ps6QWTKg
| 5,838
|
ollama CORS check is case-sensitive
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 0
| 2024-07-22T02:24:06
| 2024-12-10T21:43:23
| 2024-12-10T21:43:23
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama uses `github.com/gin-contrib/cors` to check `Host` header for allowed origins (`OLLAMA_ORIGINS`). If the value of the `Host` is not all lowercase, the check fails.
```
$ curl -D - -s -H Host:localhost localhost:11434/api/version
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 22 Jul 2024 02:22:46 GMT
Content-Length: 19
{"version":"0.2.7"}
```
```
$ curl -D - -s -H Host:Localhost localhost:11434/api/version
HTTP/1.1 403 Forbidden
Date: Mon, 22 Jul 2024 02:22:53 GMT
Content-Length: 0
```
Adding `Localhost` to `OLLAMA_ORIGINS` doesn't work because the `cors` package normalizes all of the rules to lowercase.
This prevents clients that don't lowercase their `Host` header from sending completions (https://github.com/ollama/ollama/issues/5816).
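A minimal sketch of the case-insensitive comparison that would avoid this, assuming a simple allow-list check; `originAllowed` is an invented helper, not ollama's or gin-contrib/cors's actual API:
```go
// A minimal sketch of a case-insensitive allow-list check; "originAllowed"
// is an invented helper, not ollama's or gin-contrib/cors's actual API.
package main

import (
	"fmt"
	"strings"
)

func originAllowed(host string, allowed []string) bool {
	// Host names are case-insensitive per RFC 4343, so normalize both
	// sides before comparing.
	host = strings.ToLower(host)
	for _, a := range allowed {
		if strings.ToLower(a) == host {
			return true
		}
	}
	return false
}

func main() {
	allowed := []string{"localhost"}
	fmt.Println(originAllowed("Localhost", allowed)) // true once normalized
}
```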
### OS
Linux, Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.7
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5838/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6695
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6695/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6695/comments
|
https://api.github.com/repos/ollama/ollama/issues/6695/events
|
https://github.com/ollama/ollama/issues/6695
| 2,512,185,141
|
I_kwDOJ0Z1Ps6VvOc1
| 6,695
|
Q6_K is slower than Q8_0
|
{
"login": "napa3um",
"id": 665538,
"node_id": "MDQ6VXNlcjY2NTUzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/665538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/napa3um",
"html_url": "https://github.com/napa3um",
"followers_url": "https://api.github.com/users/napa3um/followers",
"following_url": "https://api.github.com/users/napa3um/following{/other_user}",
"gists_url": "https://api.github.com/users/napa3um/gists{/gist_id}",
"starred_url": "https://api.github.com/users/napa3um/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/napa3um/subscriptions",
"organizations_url": "https://api.github.com/users/napa3um/orgs",
"repos_url": "https://api.github.com/users/napa3um/repos",
"events_url": "https://api.github.com/users/napa3um/events{/privacy}",
"received_events_url": "https://api.github.com/users/napa3um/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-08T04:06:41
| 2024-12-02T22:00:37
| 2024-12-02T22:00:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
gemma2:9b-instruct-**q6_K** : gemma2:9b-instruct-**q8_0** = **21**t/s : **25**t/s
mistral-nemo:12b-instruct-2407-**q6_K** : mistral-nemo:12b-instruct-2407-**q8_0** = **17**t/s : **21**t/s
It used to be different.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.9
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6695/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/594
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/594/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/594/comments
|
https://api.github.com/repos/ollama/ollama/issues/594/events
|
https://github.com/ollama/ollama/pull/594
| 1,912,373,419
|
PR_kwDOJ0Z1Ps5bK89D
| 594
|
exit on unknown distro
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-25T22:30:03
| 2023-09-25T22:30:59
| 2023-09-25T22:30:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/594",
"html_url": "https://github.com/ollama/ollama/pull/594",
"diff_url": "https://github.com/ollama/ollama/pull/594.diff",
"patch_url": "https://github.com/ollama/ollama/pull/594.patch",
"merged_at": "2023-09-25T22:30:58"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/594/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8616
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8616/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8616/comments
|
https://api.github.com/repos/ollama/ollama/issues/8616/events
|
https://github.com/ollama/ollama/issues/8616
| 2,813,973,892
|
I_kwDOJ0Z1Ps6nudWE
| 8,616
|
Ollama: torch.OutOfMemoryError: CUDA out of memory
|
{
"login": "kennethwork101",
"id": 147571330,
"node_id": "U_kgDOCMvCgg",
"avatar_url": "https://avatars.githubusercontent.com/u/147571330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kennethwork101",
"html_url": "https://github.com/kennethwork101",
"followers_url": "https://api.github.com/users/kennethwork101/followers",
"following_url": "https://api.github.com/users/kennethwork101/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethwork101/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kennethwork101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethwork101/subscriptions",
"organizations_url": "https://api.github.com/users/kennethwork101/orgs",
"repos_url": "https://api.github.com/users/kennethwork101/repos",
"events_url": "https://api.github.com/users/kennethwork101/events{/privacy}",
"received_events_url": "https://api.github.com/users/kennethwork101/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-27T20:21:48
| 2025-01-27T20:21:48
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Running some tests using pytest with the following 6 models. What I find is that if I run all tests with one model before going on to the next model, the tests mostly work fine: 123/126 passed. But if I run each test against all 6 models sequentially and then go on to the next test, I see hangs or out-of-memory errors. Is this a known issue? I expect the order in which tests run against Ollama should not matter.
ollama version is 0.5.7
pytest 8.3.4
| NVIDIA-SMI 550.144.03 Driver Version: 550.144.03 CUDA Version: 12.4 |
| 0 NVIDIA GeForce RTX 4070 Ti Off | 00000000:01:00.0 On | N/A |
Linux kennethpc 6.8.0-51-generic #52-Ubuntu SMP PREEMPT_DYNAMIC Thu Dec 5 13:09:44 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
qwen2:latest
qwen2.5:latest
mistral:latest
llama3-groq-tool-use:latest
llama3.2:latest
llama3.2:latest
Here are some examples of the errors; sometimes I simply see hangs:
FAILED tests/_1_misc_test.py::test_t6_func[mistral:latest] - assert None is not None
FAILED tests/_1_misc_test.py::test_t6_func[llama3-groq-tool-use:latest] - assert None is not None
FAILED tests/_2_rag_test.py::test_t7_func[qwen2:latest-chroma] - assert 768 == 384
FAILED tests/_2_rag_test.py::test_t7_func[qwen2.5:latest-chroma] - torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacity of 11.72 GiB of which 41.50 MiB is free. Process 263255 has 3.40 GiB memory in use. Process 263532 has 6.09 GiB memory in use. In...
FAILED tests/_2_rag_test.py::test_t7_func[qwen2.5:latest-huggingface] - torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacity of 11.72 GiB of which 41.50 MiB is free. Process 263255 has 3.40 GiB memory in use. Process 263532 has 6.09 GiB memory in use. In...
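Not a confirmed fix, but one way to reduce cross-model memory pressure between test groups is to explicitly unload each model before moving on, using the documented `keep_alive: 0` behavior of `/api/generate`; a sketch, where `unloadModel` is an invented helper name:
```go
// Sketch of a test-cleanup helper; "unloadModel" is an invented name, but
// keep_alive: 0 is documented ollama behavior for unloading a model.
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func unloadModel(model string) error {
	// An empty generate request with keep_alive 0 asks the server to
	// unload the model immediately instead of keeping it resident.
	body := fmt.Sprintf(`{"model":%q,"keep_alive":0}`, model)
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", strings.NewReader(body))
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	// Call between test groups so models don't stack up in VRAM.
	_ = unloadModel("qwen2.5:latest")
}
```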
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8616/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4217
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4217/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4217/comments
|
https://api.github.com/repos/ollama/ollama/issues/4217/events
|
https://github.com/ollama/ollama/issues/4217
| 2,282,031,116
|
I_kwDOJ0Z1Ps6IBQgM
| 4,217
|
how to load adapter
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-07T00:51:35
| 2024-05-10T03:23:03
| 2024-05-07T16:43:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
How do I load an adapter?
The Modelfile is the following:
FROM ./sha256-b6f248eff2d0c4f85d2f6369a27d99fc75686d67314a0b5d35a93c5aee5dcb14
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
PARAMETER num_keep 24
PARAMETER num_ctx 1040000
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
ADAPTER ./adapter_model.safetensors
The error info is:
'taozhiyu@603e5f4a42f1 Llama-3-70B-Gradient-1048k-adapter % ollama create llama3:70b-instruct-1mb-q8_0 -f modelfile
transferring model data
creating model layer
creating template layer
creating adapter layer
Error: invalid file magic'
The adapter is from https://hf-mirror.com/cognitivecomputations/Llama-3-70B-Gradient-1048k-adapter.
Can anyone help with this issue?
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4217/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4659
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4659/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4659/comments
|
https://api.github.com/repos/ollama/ollama/issues/4659/events
|
https://github.com/ollama/ollama/issues/4659
| 2,318,675,815
|
I_kwDOJ0Z1Ps6KNC9n
| 4,659
|
no gpu detected with RTX 3060Ti
|
{
"login": "NoIDidntHackU",
"id": 112739711,
"node_id": "U_kgDOBrhFfw",
"avatar_url": "https://avatars.githubusercontent.com/u/112739711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NoIDidntHackU",
"html_url": "https://github.com/NoIDidntHackU",
"followers_url": "https://api.github.com/users/NoIDidntHackU/followers",
"following_url": "https://api.github.com/users/NoIDidntHackU/following{/other_user}",
"gists_url": "https://api.github.com/users/NoIDidntHackU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NoIDidntHackU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NoIDidntHackU/subscriptions",
"organizations_url": "https://api.github.com/users/NoIDidntHackU/orgs",
"repos_url": "https://api.github.com/users/NoIDidntHackU/repos",
"events_url": "https://api.github.com/users/NoIDidntHackU/events{/privacy}",
"received_events_url": "https://api.github.com/users/NoIDidntHackU/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-05-27T09:20:48
| 2024-05-28T10:38:05
| 2024-05-28T10:37:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have an RTX 3060 Ti, and when I ran "curl -fsSL https://ollama.com/install.sh | sh" in Ubuntu on WSL (using WSL for webUI), it installed fine, but at the end of the install it says this:
" >>> Install complete. Run "ollama" from the command line.
WARNING: No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode."
Any ideas on what to do to fix this?
*EDIT*
I previously (2 days ago) installed ollama, then uninstalled WSL entirely for another reason and reinstalled it, and now the issue is happening. I'm not sure if that was the cause, but I thought I should add this here. I can live without the GPU for now, but it is a bit annoying.
### OS
WSL2
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.38
|
{
"login": "NoIDidntHackU",
"id": 112739711,
"node_id": "U_kgDOBrhFfw",
"avatar_url": "https://avatars.githubusercontent.com/u/112739711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NoIDidntHackU",
"html_url": "https://github.com/NoIDidntHackU",
"followers_url": "https://api.github.com/users/NoIDidntHackU/followers",
"following_url": "https://api.github.com/users/NoIDidntHackU/following{/other_user}",
"gists_url": "https://api.github.com/users/NoIDidntHackU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NoIDidntHackU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NoIDidntHackU/subscriptions",
"organizations_url": "https://api.github.com/users/NoIDidntHackU/orgs",
"repos_url": "https://api.github.com/users/NoIDidntHackU/repos",
"events_url": "https://api.github.com/users/NoIDidntHackU/events{/privacy}",
"received_events_url": "https://api.github.com/users/NoIDidntHackU/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4659/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5893
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5893/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5893/comments
|
https://api.github.com/repos/ollama/ollama/issues/5893/events
|
https://github.com/ollama/ollama/pull/5893
| 2,426,203,697
|
PR_kwDOJ0Z1Ps52RPTz
| 5,893
|
Fix Embed Test Flakes
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-23T22:05:50
| 2024-07-24T18:15:48
| 2024-07-24T18:15:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5893",
"html_url": "https://github.com/ollama/ollama/pull/5893",
"diff_url": "https://github.com/ollama/ollama/pull/5893.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5893.patch",
"merged_at": "2024-07-24T18:15:46"
}
|
Different results on different taters, e.g.:
=== RUN TestAllMiniLMEmbeddings
2024/07/23 17:05:36 INFO server connection host=tater21 port=55426
2024/07/23 17:05:36 INFO checking status of model model=all-minilm
2024/07/23 17:05:36 INFO model already present model=all-minilm
embed_test.go:42: expected 0.06642947345972061, got 0.0664294660091400
--- FAIL: TestAllMiniLMEmbeddings (0.31s)
Also added a basic sanity check for /api/embeddings, which had no test before.
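The flake above comes from exact floating-point equality across machines; one plausible shape of a fix, shown only as an illustration and not the merged diff, compares within a tolerance:
```go
// Illustrative only, not the merged diff: compare embedding components
// within a tolerance so benign float drift across machines doesn't flake.
package main

import (
	"fmt"
	"math"
)

func almostEqual(a, b, eps float64) bool {
	return math.Abs(a-b) <= eps
}

func main() {
	expected := 0.06642947345972061
	got := 0.0664294660091400 // value from the failing run above
	fmt.Println(almostEqual(expected, got, 1e-6)) // true within tolerance
}
```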
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5893/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8029
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8029/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8029/comments
|
https://api.github.com/repos/ollama/ollama/issues/8029/events
|
https://github.com/ollama/ollama/pull/8029
| 2,730,798,743
|
PR_kwDOJ0Z1Ps6Evhpy
| 8,029
|
Prevent model thrashing from unset num_ctx
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-12-10T17:54:00
| 2025-01-03T05:26:28
| null |
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8029",
"html_url": "https://github.com/ollama/ollama/pull/8029",
"diff_url": "https://github.com/ollama/ollama/pull/8029.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8029.patch",
"merged_at": null
}
|
TLDR: a model shouldn't be evicted due to num_ctx change if the client doesn't care about context size.
Client A loads a model with a context window different to the default or the value configured in the Modelfile:
```console
$ curl localhost:11434/api/generate -d '{"model":"llama3.2","options":{"num_ctx":65536}}'
$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
llama3.2:latest a80c4f17acd5 13 GB 100% GPU Forever
```
Client B does a completion but doesn't specify a context window, causing the default value of 2048 to be used, resulting in eviction and immediate reload of the model.
```console
$ curl localhost:11434/api/generate -d '{"model":"llama3.2"}'
$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
llama3.2:latest a80c4f17acd5 3.1 GB 100% GPU Forever
```
Client A sends another completion with the large context causing another eviction and reload.
```console
$ curl localhost:11434/api/generate -d '{"model":"llama3.2","options":{"num_ctx":65536}}'
$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
llama3.2:latest a80c4f17acd5 13 GB 100% GPU Forever
```
If client B is not concerned about the context window, it shouldn't cause the eviction of an already loaded model. This is particularly noticeable when sharing a model between ollama and OpenAI endpoints - since the OpenAI endpoint can't set a context window, a model loaded via the ollama endpoint with a custom context window gets evicted by the next OpenAI request.
Thrashing can also occur when a client makes secondary completions after a primary completion, eg open-webui's auto-complete feature (see https://github.com/ollama/ollama/issues/7919#issuecomment-2560465774), or when a model is used for both completion and embedding (https://github.com/ollama/ollama/issues/6148#issuecomment-2568402497).
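A hypothetical sketch of the guard this PR describes; the types and names below are invented for illustration, not ollama's actual scheduler code:
```go
// Hypothetical sketch of the guard this PR describes (names are invented,
// not ollama's scheduler types): only treat num_ctx as a reason to reload
// when the client explicitly set it.
package main

import "fmt"

type request struct {
	model  string
	numCtx *int // nil means the client did not ask for a context size
}

type runner struct {
	model  string
	numCtx int
}

func needsReload(r *runner, req request) bool {
	if r == nil || r.model != req.model {
		return true
	}
	// Client didn't care about context size: keep the loaded runner.
	if req.numCtx == nil {
		return false
	}
	return *req.numCtx != r.numCtx
}

func main() {
	loaded := &runner{model: "llama3.2", numCtx: 65536}
	fmt.Println(needsReload(loaded, request{model: "llama3.2"})) // false: no eviction
}
```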
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8029/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8029/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3553
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3553/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3553/comments
|
https://api.github.com/repos/ollama/ollama/issues/3553/events
|
https://github.com/ollama/ollama/issues/3553
| 2,233,054,415
|
I_kwDOJ0Z1Ps6FGbTP
| 3,553
|
Embedding endpoint not available on Windows.
|
{
"login": "elblogbruno",
"id": 10481058,
"node_id": "MDQ6VXNlcjEwNDgxMDU4",
"avatar_url": "https://avatars.githubusercontent.com/u/10481058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elblogbruno",
"html_url": "https://github.com/elblogbruno",
"followers_url": "https://api.github.com/users/elblogbruno/followers",
"following_url": "https://api.github.com/users/elblogbruno/following{/other_user}",
"gists_url": "https://api.github.com/users/elblogbruno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elblogbruno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elblogbruno/subscriptions",
"organizations_url": "https://api.github.com/users/elblogbruno/orgs",
"repos_url": "https://api.github.com/users/elblogbruno/repos",
"events_url": "https://api.github.com/users/elblogbruno/events{/privacy}",
"received_events_url": "https://api.github.com/users/elblogbruno/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-09T09:51:12
| 2024-08-15T16:37:10
| 2024-04-09T10:09:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I installed the latest Windows version of ollama, v0.1.31, and I can't seem to use the new embedding functionality.
For example, this URL http://localhost:11434/api/embeddings gives me a 404 not found.
The above exception was the direct cause of the following exception:
```
Traceback (most recent call last):
File "d:\Desktop\Proyectos\OllamaPi\langchain-python-rag-websummary\play_store_scrapping\embeddings.py", line 19, in <module>
response = ollama.embeddings(model="mxbai-embed-large", prompt=d)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\elblo\AppData\Local\Programs\Python\Python312\Lib\site-packages\ollama\_client.py", line 183, in embeddings
return self._request(
^^^^^^^^^^^^^^
File "C:\Users\elblo\AppData\Local\Programs\Python\Python312\Lib\site-packages\ollama\_client.py", line 53, in _request
response = self._client.request(method, url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\elblo\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 814, in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\elblo\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 901, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\elblo\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 929, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\elblo\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 966, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\elblo\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 1002, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\elblo\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_transports\default.py", line 227, in handle_request
with map_httpcore_exceptions():
File "C:\Users\elblo\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 158, in __exit__
self.gen.throw(value)
File "C:\Users\elblo\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_transports\default.py", line 83, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [WinError 10049] La dirección solicitada no es válida en este contexto
```
### What did you expect to see?
_No response_
### Steps to reproduce
Access http://localhost:11434/api/embeddings
or run sample code:
```
import ollama
import chromadb # ChromaDB is a vector embedding database
documents = [
"Llamas are members of the camelid family meaning they're pretty closely related to vicuñas and camels",
"Llamas were first domesticated and used as pack animals 4,000 to 5,000 years ago in the Peruvian highlands",
"Llamas can grow as much as 6 feet tall though the average llama between 5 feet 6 inches and 5 feet 9 inches tall",
"Llamas weigh between 280 and 450 pounds and can carry 25 to 30 percent of their body weight",
"Llamas are vegetarians and have very efficient digestive systems",
"Llamas live to be about 20 years old, though some only live for 15 years and others live to be 30 years old",
]
client = chromadb.Client()
collection = client.create_collection(name="docs")
# store each document in a vector embedding database
for i, d in enumerate(documents):
response = ollama.embeddings(model="mxbai-embed-large", prompt=d)
embedding = response["embedding"]
collection.add(
ids=[str(i)],
embeddings=[embedding],
documents=[d]
)
# an example prompt
prompt = "What animals are llamas related to?"
# generate an embedding for the prompt and retrieve the most relevant doc
response = ollama.embeddings(
prompt=prompt,
model="mxbai-embed-large"
)
results = collection.query(
query_embeddings=[response["embedding"]],
n_results=1
)
data = results['documents'][0][0]
# generate a response combining the prompt and data we retrieved in step 2
output = ollama.generate(
model="llama2",
prompt=f"Using this data: {data}. Respond to this prompt: {prompt}"
)
print(output['response'])
```
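To isolate whether the failure comes from the server route or the Python client library, it may help to call the endpoint directly. A minimal sketch, assuming a local server and that `mxbai-embed-large` has already been pulled:
```
import json
import urllib.request

# Direct POST to the embeddings endpoint, bypassing the ollama client.
req = urllib.request.Request(
    "http://localhost:11434/api/embeddings",
    data=json.dumps({"model": "mxbai-embed-large", "prompt": "hello"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    embedding = json.load(resp)["embedding"]
    print(len(embedding), "dimensions")
```
If this direct call also fails, the problem is the server or its address, not the `ollama` Python package.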
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
x86
### Platform
_No response_
### Ollama version
0.1.31
### GPU
Nvidia
### GPU info
NVIDIA GeForce GTX 1060 6GB
### CPU
Intel
### Other software
_No response_
|
{
"login": "elblogbruno",
"id": 10481058,
"node_id": "MDQ6VXNlcjEwNDgxMDU4",
"avatar_url": "https://avatars.githubusercontent.com/u/10481058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elblogbruno",
"html_url": "https://github.com/elblogbruno",
"followers_url": "https://api.github.com/users/elblogbruno/followers",
"following_url": "https://api.github.com/users/elblogbruno/following{/other_user}",
"gists_url": "https://api.github.com/users/elblogbruno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elblogbruno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elblogbruno/subscriptions",
"organizations_url": "https://api.github.com/users/elblogbruno/orgs",
"repos_url": "https://api.github.com/users/elblogbruno/repos",
"events_url": "https://api.github.com/users/elblogbruno/events{/privacy}",
"received_events_url": "https://api.github.com/users/elblogbruno/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3553/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7979
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7979/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7979/comments
|
https://api.github.com/repos/ollama/ollama/issues/7979/events
|
https://github.com/ollama/ollama/pull/7979
| 2,724,003,265
|
PR_kwDOJ0Z1Ps6EYMhx
| 7,979
|
bugfix: "null" value for format
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-06T22:02:27
| 2024-12-11T06:07:51
| 2024-12-06T22:13:16
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7979",
"html_url": "https://github.com/ollama/ollama/pull/7979",
"diff_url": "https://github.com/ollama/ollama/pull/7979.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7979.patch",
"merged_at": "2024-12-06T22:13:16"
}
|
Fixes https://github.com/ollama/ollama/issues/7977
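For context (an inference from the PR title and the linked issue, not stated in this PR body): some clients serialize an unset `format` field as a JSON `null`, and requests of that shape were apparently rejected before this fix. A minimal sketch of such a request against a local server; the endpoint and field names are real, but that this exact shape triggered the bug is an assumption:
```
import json
import urllib.request

# A chat request whose "format" field serializes to JSON null.
body = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "hi"}],
    "format": None,  # json.dumps renders this as "format": null
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["message"]["content"])
```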
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7979/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3235
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3235/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3235/comments
|
https://api.github.com/repos/ollama/ollama/issues/3235/events
|
https://github.com/ollama/ollama/issues/3235
| 2,194,117,935
|
I_kwDOJ0Z1Ps6Cx5Uv
| 3,235
|
Cannot install on Fedora 39 Silverblue: error: Packages not found: ./ollama-linux-amd64
|
{
"login": "jkemp814",
"id": 12059343,
"node_id": "MDQ6VXNlcjEyMDU5MzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/12059343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jkemp814",
"html_url": "https://github.com/jkemp814",
"followers_url": "https://api.github.com/users/jkemp814/followers",
"following_url": "https://api.github.com/users/jkemp814/following{/other_user}",
"gists_url": "https://api.github.com/users/jkemp814/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jkemp814/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jkemp814/subscriptions",
"organizations_url": "https://api.github.com/users/jkemp814/orgs",
"repos_url": "https://api.github.com/users/jkemp814/repos",
"events_url": "https://api.github.com/users/jkemp814/events{/privacy}",
"received_events_url": "https://api.github.com/users/jkemp814/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 13
| 2024-03-19T06:11:36
| 2024-12-05T20:38:14
| 2024-03-21T14:10:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The asset from the releases page will not install with `rpm-ostree install ./ollama-linux-amd64`.
Also, when using the install script (`curl -fsSL https://ollama.com/install.sh | sh`), it does not create the `ollama` folder under `/usr/share`.
When installing inside a toolbox, it cannot find the GPU.
### What did you expect to see?
The curl install script should create the user, group, and folder.
If installing via `rpm-ostree install`, the asset should carry package metadata.
### Steps to reproduce
Try to install the release-page asset with `rpm-ostree install`,
or
run the curl install script and notice that `/usr/share/ollama` does not get created.
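The likely explanation (an assumption, not confirmed in this report) is that `ollama-linux-amd64` is a raw executable rather than an RPM, and `rpm-ostree` only accepts RPM packages. A quick sketch that inspects the downloaded file's magic bytes to tell the two apart:
```
# Inspect the first bytes of the downloaded asset to see whether it is
# an RPM package or a raw ELF executable.
with open("ollama-linux-amd64", "rb") as f:
    magic = f.read(4)

if magic == b"\xed\xab\xee\xdb":   # RPM lead magic
    print("RPM package: rpm-ostree can install this")
elif magic == b"\x7fELF":          # ELF binary
    print("Raw Linux executable: copy it onto the PATH manually instead")
else:
    print("Unrecognized file type:", magic)
```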
### Are there any recent changes that introduced the issue?
No first time trying to install.
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.29
### GPU
Nvidia
### GPU info
NVIDIA-SMI 550.54.14
Driver Version: 550.54.14
CUDA Version: 12.4
### CPU
Intel
### Other software
Fedora Linux 39 Silverblue
GNOME 45.5-1
Alacritty 0.13.1
Open WebUI 0.1.113
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3235/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3235/timeline
| null |
completed
| false
|