| column | dtype | value summary |
|---|---|---|
| url | string | lengths 51–54 |
| repository_url | string | 1 class |
| labels_url | string | lengths 65–68 |
| comments_url | string | lengths 60–63 |
| events_url | string | lengths 58–61 |
| html_url | string | lengths 39–44 |
| id | int64 | 1.78B–2.82B |
| node_id | string | lengths 18–19 |
| number | int64 | 1–8.69k |
| title | string | lengths 1–382 |
| user | dict | |
| labels | list | lengths 0–5 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–2 |
| milestone | null | |
| comments | int64 | 0–323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 classes |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 2–118k, nullable (⌀) |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 60–63 |
| performed_via_github_app | null | |
| state_reason | string | 4 classes |
| is_pull_request | bool | 2 classes |

---

**#459 · generate binary dependencies based on `GOARCH` on macos**

pull request · closed · locked: false · draft: false
- url: https://api.github.com/repos/ollama/ollama/issues/459 · html_url: https://github.com/ollama/ollama/pull/459
- id: 1,878,857,374 · node_id: PR_kwDOJ0Z1Ps5ZaZOq
- user: jmorganca · author_association: MEMBER
- labels: none · assignee: none · milestone: none · comments: 1 · sub_issues: 0/0
- created_at: 2023-09-02T21:54:52 · updated_at: 2023-09-05T16:54:00 · closed_at: 2023-09-05T16:53:58
- pull_request: https://api.github.com/repos/ollama/ollama/pulls/459 · merged_at: 2023-09-05T16:53:58

body:
This will allow building a universal binary (or cross compiling for `amd64`) on `arm64` Macs:
```
% GOARCH=amd64 go generate ./...
% GOARCH=amd64 go build .
% file ./ollama
./ollama: Mach-O 64-bit executable x86_64
```
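
A note on the universal-binary case the PR mentions: the two single-architecture builds can be merged with macOS's `lipo` tool. The commands below are an illustrative sketch; the `lipo` merge step is this note's assumption, not part of the PR:
```
# illustrative sketch — the lipo merge step is not from the PR
% GOARCH=arm64 go generate ./... && GOARCH=arm64 go build -o ollama-arm64 .
% GOARCH=amd64 go generate ./... && GOARCH=amd64 go build -o ollama-amd64 .
% lipo -create -output ollama ollama-arm64 ollama-amd64
% file ./ollama   # reports a Mach-O universal binary with 2 architectures
```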

closed_by: jmorganca · reactions: 0 · state_reason: null · is_pull_request: true

---

**#956 · docs: clarify and clean up API docs**

pull request · closed · locked: false · draft: false
- url: https://api.github.com/repos/ollama/ollama/issues/956 · html_url: https://github.com/ollama/ollama/pull/956
- id: 1,971,226,406 · node_id: PR_kwDOJ0Z1Ps5eRgt9
- user: technovangelist · author_association: CONTRIBUTOR
- labels: none · assignee: none · milestone: none · comments: 0 · sub_issues: 0/0
- created_at: 2023-10-31T20:12:17 · updated_at: 2023-11-01T04:43:12 · closed_at: 2023-11-01T04:43:11
- pull_request: https://api.github.com/repos/ollama/ollama/pulls/956 · merged_at: 2023-11-01T04:43:11

body: null

closed_by: technovangelist · reactions: 0 · state_reason: null · is_pull_request: true

---

**#3407 · Ollama errors when using json mode with `command-r` model**

issue · open · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/3407 · html_url: https://github.com/ollama/ollama/issues/3407
- id: 2,215,453,262 · node_id: I_kwDOJ0Z1Ps6EDSJO
- user: jmorganca · author_association: MEMBER
- labels: bug · assignee: none · milestone: none · comments: 0 · sub_issues: 0/0
- created_at: 2024-03-29T14:11:01 · updated_at: 2024-04-19T15:41:37 · closed_at: null

body:
When using json mode with command-r, Ollama will hang.
https://github.com/ggerganov/llama.cpp/issues/6112
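
A minimal request that exercises JSON mode, sketched against Ollama's documented `/api/generate` request shape (the prompt text is illustrative, not from the issue):
```
# illustrative reproduction sketch; prompt text is made up
% curl http://localhost:11434/api/generate -d '{
  "model": "command-r",
  "prompt": "Respond with a JSON object describing the weather.",
  "format": "json",
  "stream": false
}'
```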

closed_by: null · reactions: 1 (+1: 1) · state_reason: null · is_pull_request: false

---

**#1456 · Wrong font in the model sorting dropdown menu in the model page for Safari**

issue · closed · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/1456 · html_url: https://github.com/ollama/ollama/issues/1456
- id: 2,034,460,321 · node_id: I_kwDOJ0Z1Ps55Q2ah
- user: ggetv · author_association: NONE
- labels: bug, ollama.com · assignee: none · milestone: none · comments: 3 · sub_issues: 0/0
- created_at: 2023-12-10T17:25:06 · updated_at: 2024-04-08T21:46:22 · closed_at: 2024-04-08T21:46:22

body:
Just noticed a small issue on the model page (https://ollama.ai/library?sort=newest): in Safari it shows the wrong font; other browsers (Chrome, Firefox) do not have this issue. I am using Safari Version 17.1 (19616.2.9.11.7).
<img width="1344" alt="ollama-font-issue-safari" src="https://github.com/jmorganca/ollama/assets/36490494/28c46a5b-a3c5-4a39-ae32-1a94898b2edc">

closed_by: hoyyeva · reactions: 2 (+1: 2) · state_reason: completed · is_pull_request: false

---

**#6490 · WISPER**

issue · closed · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/6490 · html_url: https://github.com/ollama/ollama/issues/6490
- id: 2,484,748,005 · node_id: I_kwDOJ0Z1Ps6UGj7l
- user: DewiarQR · author_association: NONE
- labels: model request · assignee: none · milestone: none · comments: 2 · sub_issues: 0/0
- created_at: 2024-08-24T17:41:33 · updated_at: 2024-08-27T21:23:24 · closed_at: 2024-08-27T21:23:24

body:
Hello. You have both regular LLM models and ones that support vision. What remains is to add models for transcription and voice synthesis... then it would be possible to solve just about any problem within your system.
Can we expect models such as this one:
https://huggingface.co/Systran/faster-distil-whisper-large-v3
Or is it already possible to install one manually now?

closed_by: dhiltgen · reactions: 1 (+1: 1) · state_reason: completed · is_pull_request: false

---

**#1853 · phi not working**

issue · closed · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/1853 · html_url: https://github.com/ollama/ollama/issues/1853
- id: 2,070,234,198 · node_id: I_kwDOJ0Z1Ps57ZURW
- user: morandalex · author_association: NONE
- labels: none · assignee: none · milestone: none · comments: 8 · sub_issues: 0/0
- created_at: 2024-01-08T11:14:02 · updated_at: 2024-03-11T19:33:29 · closed_at: 2024-03-11T19:33:29

body:
```
ollama run phi
>>> hello
Hello, how can I assist you today?
>>> create a js function
Error: Post "http://127.0.0.1:11434/api/generate": EOF
```
mistral is working on my machine, but phi is not. What is happening?

closed_by: pdevine · reactions: 0 · state_reason: completed · is_pull_request: false

---

**#4532 · codegemma 2b v1.1 q8 and q5_1 have incorrect model names**

issue · closed · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/4532 · html_url: https://github.com/ollama/ollama/issues/4532
- id: 2,305,022,439 · node_id: I_kwDOJ0Z1Ps6JY9nn
- user: mroark1m · author_association: NONE
- labels: bug · assignee: none · milestone: none · comments: 2 · sub_issues: 0/0
- created_at: 2024-05-20T03:59:44 · updated_at: 2024-05-21T16:18:23 · closed_at: 2024-05-21T16:18:23

body:
### What is the issue?
At these two URLs:
https://ollama.com/library/codegemma:2b-code-v1.1-q8_0, I see "quantization Q5_1" in the list of files
https://ollama.com/library/codegemma:2b-code-v1.1-q5_1, I see "quantization Q8_1" in the list of files


The sizes seem correct; I did not try to download the models.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
website

closed_by: mroark1m · reactions: 0 · state_reason: completed · is_pull_request: false

---

**#7567 · pull error - 104.21.75.227:443: wsarecv: An existing connection was forcibly closed by the remote host**

issue · closed · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/7567 · html_url: https://github.com/ollama/ollama/issues/7567
- id: 2,642,998,002 · node_id: I_kwDOJ0Z1Ps6diPLy
- user: Assassinator-567 · author_association: NONE
- labels: bug, windows, needs more info, networking · assignee: none · milestone: none · comments: 2 · sub_issues: 0/0
- created_at: 2024-11-08T06:21:30 · updated_at: 2024-12-23T07:50:52 · closed_at: 2024-12-23T07:50:52

body:
### What is the issue?
pulling manifest
pulling 2049f5674b1e... 100% ▕████████████████████████████████████████████████████████▏ 9.0 GB
Error: Head "https://registry.ollama.ai/v2/library/qwen2.5/blobs/sha256:66b9ea09bd5b7099cbb4fc820f31b575c0366fa439b08245566692c6784e281e": read tcp 192.168.1.7:56036->104.21.75.227:443: wsarecv: An existing connection was forcibly closed by the remote host.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.0

closed_by: rick-github · reactions: 0 · state_reason: not_planned · is_pull_request: false

---

**#2189 · Error: Post "http://127.0.0.1:11434/api/generate": EOF**

issue · closed · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/2189 · html_url: https://github.com/ollama/ollama/issues/2189
- id: 2,100,588,594 · node_id: I_kwDOJ0Z1Ps59NHAy
- user: blackandcold · author_association: NONE
- labels: none · assignee: none · milestone: none · comments: 3 · sub_issues: 0/0
- created_at: 2024-01-25T15:05:07 · updated_at: 2024-01-25T15:14:59 · closed_at: 2024-01-25T15:14:58

body:
Installed by script and not AUR; it previously ran fine, but for the past two weeks I can't run it anymore. macOS 0.1.20 works fine.
> ollama run llama2:latest
> Error: Post "http://127.0.0.1:11434/api/generate": EOF
System:
OS: EndeavourOS Linux x86_64
Kernel: 6.7.0-arch3-1
Shell: zsh 5.9
CPU: AMD Ryzen 9 5900X (24) @ 3.700GHz
GPU: AMD ATI Radeon RX 6800 16GB
Memory: 13639MiB / 128714MiB
So there is some free() action on a null pointer? :)
> Jän 25 15:58:43 OS ollama[192151]: 2024/01/25 15:58:43 gpu.go:104: Radeon GPU detected
> Jän 25 15:59:26 OS ollama[192151]: [GIN] 2024/01/25 - 15:59:26 | 200 | 33.771µs | 127.0.0.1 | HEAD ">
> Jän 25 15:59:26 OS ollama[192151]: [GIN] 2024/01/25 - 15:59:26 | 200 | 2.403459ms | 127.0.0.1 | POST ">
> Jän 25 15:59:26 OS ollama[192151]: [GIN] 2024/01/25 - 15:59:26 | 200 | 771.286µs | 127.0.0.1 | POST ">
> Jän 25 15:59:27 OS ollama[192151]: 2024/01/25 15:59:27 shim_ext_server_linux.go:24: Updating PATH to /usr/local/sbi>
> Jän 25 15:59:27 OS ollama[192151]: 2024/01/25 15:59:27 shim_ext_server.go:92: Loading Dynamic Shim llm server: /tmp>
> Jän 25 15:59:27 OS ollama[192151]: 2024/01/25 15:59:27 ext_server_common.go:136: Initializing internal llama server
> **Jän 25 15:59:27 OS ollama[192151]: free(): invalid pointer**
> Jän 25 15:59:27 OS systemd[1]: ollama.service: Main process exited, code=dumped, status=6/ABRT
> Jän 25 15:59:27 OS systemd[1]: ollama.service: Failed with result 'core-dump'.
> Jän 25 15:59:27 OS systemd[1]: ollama.service: Consumed 1.181s CPU time, 406.9M memory peak, 0B memory swap peak.
> Jän 25 15:59:31 OS systemd[1]: ollama.service: Scheduled restart job, restart counter is at 2.
> Jän 25 15:59:31 OS systemd[1]: Started Ollama Service.
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 images.go:808: total blobs: 24
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 images.go:815: total unused blobs removed: 0
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 routes.go:930: Listening on 127.0.0.1:11434 (version 0.1.20)
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 shim_ext_server.go:142: Dynamic LLM variants [cuda rocm]
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:88: Detecting GPU type
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:203: Searching for GPU management library libnvidia-m>
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:248: Discovered GPU libraries: [/usr/lib/libnvidia-ml>
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:259: Unable to load CUDA management library /usr/lib/>
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:259: Unable to load CUDA management library /usr/lib6>
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:203: Searching for GPU management library librocm_smi>
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:248: Discovered GPU libraries: [/opt/rocm/lib/librocm>
> Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:104: Radeon GPU detected

closed_by: blackandcold · reactions: 0 · state_reason: completed · is_pull_request: false

---

**#149 · error on main: the file name is invalid**

issue · closed · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/149 · html_url: https://github.com/ollama/ollama/issues/149
- id: 1,814,816,048 · node_id: I_kwDOJ0Z1Ps5sK-Uw
- user: nathanleclaire · author_association: NONE
- labels: bug · assignee: none · milestone: none · comments: 4 · sub_issues: 0/0
- created_at: 2023-07-20T21:21:13 · updated_at: 2023-07-20T21:33:41 · closed_at: 2023-07-20T21:30:15

body:
After pull and build, from the client:
```
$ ollama run llama2
>>> hi
Error: Post "http://127.0.0.1:11434/api/generate": EOF
```
on server side:
```
llama_new_context_with_model: kv self size = 1024.00 MB
ggml_metal_init: allocating
ggml_metal_init: using MPS
ggml_metal_init: loading '(null)'
ggml_metal_init: error: Error Domain=NSCocoaErrorDomain Code=258 "The file name is invalid."
```

closed_by: nathanleclaire · reactions: 0 · state_reason: completed · is_pull_request: false

---

**#4136 · [Feature] Rapid Modelfile Updates**

issue · open · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/4136 · html_url: https://github.com/ollama/ollama/issues/4136
- id: 2,278,289,915 · node_id: I_kwDOJ0Z1Ps6Hy_H7
- user: Arcitec · author_association: NONE
- labels: feature request · assignee: none · milestone: none · comments: 2 · sub_issues: 0/0
- created_at: 2024-05-03T19:22:17 · updated_at: 2024-05-28T04:08:48 · closed_at: null

body:
Ollama is an absolutely brilliant project. Thank you everyone involved in creating it!
I've been working on local models, and noticed one weakness of Ollama. The initial import obviously has to take some time to convert the GGUF model weights into Ollama's native format. But after that, I need to tweak parameters, stop-words, temperature, template, etc, to perfect the model.
This is where Ollama falls apart a little bit.
First of all, I spent an hour scouring the documentation about how to update a locally created model. Finally, I figured out that if you want to update a locally created model, you have to run `ollama create` with the *exact same* parameters again.
It would be great to add a note about that to these three locations:
- https://github.com/ollama/ollama?tab=readme-ov-file#create-a-model
- https://github.com/ollama/ollama/blob/main/docs/modelfile.md
- https://github.com/ollama/ollama/blob/main/docs/import.md
Anyway, when I finally figured out that you have to "create" the model again, I noticed that it's taking a very long time. 23 seconds to be precise. It reads the model files on disk again, converts them again, hashes them, and finally figures out that the hashes match the on-disk data (`using already created layer sha256:...`), and then it *finally* at long last updates the stored model to match the latest Modelfile contents.
I have two potential ideas for improvements.
- Option 1: Add a flag to `ollama create` with the word `--keep-weights` or similar. This would just do an instantaneous update of the Modelfile parameters, while immediately reusing the latest, previously converted weights.
- Option 2: Automatically detect changed weights by tracking the file modification timestamp of local model files, and skip the conversion when the Modelfile's `FROM` file timestamp matches the previously imported model metadata. This would be a very convenient solution and would avoid user error (such as accidentally importing a Modelfile which belongs to another model into a mismatched model name). Ollama could simply alert the user that it detected no changes to the weights, and note that they can run again with `--convert-weights` if they want to force weight conversion anyway (someone might want that?).
Anyway, I really hope that this can be improved in some way, because waiting half a minute after every tiny Modelfile parameter tweak, just to check the changes, is a very tedious process. Other than this, Ollama has been absolutely perfect! :)
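
For context, the edit-and-reapply loop being described looks roughly like this; the Modelfile contents and model name are illustrative, not taken from the issue:
```
# illustrative Modelfile and model name
% cat Modelfile
FROM ./my-model.gguf
PARAMETER temperature 0.7
PARAMETER stop "<|im_end|>"
% ollama create my-model -f Modelfile
```
On the issue's numbers, each re-run of `create` walks the full read/convert/hash path and costs about 23 seconds, even when only a `PARAMETER` line changed.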

closed_by: null · reactions: 2 (+1: 2) · state_reason: null · is_pull_request: false

---

**#6998 · Is your llama3.2 models working?**

issue · closed · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/6998 · html_url: https://github.com/ollama/ollama/issues/6998
- id: 2,552,293,931 · node_id: I_kwDOJ0Z1Ps6YIOor
- user: dhandhalyabhavik · author_association: NONE
- labels: question · assignee: dhiltgen · assignees: dhiltgen · milestone: none · comments: 1 · sub_issues: 0/0
- created_at: 2024-09-27T08:07:36 · updated_at: 2024-09-28T23:03:55 · closed_at: 2024-09-28T23:03:35

body:
### What is the issue?
llm_load_tensors: ggml ctx size = 0.13 MiB
llama_model_load: error loading model: done_getting_tensors: wrong number of tensors; expected 255, got 254
llama_load_model_from_file: exception loading model
terminate called after throwing an instance of 'std::runtime_error'
what(): done_getting_tensors: wrong number of tensors; expected 255, got 254
[GIN] 2024/09/27 - 08:05:40 | 404 | 331.549µs | 10.190.167.113 | POST "/api/generate"
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
0.1.34

closed_by: dhiltgen · reactions: 0 · state_reason: completed · is_pull_request: false

---

**#2339 · `/api/generate` hangs after about 100 requests**

issue · closed · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/2339 · html_url: https://github.com/ollama/ollama/issues/2339
- id: 2,116,670,213 · node_id: I_kwDOJ0Z1Ps5-KdMF
- user: jmorganca · author_association: MEMBER
- labels: bug · assignee: none · milestone: none · comments: 2 · sub_issues: 0/0
- created_at: 2024-02-03T20:36:58 · updated_at: 2024-02-27T13:40:00 · closed_at: 2024-02-12T16:10:17

body: null

closed_by: jmorganca · reactions: 8 (+1: 8) · state_reason: completed · is_pull_request: false

---

**#3388 · Stanford Alpaca**

issue · open · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/3388 · html_url: https://github.com/ollama/ollama/issues/3388
- id: 2,213,425,294 · node_id: I_kwDOJ0Z1Ps6D7jCO
- user: xvbingbing · author_association: NONE
- labels: none · assignee: none · milestone: none · comments: 0 · sub_issues: 0/0
- created_at: 2024-03-28T14:46:34 · updated_at: 2024-03-28T14:46:34 · closed_at: null

body:
### What model would you like?
Can the Alpaca model be added? Thank you so much!!
https://github.com/tatsu-lab/stanford_alpaca

closed_by: null · reactions: 0 · state_reason: null · is_pull_request: false

---

**#2854 · Starting Ollama a second time on Windows 11 creates another instance**

issue · open · locked: false
- url: https://api.github.com/repos/ollama/ollama/issues/2854 · html_url: https://github.com/ollama/ollama/issues/2854
- id: 2,162,653,469 · node_id: I_kwDOJ0Z1Ps6A53kd
- user: jmorganca · author_association: MEMBER
- labels: bug, windows · assignee: dhiltgen · assignees: dhiltgen · milestone: none · comments: 3 · sub_issues: 0/0
- created_at: 2024-03-01T05:44:34 · updated_at: 2024-09-24T15:53:08 · closed_at: null

body:

|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2854/timeline
| null |
reopened
| false
|
https://api.github.com/repos/ollama/ollama/issues/8121
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8121/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8121/comments
|
https://api.github.com/repos/ollama/ollama/issues/8121/events
|
https://github.com/ollama/ollama/pull/8121
| 2,743,108,550
|
PR_kwDOJ0Z1Ps6FZUoI
| 8,121
|
cuda: adjust variant based on detected runners
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2024-12-16T18:35:34
| 2025-01-07T08:47:58
| null |
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8121",
"html_url": "https://github.com/ollama/ollama/pull/8121",
"diff_url": "https://github.com/ollama/ollama/pull/8121.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8121.patch",
"merged_at": null
}
|
When building from source, or using downstream packaging systems, multiple versions of the cuda runners may not be present. This adjusts the discovery logic to only use versioned variants if they are detected at runtime. It also adds a new warning message to the log if no cuda runners are present but cuda GPUs are detected.
Related to #8089
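As a side note for anyone debugging runner selection by hand: the "Dynamic LLM libraries [...]" line in the server log lists which variants a given build actually shipped, and the `OLLAMA_LLM_LIBRARY` environment variable can pin one explicitly. A minimal sketch (the variant name below is illustrative; use one from your own log line):
```
# pin the runner variant explicitly; "cuda_v11" is an example only
OLLAMA_LLM_LIBRARY=cuda_v11 ollama serve
```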
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8121/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6360
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6360/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6360/comments
|
https://api.github.com/repos/ollama/ollama/issues/6360/events
|
https://github.com/ollama/ollama/issues/6360
| 2,465,916,058
|
I_kwDOJ0Z1Ps6S-uSa
| 6,360
|
Detected as a virus by windows defender during/after update
|
{
"login": "mcDandy",
"id": 18588943,
"node_id": "MDQ6VXNlcjE4NTg4OTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/18588943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcDandy",
"html_url": "https://github.com/mcDandy",
"followers_url": "https://api.github.com/users/mcDandy/followers",
"following_url": "https://api.github.com/users/mcDandy/following{/other_user}",
"gists_url": "https://api.github.com/users/mcDandy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcDandy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcDandy/subscriptions",
"organizations_url": "https://api.github.com/users/mcDandy/orgs",
"repos_url": "https://api.github.com/users/mcDandy/repos",
"events_url": "https://api.github.com/users/mcDandy/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcDandy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-14T13:55:40
| 2024-08-14T14:02:20
| 2024-08-14T14:02:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Windows Defender thinks it is some sort of command and control malware.


### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.5
|
{
"login": "mcDandy",
"id": 18588943,
"node_id": "MDQ6VXNlcjE4NTg4OTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/18588943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcDandy",
"html_url": "https://github.com/mcDandy",
"followers_url": "https://api.github.com/users/mcDandy/followers",
"following_url": "https://api.github.com/users/mcDandy/following{/other_user}",
"gists_url": "https://api.github.com/users/mcDandy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcDandy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcDandy/subscriptions",
"organizations_url": "https://api.github.com/users/mcDandy/orgs",
"repos_url": "https://api.github.com/users/mcDandy/repos",
"events_url": "https://api.github.com/users/mcDandy/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcDandy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6360/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4119
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4119/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4119/comments
|
https://api.github.com/repos/ollama/ollama/issues/4119/events
|
https://github.com/ollama/ollama/pull/4119
| 2,277,056,340
|
PR_kwDOJ0Z1Ps5ucNZK
| 4,119
|
👌 IMPROVE: add portkey library for production tools
|
{
"login": "Saif-Shines",
"id": 17451294,
"node_id": "MDQ6VXNlcjE3NDUxMjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/17451294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saif-Shines",
"html_url": "https://github.com/Saif-Shines",
"followers_url": "https://api.github.com/users/Saif-Shines/followers",
"following_url": "https://api.github.com/users/Saif-Shines/following{/other_user}",
"gists_url": "https://api.github.com/users/Saif-Shines/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saif-Shines/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saif-Shines/subscriptions",
"organizations_url": "https://api.github.com/users/Saif-Shines/orgs",
"repos_url": "https://api.github.com/users/Saif-Shines/repos",
"events_url": "https://api.github.com/users/Saif-Shines/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saif-Shines/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-03T07:02:22
| 2024-05-06T17:25:23
| 2024-05-06T17:25:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4119",
"html_url": "https://github.com/ollama/ollama/pull/4119",
"diff_url": "https://github.com/ollama/ollama/pull/4119.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4119.patch",
"merged_at": "2024-05-06T17:25:23"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4119/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/113
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/113/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/113/comments
|
https://api.github.com/repos/ollama/ollama/issues/113/events
|
https://github.com/ollama/ollama/issues/113
| 1,811,133,459
|
I_kwDOJ0Z1Ps5r87QT
| 113
|
Some users do not have /usr/local/bin
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-07-19T04:46:38
| 2023-07-19T08:25:46
| 2023-07-19T08:25:45
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Need to check that /usr/local/bin exists before adding ollama to the PATH.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/113/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6423
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6423/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6423/comments
|
https://api.github.com/repos/ollama/ollama/issues/6423/events
|
https://github.com/ollama/ollama/issues/6423
| 2,473,791,158
|
I_kwDOJ0Z1Ps6Tcw62
| 6,423
|
Running on MI300X via Docker fails with `rocBLAS error: Could not initialize Tensile host: No devices found`
|
{
"login": "peterschmidt85",
"id": 54148038,
"node_id": "MDQ6VXNlcjU0MTQ4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/54148038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peterschmidt85",
"html_url": "https://github.com/peterschmidt85",
"followers_url": "https://api.github.com/users/peterschmidt85/followers",
"following_url": "https://api.github.com/users/peterschmidt85/following{/other_user}",
"gists_url": "https://api.github.com/users/peterschmidt85/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peterschmidt85/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peterschmidt85/subscriptions",
"organizations_url": "https://api.github.com/users/peterschmidt85/orgs",
"repos_url": "https://api.github.com/users/peterschmidt85/repos",
"events_url": "https://api.github.com/users/peterschmidt85/events{/privacy}",
"received_events_url": "https://api.github.com/users/peterschmidt85/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 9
| 2024-08-19T16:55:17
| 2024-09-10T15:51:08
| 2024-09-03T23:20:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**Steps to reproduce:**
1. Run a Docker container using `ollama/ollama:rocm` on a machine with a single MI300X
2. Inside the container, run `ollama run llama3.1:70B`
**Actual behaviour:**
```
rocBLAS error: Could not initialize Tensile host: No devices found
```
The full output:
```
ollama serve &
[1] 649
[root@f4425b1a0236 workflow]# Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHmumM0c/iN0gZ9aPo99pq6QfzU+7AuA4V3/z933kCjK
2024/08/19 16:42:26 routes.go:1123: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-19T16:42:26.947Z level=INFO source=images.go:782 msg="total blobs: 0"
time=2024-08-19T16:42:26.948Z level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-19T16:42:26.948Z level=INFO source=routes.go:1170 msg="Listening on [::]:11434 (version 0.3.5)"
time=2024-08-19T16:42:26.949Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama307827265/runners
time=2024-08-19T16:42:30.581Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
time=2024-08-19T16:42:30.581Z level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-19T16:42:30.590Z level=WARN source=amd_linux.go:201 msg="amdgpu too old gfx000" gpu=0
time=2024-08-19T16:42:30.590Z level=WARN source=amd_linux.go:201 msg="amdgpu too old gfx000" gpu=1
time=2024-08-19T16:42:30.590Z level=WARN source=amd_linux.go:201 msg="amdgpu too old gfx000" gpu=2
time=2024-08-19T16:42:30.603Z level=INFO source=amd_linux.go:345 msg="amdgpu is supported" gpu=3 gpu_type=gfx942
time=2024-08-19T16:42:30.603Z level=WARN source=amd_linux.go:201 msg="amdgpu too old gfx000" gpu=4
time=2024-08-19T16:42:30.603Z level=WARN source=amd_linux.go:201 msg="amdgpu too old gfx000" gpu=5
time=2024-08-19T16:42:30.603Z level=WARN source=amd_linux.go:201 msg="amdgpu too old gfx000" gpu=6
time=2024-08-19T16:42:30.603Z level=WARN source=amd_linux.go:201 msg="amdgpu too old gfx000" gpu=7
time=2024-08-19T16:42:30.603Z level=INFO source=types.go:105 msg="inference compute" id=3 library=rocm compute=gfx942 driver=6.7 name=1002:74a1 total="192.0 GiB" available="191.7 GiB"
[root@f4425b1a0236 workflow]#
[root@f4425b1a0236 workflow]# ollama pull llama3.1:70b
[GIN] 2024/08/19 - 16:42:37 | 200 | 129.844µs | 127.0.0.1 | HEAD "/"
pulling manifest ⠇ time=2024-08-19T16:42:39.572Z level=INFO source=download.go:175 msg="downloading a677b4a4b70c in 65 624 MB part(s)"
pulling manifest
pulling a677b4a4b70c... 58% ▕████████████████████████████████████████████████████ ▏ 23 GB/ 39 GB 465 MB/s 35st
pulling manifest
pulling a677b4a4b70c... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 39 GB t
pulling manifest
pulling a677b4a4b70c... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 39 GB
pulling manifest
pulling a677b4a4b70c... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 39 GB
pulling manifest
pulling a677b4a4b70c... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 39 GB
pulling manifest
pulling manifest
pulling a677b4a4b70c... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 39 GB
pulling 11ce4ee3e170... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 1.7 KB
pulling 0ba8f0e314b4... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 12 KB
pulling 56bb8bd477a5... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 96 B
pulling 654440dac7f3... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 486 B
verifying sha256 digest
writing manifest
removing any unused layers
success
```
```
ollama run llama3.1:70b
[GIN] 2024/08/19 - 16:45:03 | 200 | 37.636µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/19 - 16:45:03 | 200 | 33.789282ms | 127.0.0.1 | POST "/api/show"
time=2024-08-19T16:45:03.649Z level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 gpu=3 parallel=4 available=205843886080 required="41.2 GiB"
time=2024-08-19T16:45:03.650Z level=INFO source=memory.go:309 msg="offload to rocm" layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[191.7 GiB]" memory.required.full="41.2 GiB" memory.required.partial="41.2 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[41.2 GiB]" memory.weights.total="38.4 GiB" memory.weights.repeating="37.6 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-08-19T16:45:03.665Z level=INFO source=server.go:393 msg="starting llama server" cmd="/tmp/ollama307827265/runners/rocm_v60102/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 81 --numa distribute --parallel 4 --port 37363"
time=2024-08-19T16:45:03.665Z level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-19T16:45:03.665Z level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
time=2024-08-19T16:45:03.665Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
⠹ WARNING: /proc/sys/kernel/numa_balancing is enabled, this has been observed to impair performance
INFO [main] build info | build=1 commit="1e6f655" tid="138631197918016" timestamp=1724085903
INFO [main] system info | n_threads=96 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="138631197918016" timestamp=1724085903 total_threads=192
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="191" port="37363" tid="138631197918016" timestamp=1724085903
⠸ time=2024-08-19T16:45:03.917Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 29 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 70B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
llama_model_loader: - kv 5: general.size_label str = 70B
llama_model_loader: - kv 6: general.license str = llama3.1
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 80
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 8192
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 13: llama.attention.head_count u32 = 64
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: general.file_type u32 = 2
llama_model_loader: - kv 18: llama.vocab_size u32 = 128256
llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
⠦ llm_load_vocab: special tokens cache size = 256
⠧ llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 37.22 GiB (4.53 BPW)
llm_load_print_meta: general.name = Meta Llama 3.1 70B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
⠇
rocBLAS error: Could not initialize Tensile host: No devices found
```
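A workaround worth trying in the meantime (an untested sketch, not a confirmed fix): pin the container to the one device the log reports as supported, so that HIP and rocBLAS enumerate the same GPU. The index `3` is taken from the log above and is an assumption for other hosts:
```
docker run -d --device /dev/kfd --device /dev/dri \
  -e ROCR_VISIBLE_DEVICES=3 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```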
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6423/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4279
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4279/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4279/comments
|
https://api.github.com/repos/ollama/ollama/issues/4279/events
|
https://github.com/ollama/ollama/issues/4279
| 2,287,137,613
|
I_kwDOJ0Z1Ps6IUvNN
| 4,279
|
Ollama reports an error when running the AI model using GPU
|
{
"login": "xiaomo0925",
"id": 112382100,
"node_id": "U_kgDOBrLQlA",
"avatar_url": "https://avatars.githubusercontent.com/u/112382100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaomo0925",
"html_url": "https://github.com/xiaomo0925",
"followers_url": "https://api.github.com/users/xiaomo0925/followers",
"following_url": "https://api.github.com/users/xiaomo0925/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaomo0925/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaomo0925/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaomo0925/subscriptions",
"organizations_url": "https://api.github.com/users/xiaomo0925/orgs",
"repos_url": "https://api.github.com/users/xiaomo0925/repos",
"events_url": "https://api.github.com/users/xiaomo0925/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaomo0925/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-09T08:06:18
| 2024-05-21T23:55:54
| 2024-05-21T23:55:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I use the command:
`docker run --gpus all -d -v f:/ai/ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`
the following error occurs:
"docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown." How should we handle this issue?
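This particular failure (`libnvidia-ml.so.1: cannot open shared object file`) usually means the NVIDIA Container Toolkit is missing or misconfigured on the Docker host rather than anything inside the Ollama image. A typical fix on a Linux host, assuming NVIDIA's apt repository is already configured (note that the `f:/...` path above suggests Docker Desktop on Windows, where the equivalent is updating Docker Desktop and the NVIDIA driver instead):
```
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```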
### OS
_No response_
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4279/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/693
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/693/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/693/comments
|
https://api.github.com/repos/ollama/ollama/issues/693/events
|
https://github.com/ollama/ollama/issues/693
| 1,924,959,439
|
I_kwDOJ0Z1Ps5yvIzP
| 693
|
Mario System Prompt not working with Mistral Model
|
{
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2023-10-03T21:12:35
| 2023-11-02T03:00:38
| 2023-11-02T03:00:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
In this example: https://github.com/jmorganca/ollama/blob/main/examples/mario/readme.md
I can successfully create a new model with mistral; however, it seems to ignore the system prompt. I tried various system prompts, but it always reverts back to answering as plain Mistral.
Here are my results:
>ollama run MARIO
> who r u?
>I am Mistral...
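For anyone reproducing this, a minimal sketch of the setup under test (the SYSTEM text is paraphrased from the example, not quoted verbatim):
```
cat > Modelfile <<'EOF'
FROM mistral
SYSTEM """You are Mario from Super Mario Bros. Answer as Mario, the assistant, only."""
EOF
ollama create MARIO -f Modelfile
ollama run MARIO
```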
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/693/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8189
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8189/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8189/comments
|
https://api.github.com/repos/ollama/ollama/issues/8189/events
|
https://github.com/ollama/ollama/pull/8189
| 2,753,538,512
|
PR_kwDOJ0Z1Ps6F9Qdp
| 8,189
|
remove tutorials.md which pointed to removed tutorials
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-20T22:01:46
| 2024-12-20T22:04:22
| 2024-12-20T22:04:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8189",
"html_url": "https://github.com/ollama/ollama/pull/8189",
"diff_url": "https://github.com/ollama/ollama/pull/8189.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8189.patch",
"merged_at": "2024-12-20T22:04:20"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8189/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7854
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7854/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7854/comments
|
https://api.github.com/repos/ollama/ollama/issues/7854/events
|
https://github.com/ollama/ollama/issues/7854
| 2,697,262,990
|
I_kwDOJ0Z1Ps6gxPeO
| 7,854
|
Different outputs for first and subsequent inferences after model load
|
{
"login": "akamaus",
"id": 58955,
"node_id": "MDQ6VXNlcjU4OTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/58955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akamaus",
"html_url": "https://github.com/akamaus",
"followers_url": "https://api.github.com/users/akamaus/followers",
"following_url": "https://api.github.com/users/akamaus/following{/other_user}",
"gists_url": "https://api.github.com/users/akamaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akamaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akamaus/subscriptions",
"organizations_url": "https://api.github.com/users/akamaus/orgs",
"repos_url": "https://api.github.com/users/akamaus/repos",
"events_url": "https://api.github.com/users/akamaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/akamaus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-11-27T06:17:59
| 2024-11-27T19:11:41
| 2024-11-27T19:11:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The result I get just after the model is loaded into VRAM differs from subsequent ones. It's easily reproduced and consistent.
After issuing ollama clean, the first generation yields A, and subsequent ones yield B. I tried several models (marco-o1 and qwen2.5) and both CPU (with the num_gpu=0 option) and GPU inference, and I observe this behavior everywhere.
```
$ ollama clean qwen2.5
$ python -c 'from ollama import generate; gen1 = generate(model="qwen2.5", prompt="Sky is blue because", options={"temperature": 0, "seed":0, "num_predict": 100}); gen2 = generate(model="qwen2.5", prompt="Sky is blue because", options={"temperature": 0, "seed":0, "num_predict": 100}); gen3 = generate(model="qwen2.5", prompt="Sky is blue because", options={"temperature": 0, "seed":0, "num_predict": 100}); print(gen1["response"] == gen2["response"], gen2["response"] == gen3["response"])'
False True
```
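The same repro, reflowed for readability (behavior unchanged):
```
python - <<'EOF'
from ollama import generate

opts = {"temperature": 0, "seed": 0, "num_predict": 100}
gens = [generate(model="qwen2.5", prompt="Sky is blue because", options=opts)
        for _ in range(3)]
# first generation after load differs; later ones agree
print(gens[0]["response"] == gens[1]["response"],
      gens[1]["response"] == gens[2]["response"])
EOF
```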
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.4
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7854/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/680
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/680/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/680/comments
|
https://api.github.com/repos/ollama/ollama/issues/680/events
|
https://github.com/ollama/ollama/issues/680
| 1,922,700,473
|
I_kwDOJ0Z1Ps5ymhS5
| 680
|
Is there a way to change the download/run directory?
|
{
"login": "improvethings",
"id": 16601027,
"node_id": "MDQ6VXNlcjE2NjAxMDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/16601027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/improvethings",
"html_url": "https://github.com/improvethings",
"followers_url": "https://api.github.com/users/improvethings/followers",
"following_url": "https://api.github.com/users/improvethings/following{/other_user}",
"gists_url": "https://api.github.com/users/improvethings/gists{/gist_id}",
"starred_url": "https://api.github.com/users/improvethings/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/improvethings/subscriptions",
"organizations_url": "https://api.github.com/users/improvethings/orgs",
"repos_url": "https://api.github.com/users/improvethings/repos",
"events_url": "https://api.github.com/users/improvethings/events{/privacy}",
"received_events_url": "https://api.github.com/users/improvethings/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 25
| 2023-10-02T20:58:02
| 2025-01-30T06:12:44
| 2023-12-04T19:42:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
On Linux, I want to download/run it from a directory with more space than /usr/share/
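For later readers: the `OLLAMA_MODELS` environment variable now controls this. A minimal sketch (the path is a hypothetical example):
```
# point the server at a roomier disk before starting it
export OLLAMA_MODELS=/mnt/bigdisk/ollama/models
ollama serve
```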
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/680/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/680/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1117
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1117/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1117/comments
|
https://api.github.com/repos/ollama/ollama/issues/1117/events
|
https://github.com/ollama/ollama/issues/1117
| 1,991,835,598
|
I_kwDOJ0Z1Ps52uP_O
| 1,117
|
Change Default 11434 Port & fw question
|
{
"login": "jjsarf",
"id": 34278274,
"node_id": "MDQ6VXNlcjM0Mjc4Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/34278274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jjsarf",
"html_url": "https://github.com/jjsarf",
"followers_url": "https://api.github.com/users/jjsarf/followers",
"following_url": "https://api.github.com/users/jjsarf/following{/other_user}",
"gists_url": "https://api.github.com/users/jjsarf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jjsarf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jjsarf/subscriptions",
"organizations_url": "https://api.github.com/users/jjsarf/orgs",
"repos_url": "https://api.github.com/users/jjsarf/repos",
"events_url": "https://api.github.com/users/jjsarf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jjsarf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-11-14T02:03:06
| 2023-11-14T04:45:31
| 2023-11-14T02:55:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Does anyone know how to change Ollama's default port?
Also, how do we allow other computers to hit the /generate API?
Thanks,
John
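For later readers: both questions are answered by the `OLLAMA_HOST` environment variable, which sets the bind address and port. A minimal sketch (the address and port are example values):
```
# listen on all interfaces on a non-default port
OLLAMA_HOST=0.0.0.0:8080 ollama serve

# another machine can then reach the API:
curl http://<server-ip>:8080/api/generate -d '{"model": "llama2", "prompt": "Why is the sky blue?"}'
```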
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1117/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7044
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7044/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7044/comments
|
https://api.github.com/repos/ollama/ollama/issues/7044/events
|
https://github.com/ollama/ollama/issues/7044
| 2,556,272,088
|
I_kwDOJ0Z1Ps6YXZ3Y
| 7,044
|
Support detailed logs for each request
|
{
"login": "fzyzcjy",
"id": 5236035,
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fzyzcjy",
"html_url": "https://github.com/fzyzcjy",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-09-30T10:48:31
| 2024-12-14T17:10:27
| 2024-12-14T17:10:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, thanks for the library! In order to see what is happening, it would be great to have detailed logs for each request: not only the actual string sent to the LLM, but also temperature, top_p, etc. It would be even better if these could be output to a separate log or a tracing service.
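For later readers: much of this is visible today with debug logging enabled; a minimal sketch (exactly which fields are logged varies by version):
```
# per-request details appear in the server log with debug logging on
OLLAMA_DEBUG=1 ollama serve
```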
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7044/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3930
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3930/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3930/comments
|
https://api.github.com/repos/ollama/ollama/issues/3930/events
|
https://github.com/ollama/ollama/issues/3930
| 2,264,929,569
|
I_kwDOJ0Z1Ps6HABUh
| 3,930
|
GPU allocation lost after container idle period
|
{
"login": "hl-hok",
"id": 120292146,
"node_id": "U_kgDOByuDMg",
"avatar_url": "https://avatars.githubusercontent.com/u/120292146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hl-hok",
"html_url": "https://github.com/hl-hok",
"followers_url": "https://api.github.com/users/hl-hok/followers",
"following_url": "https://api.github.com/users/hl-hok/following{/other_user}",
"gists_url": "https://api.github.com/users/hl-hok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hl-hok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hl-hok/subscriptions",
"organizations_url": "https://api.github.com/users/hl-hok/orgs",
"repos_url": "https://api.github.com/users/hl-hok/repos",
"events_url": "https://api.github.com/users/hl-hok/events{/privacy}",
"received_events_url": "https://api.github.com/users/hl-hok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 14
| 2024-04-26T04:18:38
| 2024-10-15T19:07:14
| 2024-05-31T21:21:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm experiencing an issue with Ollama where the Docker container fails to utilize the GPU unless I restart the container. This occurs when the container remains idle for an extended period (e.g., a day).
Initially, the GPU is configured correctly and allocated to the container. However, after not using the LLM for a while, the container only utilizes the CPU and ignores the available GPU resources.
Restarting the Docker container resolves the issue, and the GPU is allocated again. I've verified that my GPU configuration is correct, and the Ollama service is running normally.
**Steps to reproduce:**
1. Run an LLM using Ollama in a Docker container with a correctly configured GPU (see the example command after this list).
2. Allow the container to remain idle for an extended period (e.g., a day).
3. Attempt to use the LLM again.
4. Observe that the container only utilizes the CPU and not the GPU.
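A minimal sketch of step 1, using the standard GPU-enabled command from the Ollama docs (the volume name and port are the defaults):
```sh
# pass all GPUs through to the container (requires the NVIDIA Container Toolkit)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```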
**Expected behavior:**
The Docker container should continue to utilize the allocated GPU resources even after an extended idle period.
**Environment:**
Ollama version: 0.1.32
Docker version: 26.0.2
GPU driver version: CUDA 12.4
Kernel version: 6.5.0-27-generic
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3930/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3930/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2212
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2212/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2212/comments
|
https://api.github.com/repos/ollama/ollama/issues/2212/events
|
https://github.com/ollama/ollama/pull/2212
| 2,102,744,598
|
PR_kwDOJ0Z1Ps5lL_vK
| 2,212
|
fix build
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-26T19:04:39
| 2024-01-26T19:19:09
| 2024-01-26T19:19:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2212",
"html_url": "https://github.com/ollama/ollama/pull/2212",
"diff_url": "https://github.com/ollama/ollama/pull/2212.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2212.patch",
"merged_at": "2024-01-26T19:19:08"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2212/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/316
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/316/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/316/comments
|
https://api.github.com/repos/ollama/ollama/issues/316/events
|
https://github.com/ollama/ollama/pull/316
| 1,845,012,518
|
PR_kwDOJ0Z1Ps5XoVCp
| 316
|
fix a typo in the tweetwriter example Modelfile
|
{
"login": "soroushj",
"id": 4595459,
"node_id": "MDQ6VXNlcjQ1OTU0NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4595459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soroushj",
"html_url": "https://github.com/soroushj",
"followers_url": "https://api.github.com/users/soroushj/followers",
"following_url": "https://api.github.com/users/soroushj/following{/other_user}",
"gists_url": "https://api.github.com/users/soroushj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soroushj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soroushj/subscriptions",
"organizations_url": "https://api.github.com/users/soroushj/orgs",
"repos_url": "https://api.github.com/users/soroushj/repos",
"events_url": "https://api.github.com/users/soroushj/events{/privacy}",
"received_events_url": "https://api.github.com/users/soroushj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-10T11:44:23
| 2023-08-10T15:23:24
| 2023-08-10T14:19:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/316",
"html_url": "https://github.com/ollama/ollama/pull/316",
"diff_url": "https://github.com/ollama/ollama/pull/316.diff",
"patch_url": "https://github.com/ollama/ollama/pull/316.patch",
"merged_at": "2023-08-10T14:19:53"
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/316/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7650
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7650/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7650/comments
|
https://api.github.com/repos/ollama/ollama/issues/7650/events
|
https://github.com/ollama/ollama/issues/7650
| 2,655,039,375
|
I_kwDOJ0Z1Ps6eQK-P
| 7,650
|
AMD Radeon 780M GPU (Pop OS !) System 76
|
{
"login": "ihgumilar",
"id": 49016400,
"node_id": "MDQ6VXNlcjQ5MDE2NDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/49016400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ihgumilar",
"html_url": "https://github.com/ihgumilar",
"followers_url": "https://api.github.com/users/ihgumilar/followers",
"following_url": "https://api.github.com/users/ihgumilar/following{/other_user}",
"gists_url": "https://api.github.com/users/ihgumilar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ihgumilar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ihgumilar/subscriptions",
"organizations_url": "https://api.github.com/users/ihgumilar/orgs",
"repos_url": "https://api.github.com/users/ihgumilar/repos",
"events_url": "https://api.github.com/users/ihgumilar/events{/privacy}",
"received_events_url": "https://api.github.com/users/ihgumilar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 37
| 2024-11-13T10:46:05
| 2024-11-14T19:32:25
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I would like to ask for your help.
I am running Ollama on a machine with the following processor, but it seems that Ollama is not picking up its integrated GPU. Is there any advice?
AMD Ryzen™ 7 7840U processor.
When I run **ollama serve**, it gives me this output:
Thanks
```
2024/11/13 17:40:14 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION:11.0.0 HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/ihshan/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-13T17:40:14.880+07:00 level=INFO source=images.go:755 msg="total blobs: 0"
time=2024-11-13T17:40:14.880+07:00 level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-13T17:40:14.881+07:00 level=INFO source=routes.go:1240 msg="Listening on 127.0.0.1:11435 (version 0.4.1)"
time=2024-11-13T17:40:14.881+07:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama1477910346/runners
time=2024-11-13T17:40:14.949+07:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[rocm cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-11-13T17:40:14.949+07:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-13T17:40:16.902+07:00 level=INFO source=gpu.go:610 msg="no nvidia devices detected by library /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.03"
time=2024-11-13T17:40:22.056+07:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-11-13T17:40:22.057+07:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=0 total="512.0 MiB"
time=2024-11-13T17:40:22.057+07:00 level=INFO source=amd_linux.go:399 msg="no compatible amdgpu devices detected"
time=2024-11-13T17:40:22.057+07:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-11-13T17:40:22.057+07:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="30.6 GiB" available="23.7 GiB"
```
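A hedged first check, since the log above warns that `/sys/module/amdgpu/version` is missing: verify whether the amdgpu kernel module is loaded at all (a sketch of a diagnostic step, not a definitive fix):
```sh
# the path comes straight from the warning in the log above
cat /sys/module/amdgpu/version
lsmod | grep amdgpu
```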
### OS
Linux
### GPU
AMD
### CPU
Other
### Ollama version
0.4.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7650/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7650/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3566
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3566/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3566/comments
|
https://api.github.com/repos/ollama/ollama/issues/3566/events
|
https://github.com/ollama/ollama/pull/3566
| 2,234,479,985
|
PR_kwDOJ0Z1Ps5sL_Ol
| 3,566
|
Handle very slow model loads
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-09T23:36:02
| 2024-04-09T23:53:52
| 2024-04-09T23:53:49
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3566",
"html_url": "https://github.com/ollama/ollama/pull/3566",
"diff_url": "https://github.com/ollama/ollama/pull/3566.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3566.patch",
"merged_at": "2024-04-09T23:53:49"
}
|
During testing, we're seeing some models take over 3 minutes.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3566/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4775
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4775/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4775/comments
|
https://api.github.com/repos/ollama/ollama/issues/4775/events
|
https://github.com/ollama/ollama/issues/4775
| 2,329,413,441
|
I_kwDOJ0Z1Ps6K2AdB
| 4,775
|
Error: llama runner process has terminated: exit status 1
|
{
"login": "BAK-HOME",
"id": 145625297,
"node_id": "U_kgDOCK4Q0Q",
"avatar_url": "https://avatars.githubusercontent.com/u/145625297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BAK-HOME",
"html_url": "https://github.com/BAK-HOME",
"followers_url": "https://api.github.com/users/BAK-HOME/followers",
"following_url": "https://api.github.com/users/BAK-HOME/following{/other_user}",
"gists_url": "https://api.github.com/users/BAK-HOME/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BAK-HOME/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BAK-HOME/subscriptions",
"organizations_url": "https://api.github.com/users/BAK-HOME/orgs",
"repos_url": "https://api.github.com/users/BAK-HOME/repos",
"events_url": "https://api.github.com/users/BAK-HOME/events{/privacy}",
"received_events_url": "https://api.github.com/users/BAK-HOME/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-06-02T01:06:41
| 2024-11-05T23:15:16
| 2024-11-05T23:15:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have tried several versions, and this problem occurs with all of them. Has anyone else encountered this problem, and can you help solve it?
Error: llama runner process has terminated: exit status 1
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.40
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4775/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2522
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2522/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2522/comments
|
https://api.github.com/repos/ollama/ollama/issues/2522/events
|
https://github.com/ollama/ollama/issues/2522
| 2,137,445,173
|
I_kwDOJ0Z1Ps5_ZtM1
| 2,522
|
Clicking view logs menu item multiple times causes it to stop working on Ollama Windows preview
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-02-15T21:05:30
| 2024-02-17T01:23:38
| 2024-02-17T01:23:38
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
time=2024-02-15T21:04:25.135Z level=DEBUG source=logging_windows.go:12 msg="viewing logs with start C:\\Users\\jeff\\AppData\\Local\\Ollama"
time=2024-02-15T21:04:32.644Z level=DEBUG source=logging_windows.go:12 msg="viewing logs with start C:\\Users\\jeff\\AppData\\Local\\Ollama"
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2522/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/734
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/734/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/734/comments
|
https://api.github.com/repos/ollama/ollama/issues/734/events
|
https://github.com/ollama/ollama/issues/734
| 1,931,662,834
|
I_kwDOJ0Z1Ps5zItXy
| 734
|
Need an option with low memory of GPU
|
{
"login": "tacsotai",
"id": 80247372,
"node_id": "MDQ6VXNlcjgwMjQ3Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/80247372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tacsotai",
"html_url": "https://github.com/tacsotai",
"followers_url": "https://api.github.com/users/tacsotai/followers",
"following_url": "https://api.github.com/users/tacsotai/following{/other_user}",
"gists_url": "https://api.github.com/users/tacsotai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tacsotai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tacsotai/subscriptions",
"organizations_url": "https://api.github.com/users/tacsotai/orgs",
"repos_url": "https://api.github.com/users/tacsotai/repos",
"events_url": "https://api.github.com/users/tacsotai/events{/privacy}",
"received_events_url": "https://api.github.com/users/tacsotai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-08T06:09:57
| 2023-10-08T07:38:44
| 2023-10-08T07:38:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I tried your great program "ollama".
It succeeded with the CPU, but unfortunately the GPU in my Linux machine does not have enough memory.
So, could you provide an option for GPUs with low memory?
```
$ ollama serve
2023/10/08 06:05:12 images.go:996: total blobs: 17
2023/10/08 06:05:12 images.go:1003: total unused blobs removed: 0
2023/10/08 06:05:12 routes.go:572: Listening on 127.0.0.1:11434
2023/10/08 06:05:44 llama.go:239: 6144 MiB VRAM available, loading up to 54 GPU layers
2023/10/08 06:05:44 llama.go:313: starting llama runner
2023/10/08 06:05:44 llama.go:349: waiting for llama runner to start responding
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3060 Laptop GPU, compute capability 8.6
{"timestamp":1696745144,"level":"INFO","function":"main","line":1190,"message":"build info","build":1009,"commit":"9e232f0"}
{"timestamp":1696745144,"level":"INFO","function":"main","line":1192,"message":"system info","n_threads":6,"total_threads":12,"system_info":"AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 | "}
llama.cpp: loading model from /home/tac/.ollama/models/blobs/sha256:b5749cc827d33b7cb4c8869cede7b296a0a28d9e5d1982705c2ba4c603258159
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_head_kv = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: n_gqa = 1
llama_model_load_internal: rnorm_eps = 5.0e-06
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 0.08 MB
llama_model_load_internal: using CUDA for GPU acceleration
llama_model_load_internal: mem required = 468.40 MB (+ 1024.00 MB per state)
llama_model_load_internal: allocating batch_size x (512 kB + n_ctx x 128 B) = 384 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 32 repeating layers to GPU
llama_model_load_internal: offloading non-repeating layers to GPU
llama_model_load_internal: offloading v cache to GPU
llama_model_load_internal: offloading k cache to GPU
llama_model_load_internal: offloaded 35/35 layers to GPU
llama_model_load_internal: total VRAM used: 4954 MB
llama_new_context_with_model: kv self size = 1024.00 MB
llama server listening at http://127.0.0.1:52159
{"timestamp":1696745144,"level":"INFO","function":"main","line":1443,"message":"HTTP server listening","hostname":"127.0.0.1","port":52159}
{"timestamp":1696745144,"level":"INFO","function":"log_server_request","line":1157,"message":"request","remote_addr":"127.0.0.1","remote_port":51346,"status":200,"method":"HEAD","path":"/","params":{}}
2023/10/08 06:05:44 llama.go:365: llama runner started in 0.802513 seconds
{"timestamp":1696745144,"level":"INFO","function":"log_server_request","line":1157,"message":"request","remote_addr":"127.0.0.1","remote_port":51346,"status":200,"method":"POST","path":"/tokenize","params":{}}
{"timestamp":1696745145,"level":"INFO","function":"log_server_request","line":1157,"message":"request","remote_addr":"127.0.0.1","remote_port":51346,"status":200,"method":"POST","path":"/tokenize","params":{}}
CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml/ggml-cuda.cu:4856: out of memory
[GIN] 2023/10/08 - 06:05:45 | 200 | 2.741464312s | 127.0.0.1 | POST "/api/generate"
2023/10/08 06:05:45 llama.go:323: llama runner exited with error: exit status 1
```
```
$ nvidia-smi
Sun Oct 8 06:04:18 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.12 Driver Version: 535.104.12 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3060 ... On | 00000000:01:00.0 Off | N/A |
| N/A 38C P0 N/A / 80W | 2MiB / 6144MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
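For what it's worth, a minimal sketch of a per-request workaround, assuming the `num_gpu` request option applies here: it caps how many layers are offloaded to the GPU, trading speed for VRAM (the value 20 is only an illustration for a 6 GiB card):
```sh
# num_gpu caps offloaded layers; 20 is an assumed value, tune for your VRAM
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "hello",
  "options": {"num_gpu": 20}
}'
```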
|
{
"login": "tacsotai",
"id": 80247372,
"node_id": "MDQ6VXNlcjgwMjQ3Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/80247372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tacsotai",
"html_url": "https://github.com/tacsotai",
"followers_url": "https://api.github.com/users/tacsotai/followers",
"following_url": "https://api.github.com/users/tacsotai/following{/other_user}",
"gists_url": "https://api.github.com/users/tacsotai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tacsotai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tacsotai/subscriptions",
"organizations_url": "https://api.github.com/users/tacsotai/orgs",
"repos_url": "https://api.github.com/users/tacsotai/repos",
"events_url": "https://api.github.com/users/tacsotai/events{/privacy}",
"received_events_url": "https://api.github.com/users/tacsotai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/734/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5031
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5031/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5031/comments
|
https://api.github.com/repos/ollama/ollama/issues/5031/events
|
https://github.com/ollama/ollama/pull/5031
| 2,351,967,559
|
PR_kwDOJ0Z1Ps5yaGcf
| 5,031
|
fix: multibyte utf16
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-13T20:08:58
| 2024-06-13T20:14:56
| 2024-06-13T20:14:55
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5031",
"html_url": "https://github.com/ollama/ollama/pull/5031",
"diff_url": "https://github.com/ollama/ollama/pull/5031.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5031.patch",
"merged_at": "2024-06-13T20:14:55"
}
|
Follow-up to #5025 and #4715; fixes multibyte runes for UTF-16.
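A quick illustration of why this matters (a sketch, not the PR's code): runes above U+FFFF occupy two UTF-16 code units, a surrogate pair, so unit-by-unit handling can split a rune in half:
```sh
# U+1F600 encodes as the surrogate pair D83D DE00 in UTF-16BE, i.e. 4 bytes
printf '😀' | iconv -f UTF-8 -t UTF-16BE | xxd
```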
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5031/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6286
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6286/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6286/comments
|
https://api.github.com/repos/ollama/ollama/issues/6286/events
|
https://github.com/ollama/ollama/issues/6286
| 2,458,052,205
|
I_kwDOJ0Z1Ps6SguZt
| 6,286
|
Context window size cannot be changed
|
{
"login": "mihaelagrigore",
"id": 38474985,
"node_id": "MDQ6VXNlcjM4NDc0OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/38474985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mihaelagrigore",
"html_url": "https://github.com/mihaelagrigore",
"followers_url": "https://api.github.com/users/mihaelagrigore/followers",
"following_url": "https://api.github.com/users/mihaelagrigore/following{/other_user}",
"gists_url": "https://api.github.com/users/mihaelagrigore/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mihaelagrigore/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mihaelagrigore/subscriptions",
"organizations_url": "https://api.github.com/users/mihaelagrigore/orgs",
"repos_url": "https://api.github.com/users/mihaelagrigore/repos",
"events_url": "https://api.github.com/users/mihaelagrigore/events{/privacy}",
"received_events_url": "https://api.github.com/users/mihaelagrigore/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 21
| 2024-08-09T14:26:58
| 2024-10-17T07:52:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I see this issue has been partially reported, but none of the previous reports seem to be extensive in their tests of possible methods to set this option.
The problem:
Ollama server truncates the input to 2048 tokens regardless of the chat completion API used.
My setup:
- Ollama on my local Windows computer, CPU only
- Ollama on a Linux machine, CPU only
- Several models: llama3, gemma2, mistral
- Several APIs: Ollama's, LangChain, OpenAI
I see the Ollama server starts by default on both machines with a context window size of 8192:
```
level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
```
But just in case, I also tried:
`/set parameter num_ctx 8192`
Which, as a side note, on the Linux machine ends up setting a window size of 4*8192, so I had to restart the server.
The code I use for getting a chat completion:
```python
completion = self.llm_client.chat(
    model=self.model,
    messages=[
        {"role": "system", "content": self.context},
        {"role": "user", "content": question}
    ],
    options=dict(temperature=temperature, n_ctx=n_ctx)
)
```
This is ignored regardless of which API I use (ollama, langchain or openAI) or on which machine I run (Windows or Linux).
The server logs show:
`INFO [update_slots] input truncated | n_ctx=2048 n_erase=3157 n_keep=4 n_left=2044 n_shift=1022 tid="139731362228096" timestamp=1723208612`
The only way I can get Ollama to use a context window of a given size is by not using a client library and making the call directly through the **requests** library. But this is much slower than using the LangChain API (which seems to be the fastest of the three).
```python
url = base_url + "/api/chat"
model = self.model
payload = {
    "model": model,
    "messages": [
        {"role": "system", "content": self.context},
        {"role": "user", "content": question}
    ],
    "stream": False,
    "options": {
        "num_ctx": n_ctx,
        "temperature": self.temperature,
        "max_tokens": self.max_tokens
    }
}
headers = {
    "Content-Type": "application/json"
}
response = requests.post(url, data=json.dumps(payload), headers=headers)
response.raise_for_status()
```
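A hedged aside on a likely spelling mismatch: the server-side option is named `num_ctx`, while the first snippet above passes `n_ctx`, which the server appears to silently ignore. A minimal curl that mirrors the working requests-based call:
```sh
# if the option takes effect, the server log should show n_ctx = 8192 instead of 2048
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{"role": "user", "content": "hello"}],
  "stream": false,
  "options": {"num_ctx": 8192}
}'
```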
### OS
Linux, Windows
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.3.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6286/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4134
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4134/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4134/comments
|
https://api.github.com/repos/ollama/ollama/issues/4134/events
|
https://github.com/ollama/ollama/issues/4134
| 2,278,239,364
|
I_kwDOJ0Z1Ps6HyyyE
| 4,134
|
WithSecure quarantined ollama_llama_server.exe as harmful file / Malware
|
{
"login": "sjdevcode",
"id": 168860269,
"node_id": "U_kgDOChCabQ",
"avatar_url": "https://avatars.githubusercontent.com/u/168860269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sjdevcode",
"html_url": "https://github.com/sjdevcode",
"followers_url": "https://api.github.com/users/sjdevcode/followers",
"following_url": "https://api.github.com/users/sjdevcode/following{/other_user}",
"gists_url": "https://api.github.com/users/sjdevcode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sjdevcode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sjdevcode/subscriptions",
"organizations_url": "https://api.github.com/users/sjdevcode/orgs",
"repos_url": "https://api.github.com/users/sjdevcode/repos",
"events_url": "https://api.github.com/users/sjdevcode/events{/privacy}",
"received_events_url": "https://api.github.com/users/sjdevcode/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-05-03T18:49:42
| 2024-05-28T21:01:51
| 2024-05-28T21:01:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After updating Ollama to version 0.1.33, WithSecure Elements identified ollama_llama_server.exe as a harmful file and put it in quarantine. It classified it as "Category: Malware and Type: Exploit".
It's about ollama_llama_server.exe in the \ollama_runners\cpu_avx folder. The executables in the other ollama_runners folders are unaffected.
I assume it's a false positive. However, a solution is highly appreciated.
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.33
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4134/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6599
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6599/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6599/comments
|
https://api.github.com/repos/ollama/ollama/issues/6599/events
|
https://github.com/ollama/ollama/issues/6599
| 2,501,898,827
|
I_kwDOJ0Z1Ps6VH_JL
| 6,599
|
Unable to resolve Cuda-drivers on RHEL8.9
|
{
"login": "DanielPradoPino",
"id": 26769287,
"node_id": "MDQ6VXNlcjI2NzY5Mjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/26769287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanielPradoPino",
"html_url": "https://github.com/DanielPradoPino",
"followers_url": "https://api.github.com/users/DanielPradoPino/followers",
"following_url": "https://api.github.com/users/DanielPradoPino/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielPradoPino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DanielPradoPino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielPradoPino/subscriptions",
"organizations_url": "https://api.github.com/users/DanielPradoPino/orgs",
"repos_url": "https://api.github.com/users/DanielPradoPino/repos",
"events_url": "https://api.github.com/users/DanielPradoPino/events{/privacy}",
"received_events_url": "https://api.github.com/users/DanielPradoPino/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2024-09-03T04:34:31
| 2024-09-09T11:15:30
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On RHEL 8.9, the Ollama installation cannot finish due to an issue while installing cuda-drivers.
The Nvidia repository is successfully installed and I can see cuda-drivers listed there, but when triggering repolist only cuda-drivers-fabricmanager is listed.
I an running crazy with this issue, I have Ollama installed in other similar instances but since the latest update it seems I cannot install it for some reason.
Hoping anyone here had the same issue and can guide me to fix it.
`>>> Installing CUDA driver...
Updating Subscription Management repositories.
Unable to read consumer identity
Last metadata expiration check: 0:00:06 ago on Tue 03 Sep 2024 08:27:42 AM +04.
All matches were filtered out by modular filtering for argument: cuda-drivers
Error: Unable to find a match: cuda-drivers`
Thanks in advance.
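For what it's worth, the "modular filtering" message suggests a DNF module is masking the packages. A minimal workaround sketch, assuming the NVIDIA repo ships an `nvidia-driver` module (the `latest-dkms` stream name is an assumption; check the list output first):
```
# List the nvidia-driver module streams the repo actually offers
sudo dnf module list nvidia-driver

# Reset any previously enabled stream, then enable one explicitly
# (the stream name below is an assumption; pick one from the list above)
sudo dnf module reset -y nvidia-driver
sudo dnf module enable -y nvidia-driver:latest-dkms

# Retry the driver install
sudo dnf install -y cuda-drivers
```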
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6599/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6599/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7952
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7952/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7952/comments
|
https://api.github.com/repos/ollama/ollama/issues/7952/events
|
https://github.com/ollama/ollama/issues/7952
| 2,721,015,206
|
I_kwDOJ0Z1Ps6iL2Wm
| 7,952
|
Problems (with nvidia-smi) after upgrading to 0.4.7 (from 0.3 series)
|
{
"login": "stronk7",
"id": 167147,
"node_id": "MDQ6VXNlcjE2NzE0Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/167147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stronk7",
"html_url": "https://github.com/stronk7",
"followers_url": "https://api.github.com/users/stronk7/followers",
"following_url": "https://api.github.com/users/stronk7/following{/other_user}",
"gists_url": "https://api.github.com/users/stronk7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stronk7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stronk7/subscriptions",
"organizations_url": "https://api.github.com/users/stronk7/orgs",
"repos_url": "https://api.github.com/users/stronk7/repos",
"events_url": "https://api.github.com/users/stronk7/events{/privacy}",
"received_events_url": "https://api.github.com/users/stronk7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-05T17:31:25
| 2025-01-16T16:11:11
| 2025-01-16T16:11:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
while testing the new 0.4.7 series everything seems to be working ok on Mac, but I've detected a problem when running on Ubuntu 24.04 with Docker.
More specifically, the problem is with `nvidia-smi`, because, unless I'm wrong, the GPU is still being used normally rather than the CPU.
With 0.3.13, I get this (all correct):

But once I switch to 0.4.7, I get this:

Note that, while the "total" memory usage displays ok (more or less similar to the 0.3.x one), the per-process memory is not shown at all, although the processes themselves are.
I've looked at the logs and everything looks ok there, and the only changing piece is the ollama Docker version.
I was planning to play with the new flash attention and K/V cache improvements, and that piece of information is vital for comparing different models and context sizes, so it would be great to get it back (if it's somehow related to Ollama).
So, that's the reason for reporting it. Thanks for all the hard work, you rock!
Ciao :-)
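(A diagnostic sketch that may help narrow this down: per-process GPU memory can also be queried directly on the host, bypassing the table renderer, which should show whether the data is missing or just not displayed.)
```
# CSV query of per-process GPU memory, independent of the table view
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
```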
### OS
Docker
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.7
|
{
"login": "stronk7",
"id": 167147,
"node_id": "MDQ6VXNlcjE2NzE0Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/167147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stronk7",
"html_url": "https://github.com/stronk7",
"followers_url": "https://api.github.com/users/stronk7/followers",
"following_url": "https://api.github.com/users/stronk7/following{/other_user}",
"gists_url": "https://api.github.com/users/stronk7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stronk7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stronk7/subscriptions",
"organizations_url": "https://api.github.com/users/stronk7/orgs",
"repos_url": "https://api.github.com/users/stronk7/repos",
"events_url": "https://api.github.com/users/stronk7/events{/privacy}",
"received_events_url": "https://api.github.com/users/stronk7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7952/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7952/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3368
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3368/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3368/comments
|
https://api.github.com/repos/ollama/ollama/issues/3368/events
|
https://github.com/ollama/ollama/issues/3368
| 2,210,089,380
|
I_kwDOJ0Z1Ps6Du0mk
| 3,368
|
Reranking models
|
{
"login": "YuanfengZhang",
"id": 71358306,
"node_id": "MDQ6VXNlcjcxMzU4MzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/71358306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YuanfengZhang",
"html_url": "https://github.com/YuanfengZhang",
"followers_url": "https://api.github.com/users/YuanfengZhang/followers",
"following_url": "https://api.github.com/users/YuanfengZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/YuanfengZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YuanfengZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YuanfengZhang/subscriptions",
"organizations_url": "https://api.github.com/users/YuanfengZhang/orgs",
"repos_url": "https://api.github.com/users/YuanfengZhang/repos",
"events_url": "https://api.github.com/users/YuanfengZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/YuanfengZhang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 34
| 2024-03-27T07:41:15
| 2025-01-23T19:37:42
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
So far, Ollama supports LLMs and embedding models. I wonder if it could also support popular reranking models later, such as:
1. [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large)
2. [mixedbread-ai/mxbai-rerank-large-v1](https://huggingface.co/mixedbread-ai/mxbai-rerank-large-v1)
3. [amberoad/bert-multilingual-passage-reranking-msmarco](https://huggingface.co/amberoad/bert-multilingual-passage-reranking-msmarco)
Thanks.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3368/reactions",
"total_count": 154,
"+1": 113,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 38,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3368/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1280
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1280/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1280/comments
|
https://api.github.com/repos/ollama/ollama/issues/1280/events
|
https://github.com/ollama/ollama/pull/1280
| 2,011,208,708
|
PR_kwDOJ0Z1Ps5gYfhr
| 1,280
|
fix: disable ':' in tag names
|
{
"login": "tjbck",
"id": 25473318,
"node_id": "MDQ6VXNlcjI1NDczMzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/25473318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tjbck",
"html_url": "https://github.com/tjbck",
"followers_url": "https://api.github.com/users/tjbck/followers",
"following_url": "https://api.github.com/users/tjbck/following{/other_user}",
"gists_url": "https://api.github.com/users/tjbck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tjbck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tjbck/subscriptions",
"organizations_url": "https://api.github.com/users/tjbck/orgs",
"repos_url": "https://api.github.com/users/tjbck/repos",
"events_url": "https://api.github.com/users/tjbck/events{/privacy}",
"received_events_url": "https://api.github.com/users/tjbck/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-26T21:14:26
| 2023-11-29T18:33:45
| 2023-11-29T18:33:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1280",
"html_url": "https://github.com/ollama/ollama/pull/1280",
"diff_url": "https://github.com/ollama/ollama/pull/1280.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1280.patch",
"merged_at": "2023-11-29T18:33:45"
}
|
Resolves #1247
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1280/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/100
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/100/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/100/comments
|
https://api.github.com/repos/ollama/ollama/issues/100/events
|
https://github.com/ollama/ollama/pull/100
| 1,810,601,534
|
PR_kwDOJ0Z1Ps5V0glw
| 100
|
skip files in the list if we can't get the correct model path
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-18T19:38:43
| 2023-07-18T19:39:08
| 2023-07-18T19:39:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/100",
"html_url": "https://github.com/ollama/ollama/pull/100",
"diff_url": "https://github.com/ollama/ollama/pull/100.diff",
"patch_url": "https://github.com/ollama/ollama/pull/100.patch",
"merged_at": "2023-07-18T19:39:08"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/100/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5043
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5043/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5043/comments
|
https://api.github.com/repos/ollama/ollama/issues/5043/events
|
https://github.com/ollama/ollama/pull/5043
| 2,352,815,843
|
PR_kwDOJ0Z1Ps5yc9AC
| 5,043
|
Adds an uninstall script to the installer
|
{
"login": "nibrahim",
"id": 69051,
"node_id": "MDQ6VXNlcjY5MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/69051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nibrahim",
"html_url": "https://github.com/nibrahim",
"followers_url": "https://api.github.com/users/nibrahim/followers",
"following_url": "https://api.github.com/users/nibrahim/following{/other_user}",
"gists_url": "https://api.github.com/users/nibrahim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nibrahim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nibrahim/subscriptions",
"organizations_url": "https://api.github.com/users/nibrahim/orgs",
"repos_url": "https://api.github.com/users/nibrahim/repos",
"events_url": "https://api.github.com/users/nibrahim/events{/privacy}",
"received_events_url": "https://api.github.com/users/nibrahim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-06-14T08:17:31
| 2024-09-05T05:36:07
| 2024-09-05T05:14:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5043",
"html_url": "https://github.com/ollama/ollama/pull/5043",
"diff_url": "https://github.com/ollama/ollama/pull/5043.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5043.patch",
"merged_at": null
}
|
A new script called ollama_uninstall.sh gets created as part of the installation process on Linux. Running this will remove the ollama installation.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5043/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5043/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7633
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7633/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7633/comments
|
https://api.github.com/repos/ollama/ollama/issues/7633/events
|
https://github.com/ollama/ollama/pull/7633
| 2,653,024,204
|
PR_kwDOJ0Z1Ps6Bq-wq
| 7,633
|
runner.go: Fix off-by-one for num predicted
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-12T18:43:14
| 2024-11-12T19:35:59
| 2024-11-12T19:35:57
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7633",
"html_url": "https://github.com/ollama/ollama/pull/7633",
"diff_url": "https://github.com/ollama/ollama/pull/7633.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7633.patch",
"merged_at": "2024-11-12T19:35:57"
}
| null |
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7633/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6553
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6553/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6553/comments
|
https://api.github.com/repos/ollama/ollama/issues/6553/events
|
https://github.com/ollama/ollama/issues/6553
| 2,494,210,904
|
I_kwDOJ0Z1Ps6UqqNY
| 6,553
|
Cannot set custom folder for storing models
|
{
"login": "anonymux1",
"id": 138056943,
"node_id": "U_kgDOCDqU7w",
"avatar_url": "https://avatars.githubusercontent.com/u/138056943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anonymux1",
"html_url": "https://github.com/anonymux1",
"followers_url": "https://api.github.com/users/anonymux1/followers",
"following_url": "https://api.github.com/users/anonymux1/following{/other_user}",
"gists_url": "https://api.github.com/users/anonymux1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anonymux1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anonymux1/subscriptions",
"organizations_url": "https://api.github.com/users/anonymux1/orgs",
"repos_url": "https://api.github.com/users/anonymux1/repos",
"events_url": "https://api.github.com/users/anonymux1/events{/privacy}",
"received_events_url": "https://api.github.com/users/anonymux1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-29T11:47:49
| 2024-09-02T12:03:19
| 2024-08-29T15:11:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I ran `sudo systemctl edit ollama.service` and added the following lines in the text editor:
```
[Service]
Environment = OLLAMA_MODELS = "/home/<username>/AI/ollama_models"
```
Then I ran `systemctl daemon-reload` and `systemctl restart ollama`, and also rebooted, but models are still being stored in /usr/share/ollama/.ollama/models.
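For reference, the likely culprit is the quoting: systemd expects the whole KEY=value pair inside a single quoted string, with no spaces around the equals signs. A minimal override sketch (path taken from above):
```
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf <<'EOF' >/dev/null
[Service]
# One quoted KEY=value string, no spaces around "="
Environment="OLLAMA_MODELS=/home/<username>/AI/ollama_models"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```
Note the directory must also be readable and writable by the user the service runs as (often `ollama`).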
### OS
Linux
### GPU
AMD
### CPU
Intel
### Ollama version
0.3.8
|
{
"login": "anonymux1",
"id": 138056943,
"node_id": "U_kgDOCDqU7w",
"avatar_url": "https://avatars.githubusercontent.com/u/138056943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anonymux1",
"html_url": "https://github.com/anonymux1",
"followers_url": "https://api.github.com/users/anonymux1/followers",
"following_url": "https://api.github.com/users/anonymux1/following{/other_user}",
"gists_url": "https://api.github.com/users/anonymux1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anonymux1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anonymux1/subscriptions",
"organizations_url": "https://api.github.com/users/anonymux1/orgs",
"repos_url": "https://api.github.com/users/anonymux1/repos",
"events_url": "https://api.github.com/users/anonymux1/events{/privacy}",
"received_events_url": "https://api.github.com/users/anonymux1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6553/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1239
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1239/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1239/comments
|
https://api.github.com/repos/ollama/ollama/issues/1239/events
|
https://github.com/ollama/ollama/pull/1239
| 2,006,261,843
|
PR_kwDOJ0Z1Ps5gIG3t
| 1,239
|
Update README.md - Community Integrations - Obsidian BMO Chatbot plugin
|
{
"login": "longy2k",
"id": 40724177,
"node_id": "MDQ6VXNlcjQwNzI0MTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/40724177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/longy2k",
"html_url": "https://github.com/longy2k",
"followers_url": "https://api.github.com/users/longy2k/followers",
"following_url": "https://api.github.com/users/longy2k/following{/other_user}",
"gists_url": "https://api.github.com/users/longy2k/gists{/gist_id}",
"starred_url": "https://api.github.com/users/longy2k/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/longy2k/subscriptions",
"organizations_url": "https://api.github.com/users/longy2k/orgs",
"repos_url": "https://api.github.com/users/longy2k/repos",
"events_url": "https://api.github.com/users/longy2k/events{/privacy}",
"received_events_url": "https://api.github.com/users/longy2k/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-22T12:42:36
| 2023-11-22T19:32:31
| 2023-11-22T19:32:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1239",
"html_url": "https://github.com/ollama/ollama/pull/1239",
"diff_url": "https://github.com/ollama/ollama/pull/1239.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1239.patch",
"merged_at": "2023-11-22T19:32:30"
}
|
The simplicity and speed of Ollama is amazing!
I would like to add Obsidian's "BMO Chatbot" plugin to the 'Community Integrations' section :)
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1239/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6141
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6141/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6141/comments
|
https://api.github.com/repos/ollama/ollama/issues/6141/events
|
https://github.com/ollama/ollama/issues/6141
| 2,444,742,320
|
I_kwDOJ0Z1Ps6Rt86w
| 6,141
|
Ollama stopped a available="", not loading
|
{
"login": "rohithbojja",
"id": 119781796,
"node_id": "U_kgDOByO5pA",
"avatar_url": "https://avatars.githubusercontent.com/u/119781796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rohithbojja",
"html_url": "https://github.com/rohithbojja",
"followers_url": "https://api.github.com/users/rohithbojja/followers",
"following_url": "https://api.github.com/users/rohithbojja/following{/other_user}",
"gists_url": "https://api.github.com/users/rohithbojja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rohithbojja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohithbojja/subscriptions",
"organizations_url": "https://api.github.com/users/rohithbojja/orgs",
"repos_url": "https://api.github.com/users/rohithbojja/repos",
"events_url": "https://api.github.com/users/rohithbojja/events{/privacy}",
"received_events_url": "https://api.github.com/users/rohithbojja/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-08-02T11:17:54
| 2024-08-06T10:18:47
| 2024-08-02T21:01:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
time=2024-08-02T16:41:38.633+05:30 level=INFO source=images.go:781 msg="total blobs: 9"
time=2024-08-02T16:41:38.633+05:30 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-02T16:41:38.633+05:30 level=INFO source=routes.go:1156 msg="Listening on 127.0.0.1:11434 (version 0.3.2)"
time=2024-08-02T16:41:38.633+05:30 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1292947772/runners
time=2024-08-02T16:41:43.479+05:30 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
time=2024-08-02T16:41:43.479+05:30 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-08-02T16:41:43.484+05:30 level=INFO source=gpu.go:346 msg="no compatible GPUs were discovered"
time=2024-08-02T16:41:43.484+05:30 level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="31.4 GiB" available="27.4 GiB"
```
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6141/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7899
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7899/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7899/comments
|
https://api.github.com/repos/ollama/ollama/issues/7899/events
|
https://github.com/ollama/ollama/pull/7899
| 2,708,116,806
|
PR_kwDOJ0Z1Ps6DpDXD
| 7,899
|
ci: skip go build for tests
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-30T22:11:35
| 2024-12-05T05:22:39
| 2024-12-05T05:22:37
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7899",
"html_url": "https://github.com/ollama/ollama/pull/7899",
"diff_url": "https://github.com/ollama/ollama/pull/7899.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7899.patch",
"merged_at": "2024-12-05T05:22:37"
}
|
`go build` largely repeats what's already happening in `go test`, and by reducing CI to just `go test` my hope is we can speed it up even more.
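To illustrate the redundancy (a before/after sketch, not the actual workflow file):
```
# Before: separate build and test steps compile the packages twice
go build ./...
go test ./...

# After: `go test` already compiles every package it tests,
# so the standalone build step mostly adds CI time
go test ./...
```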
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7899/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6133
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6133/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6133/comments
|
https://api.github.com/repos/ollama/ollama/issues/6133/events
|
https://github.com/ollama/ollama/pull/6133
| 2,443,736,748
|
PR_kwDOJ0Z1Ps53MMJo
| 6,133
|
Adjust arm cuda repo paths
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-02T00:24:03
| 2024-08-08T19:33:38
| 2024-08-08T19:33:35
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6133",
"html_url": "https://github.com/ollama/ollama/pull/6133",
"diff_url": "https://github.com/ollama/ollama/pull/6133.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6133.patch",
"merged_at": "2024-08-08T19:33:35"
}
|
Ubuntu distros fail to install CUDA drivers since `aarch64` isn't a valid architecture directory under:
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/
Fixes #5797
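Roughly, the fix amounts to mapping the machine architecture to a directory name NVIDIA actually publishes. A sketch only; the `sbsa` mapping is my reading of the repo listing at the URL above:
```
# Map `uname -m` output to the arch directory in NVIDIA's Ubuntu repos
case "$(uname -m)" in
  x86_64)  CUDA_REPO_ARCH="x86_64" ;;
  aarch64) CUDA_REPO_ARCH="sbsa" ;;  # "aarch64" is not a valid path here
esac
echo "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/${CUDA_REPO_ARCH}/"
```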
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6133/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6724
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6724/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6724/comments
|
https://api.github.com/repos/ollama/ollama/issues/6724/events
|
https://github.com/ollama/ollama/issues/6724
| 2,515,931,561
|
I_kwDOJ0Z1Ps6V9hGp
| 6,724
|
Tools Tag with "ollama show" command
|
{
"login": "LilPiep",
"id": 81217865,
"node_id": "MDQ6VXNlcjgxMjE3ODY1",
"avatar_url": "https://avatars.githubusercontent.com/u/81217865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LilPiep",
"html_url": "https://github.com/LilPiep",
"followers_url": "https://api.github.com/users/LilPiep/followers",
"following_url": "https://api.github.com/users/LilPiep/following{/other_user}",
"gists_url": "https://api.github.com/users/LilPiep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LilPiep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LilPiep/subscriptions",
"organizations_url": "https://api.github.com/users/LilPiep/orgs",
"repos_url": "https://api.github.com/users/LilPiep/repos",
"events_url": "https://api.github.com/users/LilPiep/events{/privacy}",
"received_events_url": "https://api.github.com/users/LilPiep/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-10T09:38:19
| 2024-09-10T11:50:06
| 2024-09-10T11:50:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey there,
It would be wonderful to have a way to check whether a model is tool-compatible after pulling its manifest. It's pretty clear in the online model library, but once the model is pulled, that information is lost.
Thanks for your attention :)

|
{
"login": "LilPiep",
"id": 81217865,
"node_id": "MDQ6VXNlcjgxMjE3ODY1",
"avatar_url": "https://avatars.githubusercontent.com/u/81217865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LilPiep",
"html_url": "https://github.com/LilPiep",
"followers_url": "https://api.github.com/users/LilPiep/followers",
"following_url": "https://api.github.com/users/LilPiep/following{/other_user}",
"gists_url": "https://api.github.com/users/LilPiep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LilPiep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LilPiep/subscriptions",
"organizations_url": "https://api.github.com/users/LilPiep/orgs",
"repos_url": "https://api.github.com/users/LilPiep/repos",
"events_url": "https://api.github.com/users/LilPiep/events{/privacy}",
"received_events_url": "https://api.github.com/users/LilPiep/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6724/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6782
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6782/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6782/comments
|
https://api.github.com/repos/ollama/ollama/issues/6782/events
|
https://github.com/ollama/ollama/issues/6782
| 2,523,742,481
|
I_kwDOJ0Z1Ps6WbUER
| 6,782
|
Windows Portable Mode
|
{
"login": "SmilerRyan",
"id": 14893385,
"node_id": "MDQ6VXNlcjE0ODkzMzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/14893385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SmilerRyan",
"html_url": "https://github.com/SmilerRyan",
"followers_url": "https://api.github.com/users/SmilerRyan/followers",
"following_url": "https://api.github.com/users/SmilerRyan/following{/other_user}",
"gists_url": "https://api.github.com/users/SmilerRyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SmilerRyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SmilerRyan/subscriptions",
"organizations_url": "https://api.github.com/users/SmilerRyan/orgs",
"repos_url": "https://api.github.com/users/SmilerRyan/repos",
"events_url": "https://api.github.com/users/SmilerRyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/SmilerRyan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 10
| 2024-09-13T02:24:22
| 2025-01-29T09:15:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I would like to see a fully portable version of Ollama for Windows, not just the binary files without running the setup.
My proposal is simple: ship the same files as the installer version, plus a portable.txt file to mark it as a portable install, and save Ollama settings, history, models, etc. into a data folder inside the portable build instead of AppData and the user home folder.
For updates, clicking the notification could either open a link to the zip file for manual installation/updates, or download it automatically and, on click, open it in the default zip program.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6782/reactions",
"total_count": 14,
"+1": 14,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6782/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7644
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7644/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7644/comments
|
https://api.github.com/repos/ollama/ollama/issues/7644/events
|
https://github.com/ollama/ollama/issues/7644
| 2,654,656,460
|
I_kwDOJ0Z1Ps6eOtfM
| 7,644
|
Please add more models
|
{
"login": "smileyboy2019",
"id": 59221294,
"node_id": "MDQ6VXNlcjU5MjIxMjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/59221294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smileyboy2019",
"html_url": "https://github.com/smileyboy2019",
"followers_url": "https://api.github.com/users/smileyboy2019/followers",
"following_url": "https://api.github.com/users/smileyboy2019/following{/other_user}",
"gists_url": "https://api.github.com/users/smileyboy2019/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smileyboy2019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smileyboy2019/subscriptions",
"organizations_url": "https://api.github.com/users/smileyboy2019/orgs",
"repos_url": "https://api.github.com/users/smileyboy2019/repos",
"events_url": "https://api.github.com/users/smileyboy2019/events{/privacy}",
"received_events_url": "https://api.github.com/users/smileyboy2019/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-13T08:29:11
| 2024-11-13T19:46:02
| 2024-11-13T19:45:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please add more models:
- Qwen2-VL-7B-Instruct
- Pixtral-12B-2409
- Molmo-7B-O-0924
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7644/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7632
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7632/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7632/comments
|
https://api.github.com/repos/ollama/ollama/issues/7632/events
|
https://github.com/ollama/ollama/pull/7632
| 2,652,926,419
|
PR_kwDOJ0Z1Ps6BqtDy
| 7,632
|
Install support for jetpacks
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-12T17:56:12
| 2024-11-16T00:47:57
| 2024-11-16T00:47:54
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7632",
"html_url": "https://github.com/ollama/ollama/pull/7632",
"diff_url": "https://github.com/ollama/ollama/pull/7632.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7632.patch",
"merged_at": "2024-11-16T00:47:54"
}
|
Follow up to #7217 - merge after release
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7632/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7175
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7175/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7175/comments
|
https://api.github.com/repos/ollama/ollama/issues/7175/events
|
https://github.com/ollama/ollama/issues/7175
| 2,582,133,118
|
I_kwDOJ0Z1Ps6Z6Dl-
| 7,175
|
Layer-wise Inferencing from ram/low vram mode?
|
{
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.github.com/users/AncientMystic/followers",
"following_url": "https://api.github.com/users/AncientMystic/following{/other_user}",
"gists_url": "https://api.github.com/users/AncientMystic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AncientMystic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AncientMystic/subscriptions",
"organizations_url": "https://api.github.com/users/AncientMystic/orgs",
"repos_url": "https://api.github.com/users/AncientMystic/repos",
"events_url": "https://api.github.com/users/AncientMystic/events{/privacy}",
"received_events_url": "https://api.github.com/users/AncientMystic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-10-11T20:26:39
| 2024-11-17T14:31:56
| 2024-11-17T14:31:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Would it be possible to add something similar to layer-wise inferencing to Ollama, like airllm does (possibly streaming layers from CPU/RAM instead of disk, so it is not extremely slow)?
Being able to swap layers into VRAM for processing (preferably from memory) could lead to a nice performance boost; most modern systems should be able to load the requested layers into VRAM nearly instantly from RAM.
This could enable the use of far larger models and increase the efficiency of running parallel models within Ollama. It also seems more efficient to always stream a fixed number of layers through the GPU than the current approach: loading however many of the first layers fit in VRAM, leaving the spill-over to the CPU/RAM, and then waiting on the CPU to process those spilled layers much more slowly. That approach doesn't fully utilize the GPU unless you have enough VRAM to fit the entire model.
It would be especially useful to be able to specify how many layers can be loaded at a given time. If Ollama were free to load multiple layers at once into VRAM in a batched queue, while keeping them in RAM for near-instant access, it would be much more efficient on systems without very large amounts of VRAM. It would still take a good amount of RAM, but 32-64 GB+ of RAM is a lot more common and less expensive than 24-32 GB+ of VRAM; with this, common 2-4 GB VRAM GPUs would become a lot more useful to Ollama.
This would also work nicely with the K/V quantization PR, since that minimizes the VRAM usage of the K/V cache and would allow much higher context sizes.
In combination, this could be a nice way to load extremely large models and context sizes on low VRAM, at least if you have a lot of RAM.
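For context, Ollama already exposes a knob for how many layers get offloaded (`num_gpu`); a request sketch capping it explicitly (the model name and value are just examples):
```
# Ask the server to offload at most 20 layers to the GPU for this request
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?",
  "options": { "num_gpu": 20 }
}'
```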
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7175/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7175/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1917
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1917/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1917/comments
|
https://api.github.com/repos/ollama/ollama/issues/1917/events
|
https://github.com/ollama/ollama/issues/1917
| 2,075,705,116
|
I_kwDOJ0Z1Ps57uL8c
| 1,917
|
GPU still used when offloading zero layers
|
{
"login": "coder543",
"id": 726063,
"node_id": "MDQ6VXNlcjcyNjA2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/726063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coder543",
"html_url": "https://github.com/coder543",
"followers_url": "https://api.github.com/users/coder543/followers",
"following_url": "https://api.github.com/users/coder543/following{/other_user}",
"gists_url": "https://api.github.com/users/coder543/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coder543/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coder543/subscriptions",
"organizations_url": "https://api.github.com/users/coder543/orgs",
"repos_url": "https://api.github.com/users/coder543/repos",
"events_url": "https://api.github.com/users/coder543/events{/privacy}",
"received_events_url": "https://api.github.com/users/coder543/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-01-11T04:13:06
| 2024-01-11T23:10:56
| 2024-01-11T22:56:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
To try to work around https://github.com/jmorganca/ollama/issues/1907, I decided to create a Modelfile that offloads zero layers. I noticed that it still takes up a few gigabytes of RAM on the GPU and spins up the GPU, even though I can't imagine _what_ it is doing on the GPU when no layers are running on the GPU.
```
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_vocab: special tokens definition check successful ( 259/32000 ).
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: format = GGUF V3 (latest)
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: arch = llama
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: vocab type = SPM
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_vocab = 32000
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_merges = 0
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_ctx_train = 32768
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_embd = 4096
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_head = 32
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_head_kv = 8
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_layer = 32
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_rot = 128
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_gqa = 4
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: f_norm_eps = 0.0e+00
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_ff = 14336
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_expert = 8
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_expert_used = 2
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: rope scaling = linear
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: freq_base_train = 1000000.0
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: freq_scale_train = 1
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_yarn_orig_ctx = 32768
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: rope_finetuned = unknown
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: model type = 7B
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: model ftype = Q3_K - Small
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: model params = 46.70 B
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: model size = 18.90 GiB (3.48 BPW)
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: general.name = mistralai
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: BOS token = 1 '<s>'
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: EOS token = 2 '</s>'
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: UNK token = 0 '<unk>'
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: LF token = 13 '<0x0A>'
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: ggml ctx size = 0.38 MiB
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: using CUDA for GPU acceleration
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: mem required = 19351.65 MiB
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: offloading 0 repeating layers to GPU
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: offloaded 0/33 layers to GPU
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: VRAM used: 0.00 MiB
Jan 11 04:10:06 cognicore ollama[3082453]: ....................................................................................................
Jan 11 04:10:06 cognicore ollama[3082453]: llama_new_context_with_model: n_ctx = 20000
Jan 11 04:10:06 cognicore ollama[3082453]: llama_new_context_with_model: freq_base = 1000000.0
Jan 11 04:10:06 cognicore ollama[3082453]: llama_new_context_with_model: freq_scale = 1
Jan 11 04:10:07 cognicore ollama[3082453]: llama_new_context_with_model: KV self size = 2500.00 MiB, K (f16): 1250.00 MiB, V (f16): 1250.00 MiB
Jan 11 04:10:07 cognicore ollama[3082453]: llama_build_graph: non-view tensors processed: 1124/1124
Jan 11 04:10:07 cognicore ollama[3082453]: llama_new_context_with_model: compute buffer total size = 1344.29 MiB
Jan 11 04:10:07 cognicore ollama[3082453]: llama_new_context_with_model: VRAM scratch buffer: 1341.10 MiB
Jan 11 04:10:07 cognicore ollama[3082453]: llama_new_context_with_model: total VRAM used: 1341.10 MiB (model: 0.00 MiB, context: 1341.10 MiB)
Jan 11 04:10:07 cognicore ollama[3082453]: 2024/01/11 04:10:07 ext_server_common.go:144: Starting internal llama main loop
Jan 11 04:10:07 cognicore ollama[3082453]: 2024/01/11 04:10:07 ext_server_common.go:158: loaded 0 images
```
```
Thu Jan 11 04:12:12 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3090 Off | 00000000:01:00.0 Off | N/A |
| 49% 58C P2 126W / 420W | 2944MiB / 24576MiB | 6% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 3082453 C /usr/local/bin/ollama 2930MiB |
+---------------------------------------------------------------------------------------+
```
The entire Modelfile:
```
FROM mixtral:8x7b-instruct-v0.1-q3_K_S
PARAMETER num_gpu 0
```
I believe in previous versions of ollama, it would revert to a CPU-only mode when it realized no layers were being offloaded.
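For anyone reproducing this without a custom Modelfile, the same setting can be passed per request through the REST API's `options` field; a minimal sketch, assuming the server is listening on the default port:
```python
import requests

# Request a completion with zero layers offloaded; with the bug above,
# the GPU still allocates a few GiB for the scratch/compute buffers.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mixtral:8x7b-instruct-v0.1-q3_K_S",
        "prompt": "Hello",
        "stream": False,
        "options": {"num_gpu": 0},
    },
)
print(resp.json()["response"])
```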
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1917/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1028
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1028/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1028/comments
|
https://api.github.com/repos/ollama/ollama/issues/1028/events
|
https://github.com/ollama/ollama/pull/1028
| 1,980,976,174
|
PR_kwDOJ0Z1Ps5eyUTz
| 1,028
|
WIP: Apply a patch for building with CUDA on Linux
|
{
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/followers",
"following_url": "https://api.github.com/users/xyproto/following{/other_user}",
"gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyproto/subscriptions",
"organizations_url": "https://api.github.com/users/xyproto/orgs",
"repos_url": "https://api.github.com/users/xyproto/repos",
"events_url": "https://api.github.com/users/xyproto/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyproto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-11-07T09:59:45
| 2023-11-13T18:58:52
| 2023-11-07T23:44:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1028",
"html_url": "https://github.com/ollama/ollama/pull/1028",
"diff_url": "https://github.com/ollama/ollama/pull/1028.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1028.patch",
"merged_at": null
}
|
Might fix #1024, maybe.
The patch is from a llama.cpp commit: https://github.com/ggerganov/llama.cpp/commit/2833a6f63c1b87c7f4ac574bcf7a15a2f3bf3ede
|
{
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/followers",
"following_url": "https://api.github.com/users/xyproto/following{/other_user}",
"gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyproto/subscriptions",
"organizations_url": "https://api.github.com/users/xyproto/orgs",
"repos_url": "https://api.github.com/users/xyproto/repos",
"events_url": "https://api.github.com/users/xyproto/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyproto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1028/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/357
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/357/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/357/comments
|
https://api.github.com/repos/ollama/ollama/issues/357/events
|
https://github.com/ollama/ollama/issues/357
| 1,852,552,749
|
I_kwDOJ0Z1Ps5ua7Yt
| 357
|
Support multi-line input in CLI
|
{
"login": "charlesverdad",
"id": 382186,
"node_id": "MDQ6VXNlcjM4MjE4Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/382186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/charlesverdad",
"html_url": "https://github.com/charlesverdad",
"followers_url": "https://api.github.com/users/charlesverdad/followers",
"following_url": "https://api.github.com/users/charlesverdad/following{/other_user}",
"gists_url": "https://api.github.com/users/charlesverdad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/charlesverdad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charlesverdad/subscriptions",
"organizations_url": "https://api.github.com/users/charlesverdad/orgs",
"repos_url": "https://api.github.com/users/charlesverdad/repos",
"events_url": "https://api.github.com/users/charlesverdad/events{/privacy}",
"received_events_url": "https://api.github.com/users/charlesverdad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
| null |
[] | null | 5
| 2023-08-16T05:45:21
| 2024-11-07T12:25:26
| 2023-08-17T14:17:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm trying to copy-paste a multi-line query to ollama, but it treats my newlines as an end to my question.
```
❯ ollama run llama2
>>> I have something like this:
Sure, please provide the code you have so far, and I will be happy to assist you in resolving any issues or answering any questions you may have. everybody has made mistakes in their coding at some point, and it's nothing to be ashamed of.
>>>
>>> ```
Thank you for sharing your code with me! However, I notice that there are a few syntax errors in the code you provided. Here are the issues I found:
1. `if` statement without an condition: You have an `if` statement without any condition. An `if` statement should always have a condition to check whether the statement inside the `if` block should be executed or not. For example, you could replace the `if` statement with `if (x > 0)` to make^C
```
It would be great to make the user experience a bit better by allowing multi-line queries straight from the CLI. I'm not sure how to implement this in a terminal, but I remember IPython is able to do it.
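For what it's worth, one readline-style convention that would solve the paste problem (and which, if I recall correctly, Ollama's CLI later adopted) is a triple-quote delimiter, so everything between `"""` markers is sent as a single prompt; the continuation prompt shown here is illustrative:
```
>>> """I have something like this:
... def foo():
...     return 42
... """
```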
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/357/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5952
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5952/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5952/comments
|
https://api.github.com/repos/ollama/ollama/issues/5952/events
|
https://github.com/ollama/ollama/issues/5952
| 2,430,065,277
|
I_kwDOJ0Z1Ps6Q19p9
| 5,952
|
find system prompt encapsulation error in mistral-nemo 12b
|
{
"login": "map9",
"id": 38238468,
"node_id": "MDQ6VXNlcjM4MjM4NDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/38238468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/map9",
"html_url": "https://github.com/map9",
"followers_url": "https://api.github.com/users/map9/followers",
"following_url": "https://api.github.com/users/map9/following{/other_user}",
"gists_url": "https://api.github.com/users/map9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/map9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/map9/subscriptions",
"organizations_url": "https://api.github.com/users/map9/orgs",
"repos_url": "https://api.github.com/users/map9/repos",
"events_url": "https://api.github.com/users/map9/events{/privacy}",
"received_events_url": "https://api.github.com/users/map9/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-25T14:05:56
| 2024-07-25T15:24:43
| 2024-07-25T15:00:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Using autogen + Ollama with the mistral-nemo 12b model, I find that Ollama either drops the system message or loses earlier user messages. The mistral-nemo 12b model template may be defined incorrectly.
Case 1:
```python
extractor_system_message = "...extractor_system_message..."
extractor = AssistantAgent(
    "Extractor",
    system_message = extractor_system_message,
    llm_config = llm_config,
    human_input_mode = "NEVER",
)
messages = [{"content": "...message...", "role": "user", "name": "Initializer"}]
```
Ollama output:
```
time=2024-07-25T21:44:27.533+08:00 level=DEBUG source=routes.go:1337 msg="chat request" images=0 prompt="[INST]...message...[/INST]"
```
Error: the system prompt is lost.
Case 2:
```python
extractor_system_message = "...extractor_system_message..."
extractor = AssistantAgent(
    "Extractor",
    system_message = extractor_system_message,
    llm_config = llm_config,
    human_input_mode = "NEVER",
)
messages = [{"content": "...message1...", "role": "user", "name": "Initializer"},
            {"content": "...message2...", "role": "user", "name": "Extractor"},
            {"content": "...message3...", "role": "user", "name": "Editor"}]
```
Ollama output:
```
time=2024-07-25T21:44:39.481+08:00 level=DEBUG source=routes.go:1337 msg="chat request" images=0 prompt="[INST] ...extractor_system_message...\n\n\n...message3...[/INST]"
```
Error: message1 and message2 are lost.
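The second log line looks like a legacy-style prompt template that renders only the system text plus the final `{{ .Prompt }}`, which would explain the dropped turns. As a purely hypothetical illustration, such a template in Ollama's Go-template Modelfile syntax (a sketch, not the actual shipped mistral-nemo template) would look like:
```
FROM mistral-nemo
TEMPLATE """[INST] {{ if .System }}{{ .System }}

{{ end }}{{ .Prompt }}[/INST]"""
```
Keeping every turn would instead require a messages-aware template, one that ranges over `.Messages`.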
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.2.8
|
{
"login": "map9",
"id": 38238468,
"node_id": "MDQ6VXNlcjM4MjM4NDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/38238468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/map9",
"html_url": "https://github.com/map9",
"followers_url": "https://api.github.com/users/map9/followers",
"following_url": "https://api.github.com/users/map9/following{/other_user}",
"gists_url": "https://api.github.com/users/map9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/map9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/map9/subscriptions",
"organizations_url": "https://api.github.com/users/map9/orgs",
"repos_url": "https://api.github.com/users/map9/repos",
"events_url": "https://api.github.com/users/map9/events{/privacy}",
"received_events_url": "https://api.github.com/users/map9/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5952/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5728
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5728/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5728/comments
|
https://api.github.com/repos/ollama/ollama/issues/5728/events
|
https://github.com/ollama/ollama/issues/5728
| 2,411,981,789
|
I_kwDOJ0Z1Ps6Pw-vd
| 5,728
|
Prompt Tokens for Image Chat
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-16T20:23:10
| 2024-08-13T17:49:12
| 2024-08-13T17:49:12
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
<img width="793" alt="Screenshot 2024-07-16 at 1 22 43 PM" src="https://github.com/user-attachments/assets/4d743995-26b6-463d-8848-38cc9623dfe3">
Image Chat returns 1 for prompt tokens
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5728/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6305
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6305/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6305/comments
|
https://api.github.com/repos/ollama/ollama/issues/6305/events
|
https://github.com/ollama/ollama/pull/6305
| 2,459,369,125
|
PR_kwDOJ0Z1Ps54BcYY
| 6,305
|
add integration obook-summary
|
{
"login": "cognitivetech",
"id": 55156785,
"node_id": "MDQ6VXNlcjU1MTU2Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/55156785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cognitivetech",
"html_url": "https://github.com/cognitivetech",
"followers_url": "https://api.github.com/users/cognitivetech/followers",
"following_url": "https://api.github.com/users/cognitivetech/following{/other_user}",
"gists_url": "https://api.github.com/users/cognitivetech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cognitivetech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cognitivetech/subscriptions",
"organizations_url": "https://api.github.com/users/cognitivetech/orgs",
"repos_url": "https://api.github.com/users/cognitivetech/repos",
"events_url": "https://api.github.com/users/cognitivetech/events{/privacy}",
"received_events_url": "https://api.github.com/users/cognitivetech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-11T01:38:11
| 2024-08-11T01:43:09
| 2024-08-11T01:43:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6305",
"html_url": "https://github.com/ollama/ollama/pull/6305",
"diff_url": "https://github.com/ollama/ollama/pull/6305.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6305.patch",
"merged_at": "2024-08-11T01:43:09"
}
|
An app that automatically splits e-books by section and summarizes those sections one chunk at a time, saving the results to CSV. In addition to the bulleted-notes core functionality, you can also ask the same question of the entire book, one chunk at a time.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6305/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/803
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/803/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/803/comments
|
https://api.github.com/repos/ollama/ollama/issues/803/events
|
https://github.com/ollama/ollama/issues/803
| 1,945,048,182
|
I_kwDOJ0Z1Ps5z7xR2
| 803
|
Feature request: pull multiple models with ollama pull
|
{
"login": "rickknowles-cognitant",
"id": 37247203,
"node_id": "MDQ6VXNlcjM3MjQ3MjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/37247203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rickknowles-cognitant",
"html_url": "https://github.com/rickknowles-cognitant",
"followers_url": "https://api.github.com/users/rickknowles-cognitant/followers",
"following_url": "https://api.github.com/users/rickknowles-cognitant/following{/other_user}",
"gists_url": "https://api.github.com/users/rickknowles-cognitant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rickknowles-cognitant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rickknowles-cognitant/subscriptions",
"organizations_url": "https://api.github.com/users/rickknowles-cognitant/orgs",
"repos_url": "https://api.github.com/users/rickknowles-cognitant/repos",
"events_url": "https://api.github.com/users/rickknowles-cognitant/events{/privacy}",
"received_events_url": "https://api.github.com/users/rickknowles-cognitant/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-10-16T11:59:55
| 2024-09-16T13:09:21
| 2023-10-25T19:46:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Would it be possible to request a feature allowing you to do the following on the command line:
```ollama pull mistral falcon orca-mini```
instead of having to do:
```
ollama pull mistral
ollama pull falcon
ollama pull orca-mini
```
Not a huge deal, but this sort of approach feels fairly natural for anyone doing dev-ops or scripting-heavy deployments. Thanks
(EDIT: at the moment it simply ignores the falcon and orca-mini in the first example and reports success, which is arguably a small bug)
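In the meantime, a trivial wrapper gives the same effect by shelling out to the existing single-model `ollama pull`; a minimal sketch:
```python
import subprocess

for model in ["mistral", "falcon", "orca-mini"]:
    # Pull each model in turn; check=True aborts on the first failure.
    subprocess.run(["ollama", "pull", model], check=True)
```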
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/803/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/803/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/28
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/28/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/28/comments
|
https://api.github.com/repos/ollama/ollama/issues/28/events
|
https://github.com/ollama/ollama/issues/28
| 1,783,019,757
|
I_kwDOJ0Z1Ps5qRrjt
| 28
|
autocomplete for `llama run`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 0
| 2023-06-30T18:56:07
| 2023-07-01T21:51:54
| 2023-07-01T21:51:54
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Example: `ollama run or<tab>` should show orca, etc.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/28/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/28/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2629
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2629/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2629/comments
|
https://api.github.com/repos/ollama/ollama/issues/2629/events
|
https://github.com/ollama/ollama/pull/2629
| 2,146,346,567
|
PR_kwDOJ0Z1Ps5ngEqZ
| 2,629
|
Configure `OLLAMA_ORIGINS` via settings.json
|
{
"login": "lovincyrus",
"id": 1021101,
"node_id": "MDQ6VXNlcjEwMjExMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1021101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lovincyrus",
"html_url": "https://github.com/lovincyrus",
"followers_url": "https://api.github.com/users/lovincyrus/followers",
"following_url": "https://api.github.com/users/lovincyrus/following{/other_user}",
"gists_url": "https://api.github.com/users/lovincyrus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lovincyrus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lovincyrus/subscriptions",
"organizations_url": "https://api.github.com/users/lovincyrus/orgs",
"repos_url": "https://api.github.com/users/lovincyrus/repos",
"events_url": "https://api.github.com/users/lovincyrus/events{/privacy}",
"received_events_url": "https://api.github.com/users/lovincyrus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-21T10:12:44
| 2024-08-05T20:00:35
| 2024-08-05T20:00:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2629",
"html_url": "https://github.com/ollama/ollama/pull/2629",
"diff_url": "https://github.com/ollama/ollama/pull/2629.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2629.patch",
"merged_at": null
}
|
Took a stab at these issues https://github.com/ollama/ollama/issues/2335, https://github.com/ollama/ollama/issues/2369
Added settings menu item in the Electron tray application. Also, hoisted the `OLLAMA_ORIGINS` environment variable to the settings.json file, ensuring routes.go retrieves origins from the file rather than the environment variable.
Open for suggestions.


|
{
"login": "lovincyrus",
"id": 1021101,
"node_id": "MDQ6VXNlcjEwMjExMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1021101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lovincyrus",
"html_url": "https://github.com/lovincyrus",
"followers_url": "https://api.github.com/users/lovincyrus/followers",
"following_url": "https://api.github.com/users/lovincyrus/following{/other_user}",
"gists_url": "https://api.github.com/users/lovincyrus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lovincyrus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lovincyrus/subscriptions",
"organizations_url": "https://api.github.com/users/lovincyrus/orgs",
"repos_url": "https://api.github.com/users/lovincyrus/repos",
"events_url": "https://api.github.com/users/lovincyrus/events{/privacy}",
"received_events_url": "https://api.github.com/users/lovincyrus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2629/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5247
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5247/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5247/comments
|
https://api.github.com/repos/ollama/ollama/issues/5247/events
|
https://github.com/ollama/ollama/issues/5247
| 2,369,087,234
|
I_kwDOJ0Z1Ps6NNWcC
| 5,247
|
Recoll index RAG
|
{
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.github.com/users/AncientMystic/followers",
"following_url": "https://api.github.com/users/AncientMystic/following{/other_user}",
"gists_url": "https://api.github.com/users/AncientMystic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AncientMystic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AncientMystic/subscriptions",
"organizations_url": "https://api.github.com/users/AncientMystic/orgs",
"repos_url": "https://api.github.com/users/AncientMystic/repos",
"events_url": "https://api.github.com/users/AncientMystic/events{/privacy}",
"received_events_url": "https://api.github.com/users/AncientMystic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-06-24T02:56:33
| 2024-06-30T14:34:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Would it be possible, in any way, to use the full-text database index created by the software Recoll with ollama?
Recoll indexes an extremely wide variety of text documents into a database that is then searchable through the software, effectively turning your documents into a search engine. It is one of my favourite pieces of software, along with ollama.
While it would be an advanced feature, could ollama be linked to Recoll so that it either digests the index as a RAG source to enhance model responses (potentially significantly), or uses Recoll's search to automatically pull a list of relevant files based on keywords and grounds its responses in the documents Recoll suggests? (A rough glue sketch follows below.)
Recoll's text database is many times smaller than source documents such as PDFs, and since it is plain text it seems to be the fastest form for RAG ingestion. This would be a way to enhance responses easily with minimal resource usage; given the database's small size, those with large amounts of RAM could even keep the whole thing in memory. It would also allow quick and easy model enhancement, making even fairly small, low-VRAM models far more effective and efficient.
What would then matter is how well a model can formulate responses, not how much data is packed into its weights (which is difficult, and each response can go either way). We do not need to keep the whole internet inside a model; we merely need a model good enough to formulate responses, given access to a wide range of documents (which can be more dependable than random data from the internet) that it can reference to pull data for a response.
(P.S. Recoll has a web UI, GUI, and CLI, is GPL, and works on macOS, Linux, and Windows, and it uses fairly common tools to do what it does, which should make it easier for ollama to interact with it or its database.
Something as simple as an environment variable pointing to Recoll's database location, plus a way to enable it via a string in the Modelfile or automatically, would probably work great.)
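To make the idea concrete, here is a rough glue sketch (my own illustration, not a proposed ollama API). It assumes Recoll's `recollq` command-line query tool is installed; the `-b` flag (print bare result URLs) and the two-step prompt assembly are assumptions on my part:
```python
import subprocess
import requests

def recoll_context(query, limit=3):
    """Fetch text from the top documents Recoll's index matches for `query`."""
    out = subprocess.run(["recollq", "-b", query],
                         capture_output=True, text=True).stdout
    paths = [u.removeprefix("file://") for u in out.splitlines()
             if u.startswith("file://")]
    chunks = []
    for path in paths[:limit]:
        try:
            with open(path, errors="ignore") as f:
                chunks.append(f.read()[:2000])  # crude per-document truncation
        except OSError:
            pass
    return "\n---\n".join(chunks)

def ask(question, model="llama2"):
    """Ground an Ollama completion in whatever Recoll found."""
    prompt = f"Context:\n{recoll_context(question)}\n\nQuestion: {question}"
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": model, "prompt": prompt, "stream": False})
    return r.json()["response"]
```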
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5247/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5247/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6500
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6500/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6500/comments
|
https://api.github.com/repos/ollama/ollama/issues/6500/events
|
https://github.com/ollama/ollama/issues/6500
| 2,485,184,552
|
I_kwDOJ0Z1Ps6UIOgo
| 6,500
|
ibm-granite/granite-20b-functioncalling
|
{
"login": "andsty",
"id": 138453484,
"node_id": "U_kgDOCECh7A",
"avatar_url": "https://avatars.githubusercontent.com/u/138453484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andsty",
"html_url": "https://github.com/andsty",
"followers_url": "https://api.github.com/users/andsty/followers",
"following_url": "https://api.github.com/users/andsty/following{/other_user}",
"gists_url": "https://api.github.com/users/andsty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andsty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andsty/subscriptions",
"organizations_url": "https://api.github.com/users/andsty/orgs",
"repos_url": "https://api.github.com/users/andsty/repos",
"events_url": "https://api.github.com/users/andsty/events{/privacy}",
"received_events_url": "https://api.github.com/users/andsty/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-08-25T11:09:21
| 2024-10-24T03:37:53
| 2024-10-24T03:37:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can someone please add ibm-granite/granite-20b-functioncalling to the ollama library?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6500/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/851
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/851/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/851/comments
|
https://api.github.com/repos/ollama/ollama/issues/851/events
|
https://github.com/ollama/ollama/issues/851
| 1,954,458,776
|
I_kwDOJ0Z1Ps50fqyY
| 851
|
macOS: Installing CLI from DMG should NOT require administrator privileges
|
{
"login": "coolaj86",
"id": 122831,
"node_id": "MDQ6VXNlcjEyMjgzMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/122831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coolaj86",
"html_url": "https://github.com/coolaj86",
"followers_url": "https://api.github.com/users/coolaj86/followers",
"following_url": "https://api.github.com/users/coolaj86/following{/other_user}",
"gists_url": "https://api.github.com/users/coolaj86/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coolaj86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coolaj86/subscriptions",
"organizations_url": "https://api.github.com/users/coolaj86/orgs",
"repos_url": "https://api.github.com/users/coolaj86/repos",
"events_url": "https://api.github.com/users/coolaj86/events{/privacy}",
"received_events_url": "https://api.github.com/users/coolaj86/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-10-20T14:54:48
| 2024-06-25T06:04:52
| 2023-10-25T19:07:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
As a matter of security, would you adjust the Mac installer to install to the standard user location of `~/.local/bin/` and not require administrator privileges?
I'm not that familiar with DMG installers, but I can provide shell script examples (or write whatever is needed in full) for ensuring that the executable is installed properly, with the correct PATH across the various shells (sh, bash, zsh, fish, etc.), without requiring admin privileges.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/851/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/851/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/7812
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7812/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7812/comments
|
https://api.github.com/repos/ollama/ollama/issues/7812/events
|
https://github.com/ollama/ollama/issues/7812
| 2,687,186,734
|
I_kwDOJ0Z1Ps6gKzcu
| 7,812
|
fetching a list of available models for download?
|
{
"login": "itsPreto",
"id": 45348368,
"node_id": "MDQ6VXNlcjQ1MzQ4MzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/45348368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itsPreto",
"html_url": "https://github.com/itsPreto",
"followers_url": "https://api.github.com/users/itsPreto/followers",
"following_url": "https://api.github.com/users/itsPreto/following{/other_user}",
"gists_url": "https://api.github.com/users/itsPreto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itsPreto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itsPreto/subscriptions",
"organizations_url": "https://api.github.com/users/itsPreto/orgs",
"repos_url": "https://api.github.com/users/itsPreto/repos",
"events_url": "https://api.github.com/users/itsPreto/events{/privacy}",
"received_events_url": "https://api.github.com/users/itsPreto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-24T06:19:26
| 2024-11-24T21:09:00
| 2024-11-24T21:09:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is there any way to fetch a list of models from the ollama registry or something?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7812/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7812/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/184
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/184/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/184/comments
|
https://api.github.com/repos/ollama/ollama/issues/184/events
|
https://github.com/ollama/ollama/issues/184
| 1,817,201,183
|
I_kwDOJ0Z1Ps5sUEof
| 184
|
Dictionary of common errors
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-07-23T16:55:06
| 2023-09-07T11:18:48
| 2023-09-07T11:18:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ideally our errors should make sense, but sometimes that's hard to figure out. Perhaps we should also have a dictionary or glossary of common errors and how to solve them.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/184/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6381
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6381/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6381/comments
|
https://api.github.com/repos/ollama/ollama/issues/6381/events
|
https://github.com/ollama/ollama/pull/6381
| 2,469,052,749
|
PR_kwDOJ0Z1Ps54g6nC
| 6,381
|
fix: Add tooltip to system tray icon
|
{
"login": "eust-w",
"id": 39115651,
"node_id": "MDQ6VXNlcjM5MTE1NjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/39115651?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eust-w",
"html_url": "https://github.com/eust-w",
"followers_url": "https://api.github.com/users/eust-w/followers",
"following_url": "https://api.github.com/users/eust-w/following{/other_user}",
"gists_url": "https://api.github.com/users/eust-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eust-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eust-w/subscriptions",
"organizations_url": "https://api.github.com/users/eust-w/orgs",
"repos_url": "https://api.github.com/users/eust-w/repos",
"events_url": "https://api.github.com/users/eust-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/eust-w/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-15T22:00:34
| 2024-08-15T22:31:15
| 2024-08-15T22:31:15
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6381",
"html_url": "https://github.com/ollama/ollama/pull/6381",
"diff_url": "https://github.com/ollama/ollama/pull/6381.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6381.patch",
"merged_at": "2024-08-15T22:31:15"
}
|
- Updated setIcon method to include tooltip text for the system tray icon.
- Added NIF_TIP flag and set the tooltip text using UTF16 encoding.
Resolves: #6372
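For context, a minimal Go sketch of the mechanism, assuming `golang.org/x/sys/windows` (Windows-only); the struct below is a trimmed stand-in for the Win32 NOTIFYICONDATA, not the actual PR code:
```go
package main

import (
	"fmt"

	"golang.org/x/sys/windows"
)

// NIF_TIP tells Shell_NotifyIcon that the szTip field is valid.
const NIF_TIP = 0x00000004

// notifyIconData is a trimmed stand-in for NOTIFYICONDATA; only the
// tooltip-related fields are shown.
type notifyIconData struct {
	uFlags uint32
	szTip  [128]uint16 // fixed-size UTF-16 buffer, as in the Win32 API
}

// setTooltip encodes text as NUL-terminated UTF-16, copies it into szTip,
// and marks the field as valid via NIF_TIP.
func setTooltip(nid *notifyIconData, text string) error {
	u16, err := windows.UTF16FromString(text)
	if err != nil {
		return err
	}
	if len(u16) > len(nid.szTip) {
		// Truncate; the zero-valued array keeps a trailing NUL terminator.
		u16 = u16[:len(nid.szTip)-1]
	}
	copy(nid.szTip[:], u16)
	nid.uFlags |= NIF_TIP
	return nil
}

func main() {
	var nid notifyIconData
	if err := setTooltip(&nid, "Ollama"); err != nil {
		panic(err)
	}
	fmt.Printf("flags=%#x, tip starts with %v\n", nid.uFlags, nid.szTip[:6])
}
```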
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6381/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8502
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8502/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8502/comments
|
https://api.github.com/repos/ollama/ollama/issues/8502/events
|
https://github.com/ollama/ollama/issues/8502
| 2,799,404,429
|
I_kwDOJ0Z1Ps6m24WN
| 8,502
|
Requesting support for DeepSeek-R1-Distill series models
|
{
"login": "CberYellowstone",
"id": 37031767,
"node_id": "MDQ6VXNlcjM3MDMxNzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/37031767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CberYellowstone",
"html_url": "https://github.com/CberYellowstone",
"followers_url": "https://api.github.com/users/CberYellowstone/followers",
"following_url": "https://api.github.com/users/CberYellowstone/following{/other_user}",
"gists_url": "https://api.github.com/users/CberYellowstone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CberYellowstone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CberYellowstone/subscriptions",
"organizations_url": "https://api.github.com/users/CberYellowstone/orgs",
"repos_url": "https://api.github.com/users/CberYellowstone/repos",
"events_url": "https://api.github.com/users/CberYellowstone/events{/privacy}",
"received_events_url": "https://api.github.com/users/CberYellowstone/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 7
| 2025-01-20T14:23:40
| 2025-01-24T09:29:26
| 2025-01-24T09:29:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

source: https://github.com/deepseek-ai/DeepSeek-R1#deepseek-r1-distill-models
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8502/reactions",
"total_count": 41,
"+1": 17,
"-1": 0,
"laugh": 0,
"hooray": 5,
"confused": 0,
"heart": 6,
"rocket": 6,
"eyes": 7
}
|
https://api.github.com/repos/ollama/ollama/issues/8502/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3556
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3556/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3556/comments
|
https://api.github.com/repos/ollama/ollama/issues/3556/events
|
https://github.com/ollama/ollama/issues/3556
| 2,233,381,743
|
I_kwDOJ0Z1Ps6FHrNv
| 3,556
|
CodeGemma by Google
|
{
"login": "smortezah",
"id": 19313488,
"node_id": "MDQ6VXNlcjE5MzEzNDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19313488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smortezah",
"html_url": "https://github.com/smortezah",
"followers_url": "https://api.github.com/users/smortezah/followers",
"following_url": "https://api.github.com/users/smortezah/following{/other_user}",
"gists_url": "https://api.github.com/users/smortezah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smortezah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smortezah/subscriptions",
"organizations_url": "https://api.github.com/users/smortezah/orgs",
"repos_url": "https://api.github.com/users/smortezah/repos",
"events_url": "https://api.github.com/users/smortezah/events{/privacy}",
"received_events_url": "https://api.github.com/users/smortezah/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-04-09T12:56:18
| 2024-04-10T18:21:21
| 2024-04-09T13:44:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
CodeGemma by Google has just been released:
https://huggingface.co/collections/google/codegemma-release-66152ac7b683e2667abdee11
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3556/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3161
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3161/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3161/comments
|
https://api.github.com/repos/ollama/ollama/issues/3161/events
|
https://github.com/ollama/ollama/pull/3161
| 2,187,778,939
|
PR_kwDOJ0Z1Ps5ptPix
| 3,161
|
llm,readline: use errors.Is instead of simple == check
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-15T05:54:24
| 2024-03-15T14:14:13
| 2024-03-15T14:14:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3161",
"html_url": "https://github.com/ollama/ollama/pull/3161",
"diff_url": "https://github.com/ollama/ollama/pull/3161.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3161.patch",
"merged_at": "2024-03-15T14:14:12"
}
|
This fixes some brittle, simple equality checks to use errors.Is. Since go1.13, errors.Is is the idiomatic way to check for errors.
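For illustration, a minimal runnable example (not code from this PR) of why a plain `==` check is brittle once errors are wrapped:
```go
package main

import (
	"errors"
	"fmt"
	"io"
)

func main() {
	// Wrapping with %w (common since go1.13) produces a new error value,
	// so identity comparison against the sentinel no longer matches.
	err := fmt.Errorf("read failed: %w", io.EOF)

	fmt.Println(err == io.EOF)          // false: a different value
	fmt.Println(errors.Is(err, io.EOF)) // true: errors.Is walks the wrap chain
}
```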
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3161/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6562
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6562/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6562/comments
|
https://api.github.com/repos/ollama/ollama/issues/6562/events
|
https://github.com/ollama/ollama/pull/6562
| 2,495,539,586
|
PR_kwDOJ0Z1Ps55422X
| 6,562
|
remove any unneeded build artifacts
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-29T20:41:43
| 2024-08-30T16:40:52
| 2024-08-30T16:40:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6562",
"html_url": "https://github.com/ollama/ollama/pull/6562",
"diff_url": "https://github.com/ollama/ollama/pull/6562.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6562.patch",
"merged_at": "2024-08-30T16:40:50"
}
|
The Metal lib is embedded, so the file isn't necessary. This shaves off roughly 50KB.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6562/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3169
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3169/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3169/comments
|
https://api.github.com/repos/ollama/ollama/issues/3169/events
|
https://github.com/ollama/ollama/pull/3169
| 2,189,022,296
|
PR_kwDOJ0Z1Ps5pxkU1
| 3,169
|
feat: timeout between token generation
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-15T16:22:28
| 2024-05-09T18:19:02
| 2024-05-09T18:19:02
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3169",
"html_url": "https://github.com/ollama/ollama/pull/3169",
"diff_url": "https://github.com/ollama/ollama/pull/3169.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3169.patch",
"merged_at": null
}
|
- if 30 seconds pass since the last token was generated, abort the request
- stop the llama thread to flush any accumulated context
This is an attempt to mitigate server hangs as seen in #2805. It is not a complete solution since we still need to address the root cause of the hangs, but it will make them recoverable (a minimal sketch of the watchdog pattern follows the TODO list).
TODO:
- [ ] reproduce hang and validate this allows the server to recover
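For context, a minimal Go sketch of the described watchdog pattern, assuming tokens arrive on a channel; this is illustrative only, not the actual server code:
```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// generate drains tokens and aborts if no token arrives within timeout,
// re-arming the deadline after every token.
func generate(ctx context.Context, tokens <-chan string, timeout time.Duration) error {
	timer := time.NewTimer(timeout)
	defer timer.Stop()
	for {
		select {
		case tok, ok := <-tokens:
			if !ok {
				return nil // generation finished normally
			}
			fmt.Print(tok)
			// A token arrived: stop, drain if already fired, and re-arm.
			if !timer.Stop() {
				<-timer.C
			}
			timer.Reset(timeout)
		case <-timer.C:
			return errors.New("token generation stalled: aborting request")
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

func main() {
	tokens := make(chan string)
	go func() {
		defer close(tokens)
		for _, t := range []string{"hello", ",", " ", "world", "\n"} {
			tokens <- t
			time.Sleep(10 * time.Millisecond)
		}
	}()
	if err := generate(context.Background(), tokens, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```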
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3169/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2685
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2685/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2685/comments
|
https://api.github.com/repos/ollama/ollama/issues/2685/events
|
https://github.com/ollama/ollama/issues/2685
| 2,149,436,210
|
I_kwDOJ0Z1Ps6AHcsy
| 2,685
|
v0.1.26 and v0.1.25 do not use AMD GPU on Linux
|
{
"login": "TimTheBig",
"id": 132001783,
"node_id": "U_kgDOB94v9w",
"avatar_url": "https://avatars.githubusercontent.com/u/132001783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimTheBig",
"html_url": "https://github.com/TimTheBig",
"followers_url": "https://api.github.com/users/TimTheBig/followers",
"following_url": "https://api.github.com/users/TimTheBig/following{/other_user}",
"gists_url": "https://api.github.com/users/TimTheBig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TimTheBig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimTheBig/subscriptions",
"organizations_url": "https://api.github.com/users/TimTheBig/orgs",
"repos_url": "https://api.github.com/users/TimTheBig/repos",
"events_url": "https://api.github.com/users/TimTheBig/events{/privacy}",
"received_events_url": "https://api.github.com/users/TimTheBig/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 8
| 2024-02-22T16:17:45
| 2024-02-23T17:09:21
| 2024-02-23T17:09:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
v0.1.26 and v0.1.25 do not use the GPU (7900 XTX) on [Nobara Linux 39](https://nobaraproject.org) when I use the install script. https://github.com/ollama/ollama/issues/2502#issuecomment-1949514130
|
{
"login": "TimTheBig",
"id": 132001783,
"node_id": "U_kgDOB94v9w",
"avatar_url": "https://avatars.githubusercontent.com/u/132001783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimTheBig",
"html_url": "https://github.com/TimTheBig",
"followers_url": "https://api.github.com/users/TimTheBig/followers",
"following_url": "https://api.github.com/users/TimTheBig/following{/other_user}",
"gists_url": "https://api.github.com/users/TimTheBig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TimTheBig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimTheBig/subscriptions",
"organizations_url": "https://api.github.com/users/TimTheBig/orgs",
"repos_url": "https://api.github.com/users/TimTheBig/repos",
"events_url": "https://api.github.com/users/TimTheBig/events{/privacy}",
"received_events_url": "https://api.github.com/users/TimTheBig/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2685/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8116
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8116/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8116/comments
|
https://api.github.com/repos/ollama/ollama/issues/8116/events
|
https://github.com/ollama/ollama/issues/8116
| 2,742,123,379
|
I_kwDOJ0Z1Ps6jcXtz
| 8,116
|
doc to use go example and apis
|
{
"login": "malv-c",
"id": 19170213,
"node_id": "MDQ6VXNlcjE5MTcwMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/19170213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/malv-c",
"html_url": "https://github.com/malv-c",
"followers_url": "https://api.github.com/users/malv-c/followers",
"following_url": "https://api.github.com/users/malv-c/following{/other_user}",
"gists_url": "https://api.github.com/users/malv-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/malv-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/malv-c/subscriptions",
"organizations_url": "https://api.github.com/users/malv-c/orgs",
"repos_url": "https://api.github.com/users/malv-c/repos",
"events_url": "https://api.github.com/users/malv-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/malv-c/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-12-16T11:30:11
| 2024-12-23T08:13:00
| 2024-12-23T08:13:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I don't see any documentation on how to use this,
e.g.: https://github.com/ollama/ollama/blob/main/examples/go-http-generate/main.go
```go
import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)
```
Where is the documentation?
Was it used for llama3.2 before? It now refuses to open the address.
With "os", can I use Linux software?
If not, how can I enable the LLM to do it?
Thanks all
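For reference, a minimal, self-contained sketch of calling the documented `/api/generate` endpoint from Go, assuming a server on the default port; the model name and prompt are just placeholders:
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Request body for the documented /api/generate endpoint.
	payload, err := json.Marshal(map[string]any{
		"model":  "llama3.2",
		"prompt": "Why is the sky blue?",
		"stream": false,
	})
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err) // e.g. connection refused if the server isn't running
	}
	defer resp.Body.Close()
	out, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```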
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8116/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/871
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/871/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/871/comments
|
https://api.github.com/repos/ollama/ollama/issues/871/events
|
https://github.com/ollama/ollama/pull/871
| 1,955,599,796
|
PR_kwDOJ0Z1Ps5dc1Bf
| 871
|
fix: Add support for legacy CPU (no AVX2/FMA) on Linux
|
{
"login": "reynaldichernando",
"id": 12949382,
"node_id": "MDQ6VXNlcjEyOTQ5Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/12949382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reynaldichernando",
"html_url": "https://github.com/reynaldichernando",
"followers_url": "https://api.github.com/users/reynaldichernando/followers",
"following_url": "https://api.github.com/users/reynaldichernando/following{/other_user}",
"gists_url": "https://api.github.com/users/reynaldichernando/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reynaldichernando/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reynaldichernando/subscriptions",
"organizations_url": "https://api.github.com/users/reynaldichernando/orgs",
"repos_url": "https://api.github.com/users/reynaldichernando/repos",
"events_url": "https://api.github.com/users/reynaldichernando/events{/privacy}",
"received_events_url": "https://api.github.com/users/reynaldichernando/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-10-21T17:47:40
| 2023-10-27T19:31:20
| 2023-10-27T19:31:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/871",
"html_url": "https://github.com/ollama/ollama/pull/871",
"diff_url": "https://github.com/ollama/ollama/pull/871.diff",
"patch_url": "https://github.com/ollama/ollama/pull/871.patch",
"merged_at": null
}
|
Fixes the illegal instruction error when running on a CPU without AVX2 or FMA, by building another set of ollama runners with `-DLLAMA_AVX2=off -DLLAMA_FMA=off`.
By default, when running cmake for ggml/gguf, these arguments are set to ON. Setting them to OFF allows older CPUs that don't have these instructions to run llama.cpp.
fixes #644
Some sources for the AVX2 and FMA compatibility:
- [CPUs_with_AVX2](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2)
- [CPUs_with_FMA3](https://en.wikipedia.org/wiki/FMA_instruction_set#CPUs_with_FMA3)
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/871/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1587
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1587/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1587/comments
|
https://api.github.com/repos/ollama/ollama/issues/1587/events
|
https://github.com/ollama/ollama/issues/1587
| 2,047,499,153
|
I_kwDOJ0Z1Ps56CluR
| 1,587
|
Missing "ollama avail" command to show available models
|
{
"login": "dennisorlando",
"id": 47061464,
"node_id": "MDQ6VXNlcjQ3MDYxNDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/47061464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dennisorlando",
"html_url": "https://github.com/dennisorlando",
"followers_url": "https://api.github.com/users/dennisorlando/followers",
"following_url": "https://api.github.com/users/dennisorlando/following{/other_user}",
"gists_url": "https://api.github.com/users/dennisorlando/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dennisorlando/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennisorlando/subscriptions",
"organizations_url": "https://api.github.com/users/dennisorlando/orgs",
"repos_url": "https://api.github.com/users/dennisorlando/repos",
"events_url": "https://api.github.com/users/dennisorlando/events{/privacy}",
"received_events_url": "https://api.github.com/users/dennisorlando/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-18T21:37:21
| 2024-01-10T15:52:32
| 2023-12-19T18:54:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Self-descriptive; I have to go to this GitHub page to look at what models are available, which appears not to be all of them.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1587/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2254
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2254/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2254/comments
|
https://api.github.com/repos/ollama/ollama/issues/2254/events
|
https://github.com/ollama/ollama/issues/2254
| 2,105,502,080
|
I_kwDOJ0Z1Ps59f2mA
| 2,254
|
No response from ollama
|
{
"login": "caibirdme",
"id": 8054803,
"node_id": "MDQ6VXNlcjgwNTQ4MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8054803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caibirdme",
"html_url": "https://github.com/caibirdme",
"followers_url": "https://api.github.com/users/caibirdme/followers",
"following_url": "https://api.github.com/users/caibirdme/following{/other_user}",
"gists_url": "https://api.github.com/users/caibirdme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caibirdme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caibirdme/subscriptions",
"organizations_url": "https://api.github.com/users/caibirdme/orgs",
"repos_url": "https://api.github.com/users/caibirdme/repos",
"events_url": "https://api.github.com/users/caibirdme/events{/privacy}",
"received_events_url": "https://api.github.com/users/caibirdme/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2024-01-29T13:27:18
| 2024-09-22T19:57:31
| 2024-02-20T04:09:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
No response from ollama
```
curl -X POST -d '{"model":"llama2", "messages":[{"role":"user","content":"why the weather in winter is so cold?"}], "stream":false}' 127.0.0.1:11434/api/chat
```
Here's the `ollama list`
```
llama2:latest 78e26419b446 3.8 GB 4 hours ago
llava:latest cd3274b81a85 4.5 GB 56 minutes ago
```
And when I use top to check CPU & memory usage, ollama doesn't seem to be working; CPU and memory usage are very low.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2254/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3977
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3977/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3977/comments
|
https://api.github.com/repos/ollama/ollama/issues/3977/events
|
https://github.com/ollama/ollama/issues/3977
| 2,266,955,135
|
I_kwDOJ0Z1Ps6HHv1_
| 3,977
|
api/create inserts escape quotes \" for the last PARAMETER stop.
|
{
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigkim/followers",
"following_url": "https://api.github.com/users/chigkim/following{/other_user}",
"gists_url": "https://api.github.com/users/chigkim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chigkim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chigkim/subscriptions",
"organizations_url": "https://api.github.com/users/chigkim/orgs",
"repos_url": "https://api.github.com/users/chigkim/repos",
"events_url": "https://api.github.com/users/chigkim/events{/privacy}",
"received_events_url": "https://api.github.com/users/chigkim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-27T10:31:28
| 2024-05-03T14:51:20
| 2024-05-03T00:04:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
If you run the following Python code to copy llama3 to test, it creates a modelfile with escaped quotes for the last PARAMETER stop.
If I use `ollama create test -f test.modelfile`, it works fine.
At first I thought [ollama/ollama-python](https://github.com/ollama/ollama-python/issues/136) was the problem, but I tried with just the Python requests library, and it had the same problem.
```python
from ollama import Client
import re
client = Client(host='http://localhost:11434')
modelfile = client.show('llama3')['modelfile']
print('Original modelfile:\n', modelfile)
from_str = re.search('# (FROM.*?\n)', modelfile)[1]
modelfile = re.sub('FROM /Users.*?\n', from_str, modelfile)
print('Modified modelfile:\n', modelfile)
client.create('test', modelfile=modelfile, stream=False)
response = client.generate(model='test', prompt='hello!')
print('Response:\n', response['response'])
modelfile = client.show('test')['modelfile']
print('new modelfile:\n', modelfile)
client.delete('test')
```
Here's the output.
```bash
Original modelfile:
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM llama3:latest
FROM /Users/cgk/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
PARAMETER num_keep 24
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
Modified modelfile:
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM llama3:latest
FROM llama3:latest
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
PARAMETER num_keep 24
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
Response:
Hello there! It's nice to meet you. Is there something I can help you with, or would you like to chat?<|eot_id|>
new modelfile:
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM test:latest
FROM llama3:latest
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
PARAMETER num_keep 24
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "\"<|eot_id|>\""
```
If you look at the last line, there are escaped quotes \". I'm not sure why only the last line has that.
Also, the response string ends with <|eot_id|> because the last PARAMETER stop string in the new modelfile doesn't work.
The following Python code, which uses the requests library, has the same problem.
```python
import requests
url ='http://localhost:11434/api/create'
data = {'name': 'test', 'modelfile': modelfile, 'stream': False}
response = requests.post(url, json=data)
```
Could you look into this?
Thanks so much!
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3977/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4401
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4401/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4401/comments
|
https://api.github.com/repos/ollama/ollama/issues/4401/events
|
https://github.com/ollama/ollama/pull/4401
| 2,292,610,596
|
PR_kwDOJ0Z1Ps5vP2UJ
| 4,401
|
update llama.cpp submodule to support jina embeddings v2
|
{
"login": "JoanFM",
"id": 19825685,
"node_id": "MDQ6VXNlcjE5ODI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/19825685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoanFM",
"html_url": "https://github.com/JoanFM",
"followers_url": "https://api.github.com/users/JoanFM/followers",
"following_url": "https://api.github.com/users/JoanFM/following{/other_user}",
"gists_url": "https://api.github.com/users/JoanFM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoanFM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoanFM/subscriptions",
"organizations_url": "https://api.github.com/users/JoanFM/orgs",
"repos_url": "https://api.github.com/users/JoanFM/repos",
"events_url": "https://api.github.com/users/JoanFM/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoanFM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-05-13T11:56:25
| 2024-05-14T06:41:40
| 2024-05-14T06:41:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4401",
"html_url": "https://github.com/ollama/ollama/pull/4401",
"diff_url": "https://github.com/ollama/ollama/pull/4401.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4401.patch",
"merged_at": null
}
|
Update the `llama.cpp` submodule so that `ollama` can run `Jina Embeddings V2`, now that it has been added to `llama.cpp`.
|
{
"login": "JoanFM",
"id": 19825685,
"node_id": "MDQ6VXNlcjE5ODI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/19825685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoanFM",
"html_url": "https://github.com/JoanFM",
"followers_url": "https://api.github.com/users/JoanFM/followers",
"following_url": "https://api.github.com/users/JoanFM/following{/other_user}",
"gists_url": "https://api.github.com/users/JoanFM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoanFM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoanFM/subscriptions",
"organizations_url": "https://api.github.com/users/JoanFM/orgs",
"repos_url": "https://api.github.com/users/JoanFM/repos",
"events_url": "https://api.github.com/users/JoanFM/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoanFM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4401/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7509
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7509/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7509/comments
|
https://api.github.com/repos/ollama/ollama/issues/7509/events
|
https://github.com/ollama/ollama/issues/7509
| 2,635,375,713
|
I_kwDOJ0Z1Ps6dFKRh
| 7,509
|
Support partial loads of LLaMA 3.2 Vision 11b on 6G GPUs
|
{
"login": "Romultra",
"id": 65618486,
"node_id": "MDQ6VXNlcjY1NjE4NDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/65618486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Romultra",
"html_url": "https://github.com/Romultra",
"followers_url": "https://api.github.com/users/Romultra/followers",
"following_url": "https://api.github.com/users/Romultra/following{/other_user}",
"gists_url": "https://api.github.com/users/Romultra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Romultra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Romultra/subscriptions",
"organizations_url": "https://api.github.com/users/Romultra/orgs",
"repos_url": "https://api.github.com/users/Romultra/repos",
"events_url": "https://api.github.com/users/Romultra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Romultra/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 12
| 2024-11-05T12:54:20
| 2025-01-12T01:11:03
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**Description:**
I encountered an issue where the **LLaMA 3.2 Vision 11b** model loads entirely in CPU RAM, without utilizing the GPU memory as expected. The issue occurs on my Windows-based laptop with 6GB VRAM, where models that exceed GPU memory capacity should load the rest into system RAM while still leveraging the GPU.
**Steps to Reproduce:**
1. Run **LLaMA 3.2 Vision 11b** with `ollama` on a system with limited VRAM (6 GB in my case).
2. Check the memory allocation using the `ollama ps` command.
**Expected Behavior:**
When running models larger than available VRAM, the model should partially load into VRAM and utilize system RAM for the remainder. This behavior works as intended for other models (e.g., **Llama 3.1**), which utilize the GPU and offload excess data to RAM.
**Actual Behavior:**
When running **Llama 3.2 Vision**, the entire model loads into the CPU RAM, as shown in the output of the `ollama ps` command. Additionally, the Task Manager indicates no significant GPU or VRAM usage, confirming that the model is not utilizing the GPU at all.
**Laptop Specifications:**
- **CPU**: AMD Ryzen 9 7940HS
- **RAM**: 16 GB
- **GPU**: NVIDIA RTX 4050 Mobile 6 GB VRAM
- **Ollama Version**: Pre-release 0.4.0-rc8
**Supporting Evidence:**
1. Screenshot of `ollama ps` showing **LLaMA 3.1** partially loading into VRAM (expected behavior):

2. Screenshot of `ollama ps` showing **LLaMA 3.2 Vision 11b** loaded fully into CPU RAM:

**Further Testing**:
On my **desktop** with higher VRAM (24GB):
**Specs**:
- **Processor**: Ryzen 7 7800X3D
- **Memory**: 64 GB RAM
- **GPU**: NVIDIA RTX 4090 24GB VRAM
- **Ollama Version**: Pre-release 0.4.0-rc8
Running the **LLaMA 3.2 Vision 11b** model on the desktop:
- The model loaded entirely in the GPU VRAM as expected.
- Screenshot of `ollama ps` for this case:

Running the **LLaMA 3.2 Vision 90b** model on the desktop (which exceeds 24GB VRAM):
- The model loaded partially into GPU and partially into CPU RAM, which is correct.
- Screenshot of `ollama ps` for this case:

**Note**: Both machines are running Windows, and GPU drivers are up to date.
**Conclusion:**
The behavior seems specific to running the **LLaMA 3.2 Vision 11b** model on systems with VRAM insufficient to load the entire model, where the expected split between VRAM and RAM doesn't occur.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.0-rc8
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7509/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7509/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7997
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7997/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7997/comments
|
https://api.github.com/repos/ollama/ollama/issues/7997/events
|
https://github.com/ollama/ollama/issues/7997
| 2,725,326,484
|
I_kwDOJ0Z1Ps6icS6U
| 7,997
|
Support loading models from multiple locations
|
{
"login": "i0ntempest",
"id": 16017904,
"node_id": "MDQ6VXNlcjE2MDE3OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/16017904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i0ntempest",
"html_url": "https://github.com/i0ntempest",
"followers_url": "https://api.github.com/users/i0ntempest/followers",
"following_url": "https://api.github.com/users/i0ntempest/following{/other_user}",
"gists_url": "https://api.github.com/users/i0ntempest/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i0ntempest/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i0ntempest/subscriptions",
"organizations_url": "https://api.github.com/users/i0ntempest/orgs",
"repos_url": "https://api.github.com/users/i0ntempest/repos",
"events_url": "https://api.github.com/users/i0ntempest/events{/privacy}",
"received_events_url": "https://api.github.com/users/i0ntempest/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-12-08T15:14:31
| 2024-12-20T21:02:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Model files add up really fast, and my internal disk is nearly full after pulling a few 72b models. It would be great if `OLLAMA_MODELS` could be a colon-separated string with multiple paths. The `pull` and `run` commands would then automatically decide which folder to put new models in, with a switch to override the choice.
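A hypothetical sketch of parsing such a list in Go; `filepath.SplitList` already handles the OS-specific separator (':' on Unix, ';' on Windows). This illustrates the proposal, not existing ollama behavior:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical: treat OLLAMA_MODELS as an OS path list, like PATH.
	dirs := filepath.SplitList(os.Getenv("OLLAMA_MODELS"))
	if len(dirs) == 0 {
		fmt.Println("OLLAMA_MODELS is unset")
		return
	}
	for i, d := range dirs {
		fmt.Printf("model dir %d: %s\n", i, d)
	}
}
```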
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7997/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7997/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6741
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6741/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6741/comments
|
https://api.github.com/repos/ollama/ollama/issues/6741/events
|
https://github.com/ollama/ollama/issues/6741
| 2,518,383,052
|
I_kwDOJ0Z1Ps6WG3nM
| 6,741
|
Llama 3.1 70b 128k context not fitting 96Gb
|
{
"login": "dmatora",
"id": 647062,
"node_id": "MDQ6VXNlcjY0NzA2Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/647062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmatora",
"html_url": "https://github.com/dmatora",
"followers_url": "https://api.github.com/users/dmatora/followers",
"following_url": "https://api.github.com/users/dmatora/following{/other_user}",
"gists_url": "https://api.github.com/users/dmatora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmatora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmatora/subscriptions",
"organizations_url": "https://api.github.com/users/dmatora/orgs",
"repos_url": "https://api.github.com/users/dmatora/repos",
"events_url": "https://api.github.com/users/dmatora/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmatora/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 2
| 2024-09-11T03:38:29
| 2024-09-11T17:32:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Not only does it not fit in 96Gb (only 10 of 81 layers are offloaded), but processing an actual ~128k request crashes with `CUDA error: out of memory` on 160Gb (with all layers offloaded).
As mentioned in https://github.com/ollama/ollama/issues/6279#issuecomment-2342546437, this is obviously a bug.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.10
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6741/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6741/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8518
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8518/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8518/comments
|
https://api.github.com/repos/ollama/ollama/issues/8518/events
|
https://github.com/ollama/ollama/issues/8518
| 2,801,928,484
|
I_kwDOJ0Z1Ps6nAgkk
| 8,518
|
cannot find ROCM files/tools in docker image
|
{
"login": "nicoKoehler",
"id": 53008522,
"node_id": "MDQ6VXNlcjUzMDA4NTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/53008522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicoKoehler",
"html_url": "https://github.com/nicoKoehler",
"followers_url": "https://api.github.com/users/nicoKoehler/followers",
"following_url": "https://api.github.com/users/nicoKoehler/following{/other_user}",
"gists_url": "https://api.github.com/users/nicoKoehler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicoKoehler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicoKoehler/subscriptions",
"organizations_url": "https://api.github.com/users/nicoKoehler/orgs",
"repos_url": "https://api.github.com/users/nicoKoehler/repos",
"events_url": "https://api.github.com/users/nicoKoehler/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicoKoehler/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-21T13:55:49
| 2025-01-21T13:55:49
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was trying to figure out which ROCm version the image was using (I'm trying to get faster-whisper to run in Docker too, so I thought this could be a lead) and could not find anything. Where would those files be? Why aren't any of the typical ROCm files and folders there?
System: Docker on Debian 12.5, 16GB RAM, AMD RX 550 4GB
Docker image: ollama/ollama:rocm
Docker-compose:
```
services:
ollama:
image: ollama/ollama:rocm
container_name: ollama
environment:
OLLAMA_MODELS: /usr/share/ollama
HSA_OVERRIDE_GFX_VERSION: "11.0.0"
HIP_VISIBLE_DEVICES: "0"
devices:
- "/dev/kfd"
- "/dev/dri"
security_opt:
- seccomp:unconfined
cap_add:
- SYS_PTRACE
ipc: host
group_add:
- video
volumes:
- /home/username/.ollama:/root/.ollama
- /home/username/ollama/models:/usr/share/ollama
ports:
- "11434:11434"
```
Running `find / -type d -name "*rocm*" 2>/dev/null` turned up `/usr/lib/ollama/runners/rocm_avx`, but I could not find anything useful there.
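As a diagnostic, one can widen the search from directories to files and match HIP libraries too, since the bundled version is often visible in names like `libhipblas.so.*`. A minimal sketch (not anything shipped with the image; the search roots are assumptions):
```
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	// Walk likely install roots and print anything ROCm/HIP related.
	roots := []string{"/opt", "/usr/lib", "/usr/local"}
	for _, root := range roots {
		filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil {
				return nil // unreadable entries are fine to skip
			}
			name := strings.ToLower(d.Name())
			if strings.Contains(name, "rocm") || strings.Contains(name, "hip") {
				fmt.Println(path)
			}
			return nil
		})
	}
}
```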
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8518/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5826
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5826/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5826/comments
|
https://api.github.com/repos/ollama/ollama/issues/5826/events
|
https://github.com/ollama/ollama/issues/5826
| 2,421,274,073
|
I_kwDOJ0Z1Ps6QUbXZ
| 5,826
|
Azurefile (NFS) causes very slow model loads - Mixtral 22B isn't loaded on an A100 (80GB VRAM)
|
{
"login": "juangon",
"id": 1306127,
"node_id": "MDQ6VXNlcjEzMDYxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1306127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juangon",
"html_url": "https://github.com/juangon",
"followers_url": "https://api.github.com/users/juangon/followers",
"following_url": "https://api.github.com/users/juangon/following{/other_user}",
"gists_url": "https://api.github.com/users/juangon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juangon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juangon/subscriptions",
"organizations_url": "https://api.github.com/users/juangon/orgs",
"repos_url": "https://api.github.com/users/juangon/repos",
"events_url": "https://api.github.com/users/juangon/events{/privacy}",
"received_events_url": "https://api.github.com/users/juangon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 10
| 2024-07-21T07:44:23
| 2024-07-23T07:26:39
| 2024-07-23T07:26:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Trying to load the Mixtral 8x22B model on an A100 GPU as a Kubernetes deployment, but it still isn't loaded after 6 minutes.
The Mistral 7B model loads fine.
Here is the debug log:
```
time=2024-07-21T07:39:07.407Z level=DEBUG source=gpu.go:358 msg="updating system memory data" before.total="216.3 GiB" before.free="212.5 GiB" before.free_swap="0 B" now.total="216.3 GiB" now.free="212.5 GiB" now.free_swap="0 B"
CUDA driver version: 12.4
time=2024-07-21T07:39:07.762Z level=DEBUG source=gpu.go:406 msg="updating cuda memory data" gpu=GPU-4101ce7d-41d5-c2ee-2fe9-927eb4440974 name="NVIDIA A100 80GB PCIe" overhead="0 B" before.total="79.1 GiB" before.free="78.7 GiB" now.total="79.1 GiB" now.free="78.7 GiB" now.used="426.1 MiB"
releasing cuda driver library
time=2024-07-21T07:39:07.768Z level=DEBUG source=sched.go:214 msg="loading first model" model=/root/.ollama/models/blobs/sha256-85bbeb31e9a57b841db2386003d8b057acbe0dce01e1939711cd533ccbc69bca
time=2024-07-21T07:39:07.768Z level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
time=2024-07-21T07:39:07.768Z level=INFO source=sched.go:701 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-85bbeb31e9a57b841db2386003d8b057acbe0dce01e1939711cd533ccbc69bca gpu=GPU-4101ce7d-41d5-c2ee-2fe9-927eb4440974 parallel=4 available=84527415296 required="67.3 GiB"
time=2024-07-21T07:39:07.769Z level=DEBUG source=server.go:100 msg="system memory" total="216.3 GiB" free="212.5 GiB" free_swap="0 B"
time=2024-07-21T07:39:07.769Z level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
time=2024-07-21T07:39:07.769Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=57 layers.offload=57 layers.split="" memory.available="[78.7 GiB]" memory.required.full="67.3 GiB" memory.required.partial="67.3 GiB" memory.required.kv="1.8 GiB" memory.required.allocations="[67.3 GiB]" memory.weights.total="64.7 GiB" memory.weights.repeating="64.5 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="832.0 MiB" memory.graph.partial="1.1 GiB"
time=2024-07-21T07:39:07.769Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama232821427/runners/cpu/ollama_llama_server
time=2024-07-21T07:39:07.769Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama232821427/runners/cpu_avx/ollama_llama_server
time=2024-07-21T07:39:07.769Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama232821427/runners/cpu_avx2/ollama_llama_server
time=2024-07-21T07:39:07.769Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama232821427/runners/cuda_v11/ollama_llama_server
time=2024-07-21T07:39:07.769Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama232821427/runners/rocm_v60102/ollama_llama_server
time=2024-07-21T07:39:07.770Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama232821427/runners/cpu/ollama_llama_server
time=2024-07-21T07:39:07.770Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama232821427/runners/cpu_avx/ollama_llama_server
time=2024-07-21T07:39:07.770Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama232821427/runners/cpu_avx2/ollama_llama_server
time=2024-07-21T07:39:07.770Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama232821427/runners/cuda_v11/ollama_llama_server
time=2024-07-21T07:39:07.770Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama232821427/runners/rocm_v60102/ollama_llama_server
time=2024-07-21T07:39:07.770Z level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama232821427/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-85bbeb31e9a57b841db2386003d8b057acbe0dce01e1939711cd533ccbc69bca --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 57 --verbose --parallel 4 --port 37297"
time=2024-07-21T07:39:07.770Z level=DEBUG source=server.go:398 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/tmp/ollama232821427/runners/cuda_v11:/tmp/ollama232821427/runners:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 CUDA_VISIBLE_DEVICES=GPU-4101ce7d-41d5-c2ee-2fe9-927eb4440974]"
time=2024-07-21T07:39:07.770Z level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-21T07:39:07.770Z level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-21T07:39:07.770Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="139752639410176" timestamp=1721547547
INFO [main] system info | n_threads=24 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="139752639410176" timestamp=1721547547 total_threads=24
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="23" port="37297" tid="139752639410176" timestamp=1721547547
llama_model_loader: loaded meta data with 28 key-value pairs and 563 tensors from /root/.ollama/models/blobs/sha256-85bbeb31e9a57b841db2386003d8b057acbe0dce01e1939711cd533ccbc69bca (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Mixtral-8x22B-Instruct-v0.1
llama_model_loader: - kv 2: llama.block_count u32 = 56
llama_model_loader: - kv 3: llama.context_length u32 = 65536
llama_model_loader: - kv 4: llama.embedding_length u32 = 6144
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 16384
llama_model_loader: - kv 6: llama.attention.head_count u32 = 48
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.expert_count u32 = 8
llama_model_loader: - kv 11: llama.expert_used_count u32 = 2
llama_model_loader: - kv 12: general.file_type u32 = 12
llama_model_loader: - kv 13: llama.vocab_size u32 = 32768
llama_model_loader: - kv 14: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 15: tokenizer.ggml.model str = llama
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,32768] = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,32768] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,32768] = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template.tool_use str = {{bos_token}}{% set user_messages = m...
llama_model_loader: - kv 25: tokenizer.chat_templates arr[str,1] = ["tool_use"]
llama_model_loader: - kv 26: tokenizer.chat_template str = {{bos_token}}{% for message in messag...
llama_model_loader: - kv 27: general.quantization_version u32 = 2
llama_model_loader: - type f32: 113 tensors
llama_model_loader: - type f16: 56 tensors
llama_model_loader: - type q8_0: 112 tensors
llama_model_loader: - type q3_K: 169 tensors
llama_model_loader: - type q4_K: 53 tensors
llama_model_loader: - type q5_K: 59 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens cache size = 259
llm_load_vocab: token to piece cache size = 0.1732 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32768
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 65536
llm_load_print_meta: n_embd = 6144
llm_load_print_meta: n_layer = 56
llm_load_print_meta: n_head = 48
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 6
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 16384
llm_load_print_meta: n_expert = 8
llm_load_print_meta: n_expert_used = 2
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 65536
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8x22B
llm_load_print_meta: model ftype = Q3_K - Medium
llm_load_print_meta: model params = 140.63 B
llm_load_print_meta: model size = 63.14 GiB (3.86 BPW)
llm_load_print_meta: general.name = Mixtral-8x22B-Instruct-v0.1
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 781 '<0x0A>'
llm_load_print_meta: max token length = 48
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
time=2024-07-21T07:39:08.021Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: ggml ctx size = 0.51 MiB
```
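Since the load stalls right after `llm_load_tensors`, sequential read throughput from the Azurefile (NFS) mount is worth measuring first. A minimal sketch, assuming you substitute a real blob path; a 63 GiB model at, say, 100 MiB/s needs over ten minutes of I/O alone, which matches "not loaded after 6 minutes":
```
package main

import (
	"fmt"
	"io"
	"os"
	"time"
)

func main() {
	// Illustrative blob path -- substitute one from /root/.ollama/models/blobs.
	f, err := os.Open("/root/.ollama/models/blobs/sha256-...")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	buf := make([]byte, 8<<20) // 8 MiB sequential reads
	start, total := time.Now(), int64(0)
	for {
		n, err := f.Read(buf)
		total += int64(n)
		if err != nil {
			if err == io.EOF {
				break
			}
			panic(err)
		}
	}
	secs := time.Since(start).Seconds()
	fmt.Printf("read %.1f GiB at %.0f MiB/s\n",
		float64(total)/(1<<30), float64(total)/(1<<20)/secs)
}
```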
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
2.6 (docker image 2.7)
|
{
"login": "juangon",
"id": 1306127,
"node_id": "MDQ6VXNlcjEzMDYxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1306127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juangon",
"html_url": "https://github.com/juangon",
"followers_url": "https://api.github.com/users/juangon/followers",
"following_url": "https://api.github.com/users/juangon/following{/other_user}",
"gists_url": "https://api.github.com/users/juangon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juangon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juangon/subscriptions",
"organizations_url": "https://api.github.com/users/juangon/orgs",
"repos_url": "https://api.github.com/users/juangon/repos",
"events_url": "https://api.github.com/users/juangon/events{/privacy}",
"received_events_url": "https://api.github.com/users/juangon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5826/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1382
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1382/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1382/comments
|
https://api.github.com/repos/ollama/ollama/issues/1382/events
|
https://github.com/ollama/ollama/issues/1382
| 2,024,910,714
|
I_kwDOJ0Z1Ps54sa96
| 1,382
|
litellm leaves defunct processes behind
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-04T22:42:10
| 2023-12-08T23:21:41
| 2023-12-06T20:01:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm not sure who's at fault here.
https://github.com/BerriAI/litellm/issues/992
`litellm -m ollama/alfred`
`litellm -m ollama/mistral`
Run an autogen application that uses these models.
The autogen run gets stuck, so you must Ctrl-C out.
The ollama models you started are now defunct.
On a Linux system, running `ps aux | grep ollama` will show things like:
```
ollama 1581 0.0 0.0 4365940 17680 ? Ssl Dec01 0:47 /usr/local/bin/ollama serve
chris 735058 0.1 0.0 2740828 16376 pts/6 Sl+ Dec03 1:46 ollama run starling-lm
chris 1237946 0.4 0.0 2814560 16280 pts/8 Sl+ 12:11 1:25 ollama run orca2:13b
chris 1290228 0.2 0.0 299108 123796 pts/9 Sl 17:14 0:04 /home/chris/anaconda3/envs/autogen/bin/python /home/chris/anaconda3/envs/autogen/bin/litellm -m ollama/alfred --port 9000
chris 1290229 0.2 0.0 371844 123444 pts/9 Sl 17:14 0:04 /home/chris/anaconda3/envs/autogen/bin/python /home/chris/anaconda3/envs/autogen/bin/litellm -m ollama/DeepSeek-Coder --port 9001
chris 1290230 0.2 0.0 224088 123660 pts/9 S 17:14 0:04 /home/chris/anaconda3/envs/autogen/bin/python /home/chris/anaconda3/envs/autogen/bin/litellm -m ollama/starling-lm --port 9002
chris 1290243 0.0 0.0 0 0 pts/9 Z 17:14 0:00 [ollama]
chris 1290244 0.0 0.0 0 0 pts/9 Z 17:14 0:00 [ollama]
chris 1290245 0.0 0.0 0 0 pts/9 Z 17:14 0:00 [ollama]
chris 1290540 0.4 0.1 1501464 155380 pts/12 Sl+ 17:18 0:05 /home/chris/anaconda3/envs/panel/bin/python3.11 /home/chris/anaconda3/envs/panel/bin/panel serve panel_autogenollama.py
ollama 1291438 1592 3.9 5711180 5139892 ? S<l 17:24 246:30 /tmp/ollama236051357/llama.cpp/gguf/build/cpu/bin/ollama-runner --model /usr/share/ollama/.ollama/models/blobs/sha256:92da6238854f2fa902d8b2ad79d548536af1d3ab06821f323bd5bbcea2013276 --ctx-size 2048 --batch-size 512 --n-gpu-layers 110 --embedding --port 54099
chris 1367452 0.0 0.0 9528 2400 pts/13 S+ 17:40 0:00 grep --color=auto ollama
```
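The `Z` state entries are zombies: litellm spawned `ollama run` children and never reaped them after the Ctrl-C. The Go equivalent of the fix is to always `Wait` on started subprocesses; a minimal sketch (the child command is illustrative):
```
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Illustrative child process; the key point is the Wait call.
	cmd := exec.Command("sleep", "5")
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// Without Wait, the exited child stays as a <defunct> zombie until
	// the parent dies; Wait reaps it and collects its exit status.
	if err := cmd.Wait(); err != nil {
		log.Printf("child exited: %v", err)
	}
}
```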
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1382/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5523
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5523/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5523/comments
|
https://api.github.com/repos/ollama/ollama/issues/5523/events
|
https://github.com/ollama/ollama/pull/5523
| 2,393,814,997
|
PR_kwDOJ0Z1Ps50mdwU
| 5,523
|
sched: don't error if paging to disk on Windows and macOS
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-07T01:04:14
| 2024-07-08T16:49:49
| 2024-07-07T02:01:53
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5523",
"html_url": "https://github.com/ollama/ollama/pull/5523",
"diff_url": "https://github.com/ollama/ollama/pull/5523.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5523.patch",
"merged_at": "2024-07-07T02:01:53"
}
|
macOS and Windows don't error when paging to disk, so loosen this check for now so we don't return an error to users who could still run the model (albeit a little slowly). It also stops us from double counting memory on Apple Silicon Macs.
In the future, we should still select an upper limit on memory for macOS and Windows to avoid timeouts, etc. This PR is meant to unblock 0.1.49 and doesn't include that yet.
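A hedged sketch of the shape of this change (the helper name and values are illustrative, not the actual scheduler code): an oversized model only becomes a hard error on platforms where paging fails.
```
package main

import (
	"fmt"
	"runtime"
)

// fitsOrPages is an illustrative stand-in for the scheduler's check.
// On macOS and Windows the OS pages gracefully, so an oversized model
// is allowed (slow, not fatal); elsewhere it stays a hard error.
func fitsOrPages(required, available uint64) error {
	if required <= available {
		return nil
	}
	if runtime.GOOS == "darwin" || runtime.GOOS == "windows" {
		fmt.Println("warning: model exceeds memory; relying on OS paging")
		return nil
	}
	return fmt.Errorf("model requires %d bytes but only %d available", required, available)
}

func main() {
	// Oversized: warns and returns nil on macOS/Windows, errors elsewhere.
	fmt.Println(fitsOrPages(70<<30, 64<<30))
}
```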
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5523/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8617
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8617/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8617/comments
|
https://api.github.com/repos/ollama/ollama/issues/8617/events
|
https://github.com/ollama/ollama/issues/8617
| 2,814,009,755
|
I_kwDOJ0Z1Ps6numGb
| 8,617
|
Support Request for jonatasgrosman/wav2vec2-large-xlsr-53-italian
|
{
"login": "raphael10-collab",
"id": 70313067,
"node_id": "MDQ6VXNlcjcwMzEzMDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/70313067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raphael10-collab",
"html_url": "https://github.com/raphael10-collab",
"followers_url": "https://api.github.com/users/raphael10-collab/followers",
"following_url": "https://api.github.com/users/raphael10-collab/following{/other_user}",
"gists_url": "https://api.github.com/users/raphael10-collab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raphael10-collab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raphael10-collab/subscriptions",
"organizations_url": "https://api.github.com/users/raphael10-collab/orgs",
"repos_url": "https://api.github.com/users/raphael10-collab/repos",
"events_url": "https://api.github.com/users/raphael10-collab/events{/privacy}",
"received_events_url": "https://api.github.com/users/raphael10-collab/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 3
| 2025-01-27T20:37:55
| 2025-01-27T20:44:21
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
(.venv) raphy@raohy:~/llama.cpp$ git clone https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian
Cloning into 'wav2vec2-large-xlsr-53-italian'...
remote: Enumerating objects: 99, done.
remote: Total 99 (delta 0), reused 0 (delta 0), pack-reused 99 (from 1)
Unpacking objects: 100% (99/99), 545.41 KiB | 1.55 MiB/s, done.
Filtering content: 100% (2/2), 2.35 GiB | 92.80 MiB/s, done.
(.venv) raphy@raohy:~/llama.cpp$ ollama create Modelfile
transferring model data
unpacking model metadata
Error: Models based on 'Wav2Vec2ForCTC' are not yet supported
```
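For context, that error is the converter refusing an architecture it does not recognize from the model's config. A hedged sketch of that kind of gate (the map contents are illustrative, not Ollama's actual support list):
```
package main

import "fmt"

// supported is an illustrative subset; the real converter keys off the
// "architectures" field in the Hugging Face config.json.
var supported = map[string]bool{
	"LlamaForCausalLM":   true,
	"GemmaForCausalLM":   true,
	"MistralForCausalLM": true,
}

func checkArch(arch string) error {
	if !supported[arch] {
		return fmt.Errorf("Models based on '%s' are not yet supported", arch)
	}
	return nil
}

func main() {
	fmt.Println(checkArch("Wav2Vec2ForCTC")) // mirrors the reported error
}
```
Wav2Vec2 is a speech (CTC) model rather than a causal LLM, which is why it would need converter and runtime support beyond a new map entry.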
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8617/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6106
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6106/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6106/comments
|
https://api.github.com/repos/ollama/ollama/issues/6106/events
|
https://github.com/ollama/ollama/pull/6106
| 2,441,007,324
|
PR_kwDOJ0Z1Ps53CzDK
| 6,106
|
patches: phi3 optional sliding window attention
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-31T21:48:27
| 2024-07-31T23:47:39
| 2024-07-31T23:12:06
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6106",
"html_url": "https://github.com/ollama/ollama/pull/6106",
"diff_url": "https://github.com/ollama/ollama/pull/6106.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6106.patch",
"merged_at": "2024-07-31T23:12:06"
}
|
This change allows models that do not set `phi3.attention.sliding_window` to revert to the previous behaviour instead of segfaulting.
resolves #5956
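For illustration, the optional-key-with-default pattern this patch relies on can be sketched as follows (the `kv` type and accessor are illustrative stand-ins, not llama.cpp's real API):
```
package main

import "fmt"

// kv is an illustrative stand-in for a parsed GGUF key-value table.
type kv map[string]any

// uint32Or returns the key's value if set, else the given default --
// the pattern that avoids crashing on models missing the key.
func (m kv) uint32Or(key string, def uint32) uint32 {
	if v, ok := m[key].(uint32); ok {
		return v
	}
	return def
}

func main() {
	meta := kv{} // a phi3 GGUF without the sliding-window key
	win := meta.uint32Or("phi3.attention.sliding_window", 0)
	if win == 0 {
		fmt.Println("no sliding window: using the previous full-attention path")
	} else {
		fmt.Println("sliding window:", win)
	}
}
```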
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6106/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6106/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1972
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1972/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1972/comments
|
https://api.github.com/repos/ollama/ollama/issues/1972/events
|
https://github.com/ollama/ollama/pull/1972
| 2,080,175,593
|
PR_kwDOJ0Z1Ps5j_xQE
| 1,972
|
use g++ to build `libext_server.so` on linux
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-13T08:12:36
| 2024-01-13T15:55:10
| 2024-01-13T08:12:43
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1972",
"html_url": "https://github.com/ollama/ollama/pull/1972",
"diff_url": "https://github.com/ollama/ollama/pull/1972.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1972.patch",
"merged_at": "2024-01-13T08:12:42"
}
|
Fixes the build error:
```
Error: Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama3730278603/cpu/libext_server.so: undefined symbol: _ZTVN10cxxabiv117class
```
cc @dhiltgen
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1972/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5239
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5239/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5239/comments
|
https://api.github.com/repos/ollama/ollama/issues/5239/events
|
https://github.com/ollama/ollama/issues/5239
| 2,368,601,525
|
I_kwDOJ0Z1Ps6NLf21
| 5,239
|
Mutli-GPU asymmetric VRAM with smaller first causes ordering bug and incorrect tensor split - cudaMalloc failed: out of memory
|
{
"login": "chrisoutwright",
"id": 27736055,
"node_id": "MDQ6VXNlcjI3NzM2MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/27736055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisoutwright",
"html_url": "https://github.com/chrisoutwright",
"followers_url": "https://api.github.com/users/chrisoutwright/followers",
"following_url": "https://api.github.com/users/chrisoutwright/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisoutwright/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisoutwright/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisoutwright/subscriptions",
"organizations_url": "https://api.github.com/users/chrisoutwright/orgs",
"repos_url": "https://api.github.com/users/chrisoutwright/repos",
"events_url": "https://api.github.com/users/chrisoutwright/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisoutwright/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-06-23T14:55:56
| 2024-11-05T23:16:40
| 2024-11-05T23:16:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After going from 0.1.43 to 0.1.45 I get out of memory. I also tried
`Set-ItemProperty -Path 'HKCU:\Environment' -Name 'OLLAMA_SCHED_SPREAD' -Value 1`
and
`Set-ItemProperty -Path 'HKCU:\Environment' -Name 'CUDA_VISIBLE_DEVICES' -Value "0,1"`
but it is still happening.
```
llm_load_print_meta: model ftype = Q6_K
llm_load_print_meta: model params = 22.25 B
llm_load_print_meta: model size = 17.00 GiB (6.56 BPW)
llm_load_print_meta: general.name = Codestral-22B-v0.1
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 781 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 2080 Ti, compute capability 7.5, VMM: yes
Device 1: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.77 MiB
time=2024-06-23T16:45:00.347+02:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 11597.72 MiB on device 0: cudaMalloc failed: out of memory
llama_model_load: error loading model: unable to allocate backend buffer
llama_load_model_from_file: exception loading model
time=2024-06-23T16:45:01.388+02:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2024-06-23T16:45:01.652+02:00 level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 cudaMalloc failed: out of memory"
[GIN] 2024/06/23 - 16:45:01 | 500 | 1.8154377s | ::1 | POST "/api/chat"
```
What could be the issue? I thought GPU splitting would work out of the box now?
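For reference, the log shows an 11597.72 MiB (~11.3 GiB) buffer aimed at device 0, the 11 GB 2080 Ti, so the split does not look proportional to each card's free memory. For illustration only (the layer count and free-VRAM figures below are assumptions, not values from this log), here is what a free-VRAM-proportional split computes:
```
package main

import "fmt"

// splitLayers distributes n layers proportionally to each GPU's free
// VRAM -- an illustrative version of a tensor-split computation.
func splitLayers(n int, freeVRAM []uint64) []int {
	var total uint64
	for _, f := range freeVRAM {
		total += f
	}
	out := make([]int, len(freeVRAM))
	assigned := 0
	for i, f := range freeVRAM {
		out[i] = int(uint64(n) * f / total)
		assigned += out[i]
	}
	out[len(out)-1] += n - assigned // rounding remainder to the last GPU
	return out
}

func main() {
	// ~11 GB free on the 2080 Ti, ~24 GB on the 4090 (assumed values).
	fmt.Println(splitLayers(57, []uint64{11 << 30, 24 << 30}))
	// => [17 40]: the smaller first GPU should get far fewer layers,
	// not a buffer that overflows its 11 GB.
}
```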
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.45
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5239/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/121
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/121/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/121/comments
|
https://api.github.com/repos/ollama/ollama/issues/121/events
|
https://github.com/ollama/ollama/issues/121
| 1,811,380,099
|
I_kwDOJ0Z1Ps5r93eD
| 121
|
Performance question?
|
{
"login": "kosecki123",
"id": 5417665,
"node_id": "MDQ6VXNlcjU0MTc2NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5417665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kosecki123",
"html_url": "https://github.com/kosecki123",
"followers_url": "https://api.github.com/users/kosecki123/followers",
"following_url": "https://api.github.com/users/kosecki123/following{/other_user}",
"gists_url": "https://api.github.com/users/kosecki123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kosecki123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kosecki123/subscriptions",
"organizations_url": "https://api.github.com/users/kosecki123/orgs",
"repos_url": "https://api.github.com/users/kosecki123/repos",
"events_url": "https://api.github.com/users/kosecki123/events{/privacy}",
"received_events_url": "https://api.github.com/users/kosecki123/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-07-19T07:56:17
| 2023-08-23T17:43:32
| 2023-08-23T17:43:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This is just a request for info rather than a bug.
What kind of performance / latency on prompts should we expect running on an M2 Pro? It seems to take up to 10s to generate answers using the `llama2` model. Is that something that can improve in the future?
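One way to put a number on this: the final response from `/api/generate` includes `eval_count` and `eval_duration` fields (durations in nanoseconds), from which tokens per second can be computed. A minimal sketch, assuming a local server on the default port:
```
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Final-response fields from /api/generate (durations are nanoseconds).
type metrics struct {
	EvalCount    int   `json:"eval_count"`
	EvalDuration int64 `json:"eval_duration"`
}

func main() {
	body := []byte(`{"model":"llama2","prompt":"Why is the sky blue?","stream":false}`)
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var m metrics
	if err := json.NewDecoder(resp.Body).Decode(&m); err != nil {
		panic(err)
	}
	secs := float64(m.EvalDuration) / 1e9
	fmt.Printf("%d tokens in %.1fs => %.1f tok/s\n",
		m.EvalCount, secs, float64(m.EvalCount)/secs)
}
```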
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/121/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6148
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6148/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6148/comments
|
https://api.github.com/repos/ollama/ollama/issues/6148/events
|
https://github.com/ollama/ollama/issues/6148
| 2,446,237,133
|
I_kwDOJ0Z1Ps6Rzp3N
| 6,148
|
Model unloaded each request if OLLAMA_NUM_PARALLEL > 1
|
{
"login": "abes200",
"id": 177388421,
"node_id": "U_kgDOCpK7hQ",
"avatar_url": "https://avatars.githubusercontent.com/u/177388421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abes200",
"html_url": "https://github.com/abes200",
"followers_url": "https://api.github.com/users/abes200/followers",
"following_url": "https://api.github.com/users/abes200/following{/other_user}",
"gists_url": "https://api.github.com/users/abes200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abes200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abes200/subscriptions",
"organizations_url": "https://api.github.com/users/abes200/orgs",
"repos_url": "https://api.github.com/users/abes200/repos",
"events_url": "https://api.github.com/users/abes200/events{/privacy}",
"received_events_url": "https://api.github.com/users/abes200/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 29
| 2024-08-03T08:19:14
| 2025-01-03T05:08:47
| 2024-08-19T18:07:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I did see an issue where this was mentioned, but it was closed as fixed in version 0.2.1.
I wasn't having this issue when I was using 0.3.0. I missed a few updates, but after updating to the most recent version, if I have OLLAMA_NUM_PARALLEL in my system variables (or pass it as an option from Python), the model reloads for every request that is sent.
Just to clarify, using the Ollama CLI on Windows:
```
ollama run gemma2     <model is loaded>
send a message        <model is unloaded and reloaded, then responds>
send another message  <model is unloaded and reloaded, then responds>
```
After removing OLLAMA_NUM_PARALLEL from the system variables, the model loads and responds as normal.
Models seem to take a little more memory to load when OLLAMA_NUM_PARALLEL is in my system variables than without it.
However, whether it is set or not, I am no longer able to make parallel requests to models: without it requests are always queued; with it they are still queued, but the models unload and reload before each response.
Have I missed something obvious somewhere? That does happen a lot.
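To verify whether parallel decoding actually happens, you can fire overlapping requests and compare wall-clock times. A minimal sketch against the real `/api/generate` endpoint (the model name matches the repro above):
```
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // should overlap with OLLAMA_NUM_PARALLEL >= 2
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			start := time.Now()
			body := []byte(`{"model":"gemma2","prompt":"Count to twenty.","stream":false}`)
			resp, err := http.Post("http://localhost:11434/api/generate",
				"application/json", bytes.NewReader(body))
			if err != nil {
				fmt.Println(id, err)
				return
			}
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			fmt.Printf("request %d finished in %s\n", id, time.Since(start))
		}(i)
	}
	wg.Wait()
	// Both taking ~1x generation time means true parallelism; ~2x (or
	// reload pauses in the server log) means they were serialized.
}
```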
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.3
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6148/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4399
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4399/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4399/comments
|
https://api.github.com/repos/ollama/ollama/issues/4399/events
|
https://github.com/ollama/ollama/pull/4399
| 2,292,514,059
|
PR_kwDOJ0Z1Ps5vPgvY
| 4,399
|
fix embedding by adding fixes from llama.cpp upstream
|
{
"login": "deadbeef84",
"id": 961178,
"node_id": "MDQ6VXNlcjk2MTE3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/961178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deadbeef84",
"html_url": "https://github.com/deadbeef84",
"followers_url": "https://api.github.com/users/deadbeef84/followers",
"following_url": "https://api.github.com/users/deadbeef84/following{/other_user}",
"gists_url": "https://api.github.com/users/deadbeef84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deadbeef84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deadbeef84/subscriptions",
"organizations_url": "https://api.github.com/users/deadbeef84/orgs",
"repos_url": "https://api.github.com/users/deadbeef84/repos",
"events_url": "https://api.github.com/users/deadbeef84/events{/privacy}",
"received_events_url": "https://api.github.com/users/deadbeef84/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 11
| 2024-05-13T11:13:07
| 2024-06-09T01:14:27
| 2024-06-09T01:14:15
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4399",
"html_url": "https://github.com/ollama/ollama/pull/4399",
"diff_url": "https://github.com/ollama/ollama/pull/4399.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4399.patch",
"merged_at": null
}
|
Embedding appears broken since v0.1.32.
See #3777 and #4207 for details.
This PR applies fixes based on https://github.com/ggerganov/llama.cpp/commit/1b67731e184e27a465b8c5476061294a4af668ea#diff-87355a1a297a9f0fdc86af5e2a59cae153290f58d68822cd10c30fee4f7f7076.
I've tested it, and the embedding vectors look correct after applying this patch.
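As a sanity check of the kind used here, one can embed two related sentences and one unrelated one via the existing `/api/embeddings` endpoint and compare cosine similarities (the model name is illustrative):
```
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"math"
	"net/http"
)

func embed(model, prompt string) []float64 {
	b, _ := json.Marshal(map[string]string{"model": model, "prompt": prompt})
	resp, err := http.Post("http://localhost:11434/api/embeddings",
		"application/json", bytes.NewReader(b))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var out struct {
		Embedding []float64 `json:"embedding"`
	}
	json.NewDecoder(resp.Body).Decode(&out)
	return out.Embedding
}

func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	m := "nomic-embed-text" // illustrative embedding model
	cat, kitten, car := embed(m, "a cat"), embed(m, "a kitten"), embed(m, "a car")
	fmt.Println("cat~kitten:", cosine(cat, kitten)) // should be clearly higher
	fmt.Println("cat~car:   ", cosine(cat, car))    // than this, if fixed
}
```
Broken embeddings typically show near-identical similarity for related and unrelated pairs.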
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4399/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4399/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8240
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8240/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8240/comments
|
https://api.github.com/repos/ollama/ollama/issues/8240/events
|
https://github.com/ollama/ollama/issues/8240
| 2,758,744,333
|
I_kwDOJ0Z1Ps6kbxkN
| 8,240
|
Realtime API
|
{
"login": "GitOguz",
"id": 23114578,
"node_id": "MDQ6VXNlcjIzMTE0NTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/23114578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GitOguz",
"html_url": "https://github.com/GitOguz",
"followers_url": "https://api.github.com/users/GitOguz/followers",
"following_url": "https://api.github.com/users/GitOguz/following{/other_user}",
"gists_url": "https://api.github.com/users/GitOguz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GitOguz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GitOguz/subscriptions",
"organizations_url": "https://api.github.com/users/GitOguz/orgs",
"repos_url": "https://api.github.com/users/GitOguz/repos",
"events_url": "https://api.github.com/users/GitOguz/events{/privacy}",
"received_events_url": "https://api.github.com/users/GitOguz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-25T11:24:08
| 2024-12-29T18:37:45
| 2024-12-29T18:37:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please add realtime API capabilities (WebSocket/WebRTC).
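In the meantime, a realtime facade can be layered over the existing streaming HTTP API. A hedged sketch of a minimal WebSocket bridge using the third-party `github.com/gorilla/websocket` package; this is an external add-on idea, not a proposed Ollama API:
```
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

// bridge forwards each WebSocket text message to /api/generate as a
// prompt and relays Ollama's streamed JSON lines back over the socket.
func bridge(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return
	}
	defer conn.Close()
	for {
		_, prompt, err := conn.ReadMessage()
		if err != nil {
			return
		}
		body := fmt.Sprintf(`{"model":"llama3","prompt":%q,"stream":true}`, prompt)
		resp, err := http.Post("http://localhost:11434/api/generate",
			"application/json", bytes.NewBufferString(body))
		if err != nil {
			return
		}
		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() { // one JSON chunk per streamed token batch
			conn.WriteMessage(websocket.TextMessage, sc.Bytes())
		}
		resp.Body.Close()
	}
}

func main() {
	http.HandleFunc("/ws", bridge)
	http.ListenAndServe(":8080", nil)
}
```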
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8240/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2256
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2256/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2256/comments
|
https://api.github.com/repos/ollama/ollama/issues/2256/events
|
https://github.com/ollama/ollama/pull/2256
| 2,105,955,023
|
PR_kwDOJ0Z1Ps5lWb8C
| 2,256
|
Add container hints for troubleshooting
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-29T16:53:54
| 2024-01-30T16:12:52
| 2024-01-30T16:12:48
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2256",
"html_url": "https://github.com/ollama/ollama/pull/2256",
"diff_url": "https://github.com/ollama/ollama/pull/2256.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2256.patch",
"merged_at": "2024-01-30T16:12:48"
}
|
Some users are new to containers and are unsure where the server logs go.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2256/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4521
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4521/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4521/comments
|
https://api.github.com/repos/ollama/ollama/issues/4521/events
|
https://github.com/ollama/ollama/pull/4521
| 2,304,687,654
|
PR_kwDOJ0Z1Ps5v5B6N
| 4,521
|
implement tunable registry defaults for registry and update mirrors
|
{
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-19T16:32:32
| 2024-08-09T20:07:31
| 2024-08-09T20:07:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4521",
"html_url": "https://github.com/ollama/ollama/pull/4521",
"diff_url": "https://github.com/ollama/ollama/pull/4521.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4521.patch",
"merged_at": null
}
|
# What is the problem this change solves?
In large environments where many cloud instances are running `ollama serve`, accidentally pushing code that runs `ollama pull llama3` can result in hundreds of cloud instances trying to download from `ollama.ai` at once.
The correct command for production would have been `ollama pull https://registry.prod.someside.tld/library/llama3`. The registry mirror at `registry.prod.someside.tld` is needed to keep bandwidth costs down for high-volume data such as AI models or container images.
Mistakes like this can go unnoticed by novices building scalable infrastructure for their developers, until the resulting bill arrives.
Registry owners also often have to implement rate limiting to keep bandwidth costs down, and hitting a rate limit in a production environment often results in an outage, which further makes convenient mirroring options desirable.
# What are the changes being made?
- Created a new package called `defaults` to hold tunable values.
- Moved variables related to endpoints into a single package, `github.com/ollama/ollama/types/defaults`.
- Exposed control to admins via environment variables (a sketch of the pattern is at the end of this description).
# Are there any tasks remaining?
I need some guidance on how testing should work for these changes.
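As a rough sketch of the tunable-default pattern described above (the `OLLAMA_REGISTRY` variable name and the fallback URL are illustrative assumptions, not necessarily this PR's actual identifiers):

```go
// Package defaults holds tunable endpoint values; this is a sketch of
// the pattern, not the PR's actual code.
package defaults

import "os"

// envOr returns the value of the named environment variable, falling
// back to def when it is unset or empty.
func envOr(name, def string) string {
	if v := os.Getenv(name); v != "" {
		return v
	}
	return def
}

// Registry is the base URL models are pulled from. Admins can point it
// at an internal mirror, e.g. OLLAMA_REGISTRY=https://registry.prod.someside.tld.
var Registry = envOr("OLLAMA_REGISTRY", "https://registry.ollama.ai")
```

Reading the override once at package initialization keeps call sites unchanged while giving operators a single knob per environment.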
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4521/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4521/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2736
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2736/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2736/comments
|
https://api.github.com/repos/ollama/ollama/issues/2736/events
|
https://github.com/ollama/ollama/issues/2736
| 2,152,498,068
|
I_kwDOJ0Z1Ps6ATIOU
| 2,736
|
Windows version "/api/generate" 404 not found
|
{
"login": "t41372",
"id": 36402030,
"node_id": "MDQ6VXNlcjM2NDAyMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/36402030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t41372",
"html_url": "https://github.com/t41372",
"followers_url": "https://api.github.com/users/t41372/followers",
"following_url": "https://api.github.com/users/t41372/following{/other_user}",
"gists_url": "https://api.github.com/users/t41372/gists{/gist_id}",
"starred_url": "https://api.github.com/users/t41372/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/t41372/subscriptions",
"organizations_url": "https://api.github.com/users/t41372/orgs",
"repos_url": "https://api.github.com/users/t41372/repos",
"events_url": "https://api.github.com/users/t41372/events{/privacy}",
"received_events_url": "https://api.github.com/users/t41372/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 33
| 2024-02-24T21:59:52
| 2025-01-08T12:47:53
| 2024-03-12T04:34:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
<img width="1310" alt="截圖 2024-02-24 下午2 48 29" src="https://github.com/ollama/ollama/assets/36402030/8d1aac17-75f5-4a5c-8f27-a6569db7256c">
<img width="431" alt="截圖 2024-02-24 下午2 54 17" src="https://github.com/ollama/ollama/assets/36402030/99030d6f-9393-4eb5-b617-e04c369fdefe">
The "/api/generate" is not functioning and display 404 on the Windows version (not WSL), despite the Ollama server running and "/" being accessible. The same code works on the Ollama server on my Mac, so I guess the issue is not with my code.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2736/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2736/timeline
| null |
completed
| false
|