Dataset columns:
- url: string (length 51–54)
- repository_url: string (1 class, 1 value)
- labels_url: string (length 65–68)
- comments_url: string (length 60–63)
- events_url: string (length 58–61)
- html_url: string (length 39–44)
- id: int64 (1.78B–2.82B)
- node_id: string (length 18–19)
- number: int64 (1–8.69k)
- title: string (length 1–382)
- user: dict
- labels: list (length 0–5)
- state: string (2 classes)
- locked: bool (1 class)
- assignee: dict
- assignees: list (length 0–2)
- milestone: null
- comments: int64 (0–323)
- created_at: timestamp[s]
- updated_at: timestamp[s]
- closed_at: timestamp[s]
- author_association: string (4 classes)
- sub_issues_summary: dict
- active_lock_reason: null
- draft: bool (2 classes)
- pull_request: dict
- body: string (length 2–118k, nullable)
- closed_by: dict
- reactions: dict
- timeline_url: string (length 60–63)
- performed_via_github_app: null
- state_reason: string (4 classes)
- is_pull_request: bool (2 classes)
https://api.github.com/repos/ollama/ollama/issues/8382
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8382/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8382/comments
|
https://api.github.com/repos/ollama/ollama/issues/8382/events
|
https://github.com/ollama/ollama/issues/8382
| 2,781,691,733
|
I_kwDOJ0Z1Ps6lzT9V
| 8,382
|
Error: llama runner process has terminated: exit status 2
|
{
"login": "idkwhodatis",
"id": 33296184,
"node_id": "MDQ6VXNlcjMzMjk2MTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/33296184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idkwhodatis",
"html_url": "https://github.com/idkwhodatis",
"followers_url": "https://api.github.com/users/idkwhodatis/followers",
"following_url": "https://api.github.com/users/idkwhodatis/following{/other_user}",
"gists_url": "https://api.github.com/users/idkwhodatis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idkwhodatis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idkwhodatis/subscriptions",
"organizations_url": "https://api.github.com/users/idkwhodatis/orgs",
"repos_url": "https://api.github.com/users/idkwhodatis/repos",
"events_url": "https://api.github.com/users/idkwhodatis/events{/privacy}",
"received_events_url": "https://api.github.com/users/idkwhodatis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-11T08:04:51
| 2025-01-11T18:10:17
| 2025-01-11T18:10:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Got this error when running `qwq:32b-preview-q4_K_M`. My GPU is a 7900 XTX with 24 GB of memory.
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.4
|
{
"login": "idkwhodatis",
"id": 33296184,
"node_id": "MDQ6VXNlcjMzMjk2MTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/33296184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idkwhodatis",
"html_url": "https://github.com/idkwhodatis",
"followers_url": "https://api.github.com/users/idkwhodatis/followers",
"following_url": "https://api.github.com/users/idkwhodatis/following{/other_user}",
"gists_url": "https://api.github.com/users/idkwhodatis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idkwhodatis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idkwhodatis/subscriptions",
"organizations_url": "https://api.github.com/users/idkwhodatis/orgs",
"repos_url": "https://api.github.com/users/idkwhodatis/repos",
"events_url": "https://api.github.com/users/idkwhodatis/events{/privacy}",
"received_events_url": "https://api.github.com/users/idkwhodatis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8382/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4881
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4881/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4881/comments
|
https://api.github.com/repos/ollama/ollama/issues/4881/events
|
https://github.com/ollama/ollama/pull/4881
| 2,339,205,608
|
PR_kwDOJ0Z1Ps5xu5BL
| 4,881
|
Extend api/show and ollama show to return more model info
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-06-06T21:48:30
| 2024-06-19T21:19:03
| 2024-06-19T21:19:02
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4881",
"html_url": "https://github.com/ollama/ollama/pull/4881",
"diff_url": "https://github.com/ollama/ollama/pull/4881.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4881.patch",
"merged_at": "2024-06-19T21:19:02"
}
|
Building off of #3899
Resolves #3570, #2732, #3899
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4881/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5299
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5299/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5299/comments
|
https://api.github.com/repos/ollama/ollama/issues/5299/events
|
https://github.com/ollama/ollama/pull/5299
| 2,375,430,023
|
PR_kwDOJ0Z1Ps5zo8Qw
| 5,299
|
Dev doc
|
{
"login": "aibabelx",
"id": 16663208,
"node_id": "MDQ6VXNlcjE2NjYzMjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/16663208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aibabelx",
"html_url": "https://github.com/aibabelx",
"followers_url": "https://api.github.com/users/aibabelx/followers",
"following_url": "https://api.github.com/users/aibabelx/following{/other_user}",
"gists_url": "https://api.github.com/users/aibabelx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aibabelx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aibabelx/subscriptions",
"organizations_url": "https://api.github.com/users/aibabelx/orgs",
"repos_url": "https://api.github.com/users/aibabelx/repos",
"events_url": "https://api.github.com/users/aibabelx/events{/privacy}",
"received_events_url": "https://api.github.com/users/aibabelx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-26T13:53:53
| 2024-06-26T14:03:29
| 2024-06-26T14:01:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5299",
"html_url": "https://github.com/ollama/ollama/pull/5299",
"diff_url": "https://github.com/ollama/ollama/pull/5299.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5299.patch",
"merged_at": null
}
|
Add Chinese documentation
|
{
"login": "aibabelx",
"id": 16663208,
"node_id": "MDQ6VXNlcjE2NjYzMjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/16663208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aibabelx",
"html_url": "https://github.com/aibabelx",
"followers_url": "https://api.github.com/users/aibabelx/followers",
"following_url": "https://api.github.com/users/aibabelx/following{/other_user}",
"gists_url": "https://api.github.com/users/aibabelx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aibabelx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aibabelx/subscriptions",
"organizations_url": "https://api.github.com/users/aibabelx/orgs",
"repos_url": "https://api.github.com/users/aibabelx/repos",
"events_url": "https://api.github.com/users/aibabelx/events{/privacy}",
"received_events_url": "https://api.github.com/users/aibabelx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5299/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4502
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4502/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4502/comments
|
https://api.github.com/repos/ollama/ollama/issues/4502/events
|
https://github.com/ollama/ollama/pull/4502
| 2,303,356,252
|
PR_kwDOJ0Z1Ps5v00FE
| 4,502
|
fix quantize file types
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-17T18:31:40
| 2024-05-20T23:09:28
| 2024-05-20T23:09:27
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4502",
"html_url": "https://github.com/ollama/ollama/pull/4502",
"diff_url": "https://github.com/ollama/ollama/pull/4502.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4502.patch",
"merged_at": "2024-05-20T23:09:27"
}
|
This fixes the reported file type (quantization). Previously this would report f16 or f32 based on the input file even though the model had gone through quantization.
This change also incorporates some suggestions made by @pdevine in #4330.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4502/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3908
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3908/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3908/comments
|
https://api.github.com/repos/ollama/ollama/issues/3908/events
|
https://github.com/ollama/ollama/issues/3908
| 2,263,482,977
|
I_kwDOJ0Z1Ps6G6gJh
| 3,908
|
Issue in running any model
|
{
"login": "jenil0108",
"id": 64329492,
"node_id": "MDQ6VXNlcjY0MzI5NDky",
"avatar_url": "https://avatars.githubusercontent.com/u/64329492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jenil0108",
"html_url": "https://github.com/jenil0108",
"followers_url": "https://api.github.com/users/jenil0108/followers",
"following_url": "https://api.github.com/users/jenil0108/following{/other_user}",
"gists_url": "https://api.github.com/users/jenil0108/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jenil0108/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jenil0108/subscriptions",
"organizations_url": "https://api.github.com/users/jenil0108/orgs",
"repos_url": "https://api.github.com/users/jenil0108/repos",
"events_url": "https://api.github.com/users/jenil0108/events{/privacy}",
"received_events_url": "https://api.github.com/users/jenil0108/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-04-25T12:38:12
| 2024-08-02T08:44:25
| 2024-05-06T22:59:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Previously I was able to run llama2 normally, but ever since llama3 came out I have not been able to run llama2 or llama3. On running `ollama run llama2` I get the following error:
`Error: error starting the external llama server: exec: "ollama_llama_server": executable file not found in $PATH`
I looked through some previous issues, which suggested reinstalling Ollama or restarting it via `brew services restart ollama`.
Both give the same error.
I also tried running `ollama serve`, which gives the following error:
`Error: listen tcp 127.0.0.1:11434: bind: address already in use.`
I had found the same issue on another bug report that was closed as solved by a merge, but I am still seeing it.
One suggestion I had seen was setting OLLAMA_HOST manually with `export OLLAMA_HOST = localhost: 8.8.8.8`, after which I could run `ollama serve`.
But when I then run `ollama run llama2` in a separate window, I get the following error:
```
pulling manifest
Error: pull model manifest: file does not exist
```
Yet `ollama list` clearly shows both models.
I also tried pulling the models again, but still no progress.
I am using a MacBook M3 Pro, and the CLI I am using is iTerm2.
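As an editorial aside: a shell `export` must not have spaces around `=` or after the colon, or the assignment will not do what the command above suggests. A minimal sketch of the expected `HOST:PORT` shape (the address and port here are arbitrary examples, not recommendations):

```shell
# OLLAMA_HOST takes a HOST:PORT value with no spaces.
# 127.0.0.1:11435 is an arbitrary example value for illustration only.
export OLLAMA_HOST=127.0.0.1:11435
echo "$OLLAMA_HOST"
```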
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3908/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3908/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/786
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/786/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/786/comments
|
https://api.github.com/repos/ollama/ollama/issues/786/events
|
https://github.com/ollama/ollama/issues/786
| 1,942,717,779
|
I_kwDOJ0Z1Ps5zy4VT
| 786
|
Image generation models
|
{
"login": "SabareeshGC",
"id": 114115146,
"node_id": "U_kgDOBs1CSg",
"avatar_url": "https://avatars.githubusercontent.com/u/114115146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SabareeshGC",
"html_url": "https://github.com/SabareeshGC",
"followers_url": "https://api.github.com/users/SabareeshGC/followers",
"following_url": "https://api.github.com/users/SabareeshGC/following{/other_user}",
"gists_url": "https://api.github.com/users/SabareeshGC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SabareeshGC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SabareeshGC/subscriptions",
"organizations_url": "https://api.github.com/users/SabareeshGC/orgs",
"repos_url": "https://api.github.com/users/SabareeshGC/repos",
"events_url": "https://api.github.com/users/SabareeshGC/events{/privacy}",
"received_events_url": "https://api.github.com/users/SabareeshGC/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 18
| 2023-10-13T22:20:35
| 2024-11-25T23:42:27
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be great if support could be extended to text-to-image models.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/786/reactions",
"total_count": 98,
"+1": 88,
"-1": 1,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 9
}
|
https://api.github.com/repos/ollama/ollama/issues/786/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2201
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2201/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2201/comments
|
https://api.github.com/repos/ollama/ollama/issues/2201/events
|
https://github.com/ollama/ollama/issues/2201
| 2,101,730,156
|
I_kwDOJ0Z1Ps59Rdts
| 2,201
|
Can Ollama run more than one instance on Ubuntu
|
{
"login": "myrainbowandsky",
"id": 35071732,
"node_id": "MDQ6VXNlcjM1MDcxNzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/35071732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/myrainbowandsky",
"html_url": "https://github.com/myrainbowandsky",
"followers_url": "https://api.github.com/users/myrainbowandsky/followers",
"following_url": "https://api.github.com/users/myrainbowandsky/following{/other_user}",
"gists_url": "https://api.github.com/users/myrainbowandsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/myrainbowandsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/myrainbowandsky/subscriptions",
"organizations_url": "https://api.github.com/users/myrainbowandsky/orgs",
"repos_url": "https://api.github.com/users/myrainbowandsky/repos",
"events_url": "https://api.github.com/users/myrainbowandsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/myrainbowandsky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-01-26T07:35:28
| 2024-01-26T23:51:02
| 2024-01-26T23:51:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ubuntu is a multi-user operating system.
But I found that if somebody (not necessarily a sudo user) is using Ollama, the other users cannot use it. How can this be dealt with?
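One possible workaround (a hypothetical sketch, not an official recommendation): run an additional server instance bound to a different port via a systemd drop-in, so a second user is not blocked by the default instance. The unit name, drop-in path, and port below are assumptions for illustration:

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/ollama-second.service.d/override.conf
[Service]
Environment=OLLAMA_HOST=127.0.0.1:11435
```

Clients of that instance would then need the same `OLLAMA_HOST` value in their environment.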
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2201/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1926
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1926/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1926/comments
|
https://api.github.com/repos/ollama/ollama/issues/1926/events
|
https://github.com/ollama/ollama/issues/1926
| 2,077,070,591
|
I_kwDOJ0Z1Ps57zZT_
| 1,926
|
armv7 support
|
{
"login": "mauryaarun",
"id": 10696598,
"node_id": "MDQ6VXNlcjEwNjk2NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/10696598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mauryaarun",
"html_url": "https://github.com/mauryaarun",
"followers_url": "https://api.github.com/users/mauryaarun/followers",
"following_url": "https://api.github.com/users/mauryaarun/following{/other_user}",
"gists_url": "https://api.github.com/users/mauryaarun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mauryaarun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mauryaarun/subscriptions",
"organizations_url": "https://api.github.com/users/mauryaarun/orgs",
"repos_url": "https://api.github.com/users/mauryaarun/repos",
"events_url": "https://api.github.com/users/mauryaarun/events{/privacy}",
"received_events_url": "https://api.github.com/users/mauryaarun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-01-11T16:14:40
| 2024-11-20T00:36:36
| 2024-11-20T00:36:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am unable to compile Ollama on an armv7 CPU Android TV using Termux, although I compiled it successfully on a smartphone using Termux.
I get an error when compiling the file ggml.c in llama.
[error.log](https://github.com/jmorganca/ollama/files/13905716/error.log)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1926/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1926/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3828
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3828/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3828/comments
|
https://api.github.com/repos/ollama/ollama/issues/3828/events
|
https://github.com/ollama/ollama/issues/3828
| 2,256,975,607
|
I_kwDOJ0Z1Ps6Ghrb3
| 3,828
|
Upgrading to v0.1.32 doesn't automatically rename model blobs from `sha256:<BLOB_NAME>` to `sha256-<BLOB_NAME>`
|
{
"login": "swoh816",
"id": 14903586,
"node_id": "MDQ6VXNlcjE0OTAzNTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14903586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swoh816",
"html_url": "https://github.com/swoh816",
"followers_url": "https://api.github.com/users/swoh816/followers",
"following_url": "https://api.github.com/users/swoh816/following{/other_user}",
"gists_url": "https://api.github.com/users/swoh816/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swoh816/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swoh816/subscriptions",
"organizations_url": "https://api.github.com/users/swoh816/orgs",
"repos_url": "https://api.github.com/users/swoh816/repos",
"events_url": "https://api.github.com/users/swoh816/events{/privacy}",
"received_events_url": "https://api.github.com/users/swoh816/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-04-22T16:27:43
| 2024-04-22T22:57:54
| 2024-04-22T20:50:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I recently upgraded Ollama to v0.1.32, and I could not launch it with `sudo systemctl start ollama.service`; it failed with `ollama ExecStart=/usr/local/bin/ollama serve (code=exited, status=1/FAILURE)`. I found that the reason is that **the blob names were not changed from `sha256:<BLOB_HASH>` to `sha256-<BLOB_HASH>`**, which is necessary for upgrading to v0.1.32 on Linux, as described in https://github.com/ollama/ollama/issues/2032.
I checked the log and found that there was a permission issue when changing the blob names:
```
sudo[2809771]: swoh816 : TTY=pts/146 ; PWD=/home/swoh816 ; USER=root ; COMMAND=/usr/bin/install -o0 -g0 -m755 /tmp/tmp.zdpcq2BjgA/ollama /usr/local/bin/ollama
sudo[2809776]: swoh816 : TTY=pts/146 ; PWD=/home/swoh816 ; USER=root ; COMMAND=/usr/sbin/usermod -a -G render ollama
sudo[2809783]: swoh816 : TTY=pts/146 ; PWD=/home/swoh816 ; USER=root ; COMMAND=/usr/sbin/usermod -a -G video ollama
sudo[2809790]: swoh816 : TTY=pts/146 ; PWD=/home/swoh816 ; USER=root ; COMMAND=/usr/sbin/usermod -a -G ollama swoh816
sudo[2809797]: swoh816 : TTY=pts/146 ; PWD=/home/swoh816 ; USER=root ; COMMAND=/usr/bin/tee /etc/systemd/system/ollama.service
sudo[2809858]: swoh816 : TTY=pts/146 ; PWD=/home/swoh816 ; USER=root ; COMMAND=/usr/bin/systemctl enable ollama
sudo[2809885]: swoh816 : TTY=pts/146 ; PWD=/home/swoh816 ; USER=root ; COMMAND=/usr/bin/systemctl restart ollama
ollama[2809890]: Error: rename /usr/share/ollama/.ollama/models/blobs/sha256:0577f52a4edfd5e48bb59c296bb4f40328161ecc3d0aa4398b3cb6b2b7367cac /usr/share/ollama/.ollama/models/blobs/sha256-0577f52a4edfd5e48bb59c296bb4f40328161ecc3d0aa4398b3cb6b2b7367cac: permission denied
systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: ollama.service: Failed with result 'exit-code'.
systemd[1]: ollama.service: Scheduled restart job, restart counter is at 1.
```
Given that the error log prints right after restarting Ollama (i.e., `systemctl restart ollama`), I suspect the permission error occurs on restart. Presumably `ollama serve` (which `systemctl restart ollama` triggers) is supposed to rename all `sha256:<BLOB_HASH>` blobs to `sha256-<BLOB_HASH>`, but it fails to do so because of the permission issue?
I tried to figure it out by looking into https://github.com/ollama/ollama/blob/62be2050dd83197864d771fe6891fc47486ee6a1/scripts/install.sh#L83 but couldn't find much of a clue :p Hopefully this issue helps with debugging!
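For anyone stuck in the same state, a manual workaround sketch (assuming the default Linux install paths and that the service runs as the `ollama` user; stop the service first):

```shell
# rename_blobs: rewrite sha256:<hash> blob names to sha256-<hash>,
# the layout v0.1.32 expects. Pass the blobs directory as $1.
rename_blobs() {
  for blob in "$1"/sha256:*; do
    [ -e "$blob" ] || continue              # nothing to rename
    hash=${blob##*sha256:}                  # strip path + "sha256:" prefix
    mv "$blob" "$1/sha256-$hash"
  done
}

# On a default Linux install (service stopped, run as root):
#   rename_blobs /usr/share/ollama/.ollama/models/blobs
#   chown -R ollama:ollama /usr/share/ollama/.ollama
```

The `chown` at the end matters because the rename fails with `permission denied` when the blobs are still owned by a user other than the one the service runs as.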
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3828/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/261
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/261/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/261/comments
|
https://api.github.com/repos/ollama/ollama/issues/261/events
|
https://github.com/ollama/ollama/issues/261
| 1,833,903,369
|
I_kwDOJ0Z1Ps5tTyUJ
| 261
|
api: make host configurable
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 7
| 2023-08-02T21:05:27
| 2023-08-23T17:52:01
| 2023-08-23T17:52:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'd like to run `ollama serve` somewhere other than where I am running my client. I'd like to propose a `OLLAMA_HOST` environment variable which is picked up by `api.NewClient` if no other hosts are specified in the `hosts` param.
I have a patch locally that I'm happy to convert to a PR if anyone thinks this has legs and the team is open to it.
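A minimal sketch of the proposed fallback logic, assuming the current default of the local server on port 11434 (the `OLLAMA_HOST` variable name is the proposal here, and `resolve_host` is an illustrative stand-in for the client-side lookup):

```shell
# Honour OLLAMA_HOST when set; otherwise fall back to the default
# local server address. Mirrors the proposed api.NewClient behaviour.
resolve_host() {
  printf '%s\n' "${OLLAMA_HOST:-http://127.0.0.1:11434}"
}
```

With this in place, pointing the CLI at a remote server would be as simple as `OLLAMA_HOST=http://myserver:11434 ollama list`.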
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/261/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4942
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4942/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4942/comments
|
https://api.github.com/repos/ollama/ollama/issues/4942/events
|
https://github.com/ollama/ollama/pull/4942
| 2,341,982,392
|
PR_kwDOJ0Z1Ps5x4Jnt
| 4,942
|
Add KV overrides to Modelfile PARAMETER.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-09T02:27:25
| 2024-06-28T12:17:38
| 2024-06-28T12:17:37
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4942",
"html_url": "https://github.com/ollama/ollama/pull/4942",
"diff_url": "https://github.com/ollama/ollama/pull/4942.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4942.patch",
"merged_at": null
}
|
Allow Modelfile PARAMETER entries to be passed through to llama.cpp --override-kv command line arguments.
Fixes https://github.com/ollama/ollama/issues/4904.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4942/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3440
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3440/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3440/comments
|
https://api.github.com/repos/ollama/ollama/issues/3440/events
|
https://github.com/ollama/ollama/issues/3440
| 2,218,503,785
|
I_kwDOJ0Z1Ps6EO65p
| 3,440
|
Whether Windows 7 is supported?
|
{
"login": "PrimeQH",
"id": 38377233,
"node_id": "MDQ6VXNlcjM4Mzc3MjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/38377233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PrimeQH",
"html_url": "https://github.com/PrimeQH",
"followers_url": "https://api.github.com/users/PrimeQH/followers",
"following_url": "https://api.github.com/users/PrimeQH/following{/other_user}",
"gists_url": "https://api.github.com/users/PrimeQH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PrimeQH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PrimeQH/subscriptions",
"organizations_url": "https://api.github.com/users/PrimeQH/orgs",
"repos_url": "https://api.github.com/users/PrimeQH/repos",
"events_url": "https://api.github.com/users/PrimeQH/events{/privacy}",
"received_events_url": "https://api.github.com/users/PrimeQH/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-01T15:29:44
| 2024-04-02T03:03:37
| 2024-04-01T19:45:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3440/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5015
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5015/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5015/comments
|
https://api.github.com/repos/ollama/ollama/issues/5015/events
|
https://github.com/ollama/ollama/issues/5015
| 2,350,414,669
|
I_kwDOJ0Z1Ps6MGHtN
| 5,015
|
Error: llama runner process no longer running: -1 error:check_tensor_dims: tensor 'output.weight' not found
|
{
"login": "isanwenyu",
"id": 5869999,
"node_id": "MDQ6VXNlcjU4Njk5OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5869999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isanwenyu",
"html_url": "https://github.com/isanwenyu",
"followers_url": "https://api.github.com/users/isanwenyu/followers",
"following_url": "https://api.github.com/users/isanwenyu/following{/other_user}",
"gists_url": "https://api.github.com/users/isanwenyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isanwenyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isanwenyu/subscriptions",
"organizations_url": "https://api.github.com/users/isanwenyu/orgs",
"repos_url": "https://api.github.com/users/isanwenyu/repos",
"events_url": "https://api.github.com/users/isanwenyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/isanwenyu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-06-13T07:46:31
| 2024-06-18T20:04:49
| 2024-06-18T20:04:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
`ollama run qwen2` works well, but
`ollama run qwen2:1.5b` fails with:
**Error: llama runner process no longer running: -1 error:check_tensor_dims: tensor 'output.weight' not found**
### OS
macOS
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5015/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6543
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6543/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6543/comments
|
https://api.github.com/repos/ollama/ollama/issues/6543/events
|
https://github.com/ollama/ollama/issues/6543
| 2,492,120,359
|
I_kwDOJ0Z1Ps6Uir0n
| 6,543
|
Failed to start docker without `root` access
|
{
"login": "leobenkel",
"id": 4960573,
"node_id": "MDQ6VXNlcjQ5NjA1NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4960573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leobenkel",
"html_url": "https://github.com/leobenkel",
"followers_url": "https://api.github.com/users/leobenkel/followers",
"following_url": "https://api.github.com/users/leobenkel/following{/other_user}",
"gists_url": "https://api.github.com/users/leobenkel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leobenkel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leobenkel/subscriptions",
"organizations_url": "https://api.github.com/users/leobenkel/orgs",
"repos_url": "https://api.github.com/users/leobenkel/repos",
"events_url": "https://api.github.com/users/leobenkel/events{/privacy}",
"received_events_url": "https://api.github.com/users/leobenkel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-08-28T13:42:18
| 2024-10-24T03:36:49
| 2024-10-24T03:36:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am trying to extend the base ollama image to build a container whose shared storage is not root-owned.
That way I can pull models and have the raw files available in a shared folder.
After messing around for a while, I am still stuck here:
```
Couldn't find '/home/ollama/.ollama/id_ed25519'. Generating new private key.
Error: open /home/ollama/.ollama/id_ed25519: permission denied
```
No matter what I do, the `.ollama` folder is owned by `root`.
The relevant section of my `Dockerfile`:
```
ARG LLM_ENGINE_VERSION
ARG OS_PLATFORM
FROM --platform=${OS_PLATFORM} ollama/ollama:${LLM_ENGINE_VERSION}
# ...
RUN apt-get update && apt-get install -y curl bash
ARG USER_ID
ARG GROUP_ID
ENV OLLAMA_HOST=0.0.0.0
ENV OLLAMA_MAX_LOADED_MODELS=2
ENV OLLAMA_NUM_PARALLEL=3
ENV OLLAMA_NOHISTORY=1
EXPOSE 11434
# ENV OLLAMA_MODELS=/home/ollama/.ollama/models
# ENV OLLAMA_TMPDIR=/home/ollama/.ollama/tmp
ENV USER=ollama
RUN groupadd -r -g $GROUP_ID ollama && useradd --create-home --shell /bin/bash --uid $USER_ID -g ollama ollama
RUN mkdir -p /home/ollama/.ollama && touch /home/ollama/.ollama/.keep
RUN chown -R $USER_ID:$GROUP_ID /home/ollama/ /home/ollama/.ollama
USER $USER_ID:$GROUP_ID
ENTRYPOINT ["/bin/ollama"]
CMD ["serve"]
```
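If the model store is supplied via a bind mount at runtime, the build-time `chown` in the Dockerfile above gets masked, since bind mounts keep the host-side ownership. A sketch of the host-side prep under that assumption; `my-ollama-image`, UID/GID `1000`, and `./ollama-data` are placeholders for the build args used above:

```shell
# Pre-create the host directory with the same UID/GID the container
# runs as, so the bind mount does not arrive root-owned.
mkdir -p ./ollama-data
sudo chown 1000:1000 ./ollama-data        # match USER_ID:GROUP_ID

docker run --rm --user 1000:1000 \
  -v "$PWD/ollama-data:/home/ollama/.ollama" \
  -p 11434:11434 my-ollama-image serve
```

Named volumes behave differently: when freshly created, the daemon initialises them from the image content, including ownership, so the build-time `chown` does take effect there.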
### OS
Docker
### GPU
Other
### CPU
AMD
### Ollama version
0.3.8
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6543/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6163
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6163/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6163/comments
|
https://api.github.com/repos/ollama/ollama/issues/6163/events
|
https://github.com/ollama/ollama/issues/6163
| 2,447,164,293
|
I_kwDOJ0Z1Ps6R3MOF
| 6,163
|
GPU Usage Never Exceeds 70% When Using LLaMA 3:8B with Ollama
|
{
"login": "drspam1991",
"id": 6633208,
"node_id": "MDQ6VXNlcjY2MzMyMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6633208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drspam1991",
"html_url": "https://github.com/drspam1991",
"followers_url": "https://api.github.com/users/drspam1991/followers",
"following_url": "https://api.github.com/users/drspam1991/following{/other_user}",
"gists_url": "https://api.github.com/users/drspam1991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drspam1991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drspam1991/subscriptions",
"organizations_url": "https://api.github.com/users/drspam1991/orgs",
"repos_url": "https://api.github.com/users/drspam1991/repos",
"events_url": "https://api.github.com/users/drspam1991/events{/privacy}",
"received_events_url": "https://api.github.com/users/drspam1991/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-08-04T14:41:41
| 2024-10-24T03:12:09
| 2024-10-24T03:12:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using Ollama with the LLaMA 3:8B model and all 33 offload layers loaded on the GPU, GPU usage never goes over 70%. This seems suboptimal and may indicate an issue with how resources are being utilized.

### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.2.8
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6163/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2494
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2494/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2494/comments
|
https://api.github.com/repos/ollama/ollama/issues/2494/events
|
https://github.com/ollama/ollama/issues/2494
| 2,134,522,584
|
I_kwDOJ0Z1Ps5_OjrY
| 2,494
|
Change language in Llava
|
{
"login": "shersoni610",
"id": 57876250,
"node_id": "MDQ6VXNlcjU3ODc2MjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/57876250?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shersoni610",
"html_url": "https://github.com/shersoni610",
"followers_url": "https://api.github.com/users/shersoni610/followers",
"following_url": "https://api.github.com/users/shersoni610/following{/other_user}",
"gists_url": "https://api.github.com/users/shersoni610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shersoni610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shersoni610/subscriptions",
"organizations_url": "https://api.github.com/users/shersoni610/orgs",
"repos_url": "https://api.github.com/users/shersoni610/repos",
"events_url": "https://api.github.com/users/shersoni610/events{/privacy}",
"received_events_url": "https://api.github.com/users/shersoni610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-02-14T14:49:26
| 2024-03-11T18:35:40
| 2024-03-11T18:35:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I am running `ollama run llava`. The output is in a non-English language. How do I change it?
|
{
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyeva/followers",
"following_url": "https://api.github.com/users/hoyyeva/following{/other_user}",
"gists_url": "https://api.github.com/users/hoyyeva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoyyeva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoyyeva/subscriptions",
"organizations_url": "https://api.github.com/users/hoyyeva/orgs",
"repos_url": "https://api.github.com/users/hoyyeva/repos",
"events_url": "https://api.github.com/users/hoyyeva/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoyyeva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2494/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8420
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8420/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8420/comments
|
https://api.github.com/repos/ollama/ollama/issues/8420/events
|
https://github.com/ollama/ollama/issues/8420
| 2,787,063,564
|
I_kwDOJ0Z1Ps6mHzcM
| 8,420
|
Windows Installer hangs at the end of install
|
{
"login": "Norbz",
"id": 6388929,
"node_id": "MDQ6VXNlcjYzODg5Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6388929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Norbz",
"html_url": "https://github.com/Norbz",
"followers_url": "https://api.github.com/users/Norbz/followers",
"following_url": "https://api.github.com/users/Norbz/following{/other_user}",
"gists_url": "https://api.github.com/users/Norbz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Norbz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Norbz/subscriptions",
"organizations_url": "https://api.github.com/users/Norbz/orgs",
"repos_url": "https://api.github.com/users/Norbz/repos",
"events_url": "https://api.github.com/users/Norbz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Norbz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-14T12:55:24
| 2025-01-22T18:29:01
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I've encountered a bug while upgrading ollama that also occurs when installing or trying to uninstall.
At the very end of the process, the window becomes disabled (it can't be moved or brought to the foreground), with its icon highlighted in red on the taskbar, as if a modal were in the way.
Looking for logs showed nothing, so I looked at the call stack of the install process and found that Overwolf was causing the hang.
Killing Overwolf from the tray icon immediately finishes the install.
Looking at the logs, the installation is actually finished; Overwolf just seems to be preventing the installer window from closing automatically.

```
2025-01-14 13:42:40.680 Deleting uninstall key left over from previous non administrative install.
2025-01-14 13:42:40.680 Creating new uninstall key: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Uninstall\{44E83376-CE68-45EB-8FC1-393500EB558C}_is1
2025-01-14 13:42:40.680 Writing uninstall key values.
2025-01-14 13:42:40.680 Detected previous administrative 64-bit install? No
2025-01-14 13:42:40.680 Detected previous administrative 32-bit install? No
2025-01-14 13:42:40.684 Installation process succeeded.
```
I've had help on this on the Discord, but since the problem is quite unusual, I thought it could be worth reporting the issue (which might be on Overwolf's side), if only to add the very simple workaround to the FAQ.
Let me know if I can provide more logs or information; I'd be glad to help.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.5
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8420/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/428
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/428/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/428/comments
|
https://api.github.com/repos/ollama/ollama/issues/428/events
|
https://github.com/ollama/ollama/pull/428
| 1,868,357,447
|
PR_kwDOJ0Z1Ps5Y3KhT
| 428
|
update upload chunks
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-27T04:59:06
| 2023-08-30T14:47:18
| 2023-08-30T14:47:17
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/428",
"html_url": "https://github.com/ollama/ollama/pull/428",
"diff_url": "https://github.com/ollama/ollama/pull/428.diff",
"patch_url": "https://github.com/ollama/ollama/pull/428.patch",
"merged_at": "2023-08-30T14:47:17"
}
|
This PR increases the upload chunk size, which improves throughput. To provide a more responsive progress bar, it changes the file reader back to a pipe, while keeping the main reader a SectionReader for simplicity.
Minor change to the HTTP status code checks: the error states have been loosened to < 400 (http.StatusBadRequest) for success and >= 400 for failure.
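The loosened status-code convention can be sketched as follows (the helper name is hypothetical, not the PR's actual code):

```go
package main

import (
	"fmt"
	"net/http"
)

// checkResponse treats any status below 400 as success and anything
// at or above 400 as a failure, mirroring the loosened check described
// above. (Illustrative sketch only.)
func checkResponse(status int) error {
	if status < http.StatusBadRequest { // http.StatusBadRequest == 400
		return nil
	}
	return fmt.Errorf("upload failed with status %d", status)
}

func main() {
	fmt.Println(checkResponse(http.StatusPartialContent)) // 206 -> <nil> (success)
	fmt.Println(checkResponse(http.StatusConflict))       // 409 -> error
}
```

This accepts the full 2xx/3xx range (including 206 Partial Content, which chunked uploads can return) instead of checking for one exact code.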
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/428/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4024
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4024/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4024/comments
|
https://api.github.com/repos/ollama/ollama/issues/4024/events
|
https://github.com/ollama/ollama/issues/4024
| 2,268,836,522
|
I_kwDOJ0Z1Ps6HO7Kq
| 4,024
|
ModuleNotFoundError: No module named 'distutils'
|
{
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com/users/HougeLangley/followers",
"following_url": "https://api.github.com/users/HougeLangley/following{/other_user}",
"gists_url": "https://api.github.com/users/HougeLangley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HougeLangley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HougeLangley/subscriptions",
"organizations_url": "https://api.github.com/users/HougeLangley/orgs",
"repos_url": "https://api.github.com/users/HougeLangley/repos",
"events_url": "https://api.github.com/users/HougeLangley/events{/privacy}",
"received_events_url": "https://api.github.com/users/HougeLangley/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-29T11:52:28
| 2024-04-29T19:09:41
| 2024-04-29T19:09:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
https://github.com/ollama/ollama/blob/main/docs/import.md
```
(ollama) ~/ollama [ pip install -r llm/llama.cpp/requirements.txt main * ] 7:48 下午
Collecting numpy~=1.24.4 (from -r llm/llama.cpp/./requirements/requirements-convert.txt (line 1))
Using cached numpy-1.24.4.tar.gz (10.9 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
ERROR: Exception:
Traceback (most recent call last):
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 245, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/commands/install.py", line 377, in run
requirement_set = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 95, in resolve
result = self._result = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py", line 397, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py", line 173, in _add_to_criteria
if not criterion.candidates:
^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/resolvelib/structs.py", line 156, in __bool__
return bool(self._sequence)
^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 155, in __bool__
return any(self)
^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in <genexpr>
return (c for c in iterator if id(c) not in self._incompatible_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 47, in _iter_built
candidate = func()
^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 182, in _make_candidate_from_link
base: Optional[BaseCandidate] = self._make_base_candidate_from_link(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 228, in _make_base_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 290, in __init__
super().__init__(
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 156, in __init__
self.dist = self._prepare()
^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 222, in _prepare
dist = self._prepare_distribution()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 301, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/operations/prepare.py", line 525, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/operations/prepare.py", line 640, in _prepare_linked_requirement
dist = _get_prepared_distribution(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/operations/prepare.py", line 71, in _get_prepared_distribution
abstract_dist.prepare_distribution_metadata(
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/distributions/sdist.py", line 54, in prepare_distribution_metadata
self._install_build_reqs(finder)
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/distributions/sdist.py", line 124, in _install_build_reqs
build_reqs = self._get_build_requires_wheel()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/distributions/sdist.py", line 101, in _get_build_requires_wheel
return backend.get_requires_for_build_wheel()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/utils/misc.py", line 745, in get_requires_for_build_wheel
return super().get_requires_for_build_wheel(config_settings=cs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 166, in get_requires_for_build_wheel
return self._call_hook('get_requires_for_build_wheel', {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 321, in _call_hook
raise BackendUnavailable(data.get('traceback', ''))
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most recent call last):
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
obj = import_module(mod_path)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1310, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/tmp/pip-build-env-bvkekmy3/overlay/lib/python3.12/site-packages/setuptools/__init__.py", line 10, in <module>
import distutils.core
ModuleNotFoundError: No module named 'distutils'
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4024/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/722
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/722/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/722/comments
|
https://api.github.com/repos/ollama/ollama/issues/722/events
|
https://github.com/ollama/ollama/pull/722
| 1,930,743,823
|
PR_kwDOJ0Z1Ps5cI7Gc
| 722
|
add feedback for reading model metadata
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-06T18:22:24
| 2023-10-06T20:05:33
| 2023-10-06T20:05:33
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/722",
"html_url": "https://github.com/ollama/ollama/pull/722",
"diff_url": "https://github.com/ollama/ollama/pull/722.diff",
"patch_url": "https://github.com/ollama/ollama/pull/722.patch",
"merged_at": "2023-10-06T20:05:33"
}
|
When creating a model from a large base layer (e.g. 70B), reading the model metadata is slow because the weights file is large.
Without feedback, the creation sits in the "looking for model" stage for a long time, which makes it look like something has gone wrong.
New behavior:
```
parsing modelfile
looking for model
⠋ reading model metadata
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/722/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3725
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3725/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3725/comments
|
https://api.github.com/repos/ollama/ollama/issues/3725/events
|
https://github.com/ollama/ollama/pull/3725
| 2,249,805,810
|
PR_kwDOJ0Z1Ps5tAOM6
| 3,725
|
Add env override for opts.NumThread & opts.NumGPU
|
{
"login": "lainedfles",
"id": 126992880,
"node_id": "U_kgDOB5HB8A",
"avatar_url": "https://avatars.githubusercontent.com/u/126992880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lainedfles",
"html_url": "https://github.com/lainedfles",
"followers_url": "https://api.github.com/users/lainedfles/followers",
"following_url": "https://api.github.com/users/lainedfles/following{/other_user}",
"gists_url": "https://api.github.com/users/lainedfles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lainedfles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lainedfles/subscriptions",
"organizations_url": "https://api.github.com/users/lainedfles/orgs",
"repos_url": "https://api.github.com/users/lainedfles/repos",
"events_url": "https://api.github.com/users/lainedfles/events{/privacy}",
"received_events_url": "https://api.github.com/users/lainedfles/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-18T05:48:39
| 2024-06-02T07:05:01
| 2024-06-02T07:04:55
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3725",
"html_url": "https://github.com/ollama/ollama/pull/3725",
"diff_url": "https://github.com/ollama/ollama/pull/3725.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3725.patch",
"merged_at": null
}
|
This enables the capability to limit or disable both `NumThread` & `NumGPU` which can be useful for testing and running multiple concurrent instances. `NumThread` limitation is also valuable to reliably enforce core affinity (as with `taskset`) on hybrid architectures like modern Intel & ARM.
- `OLLAMA_MAX_LLM_THREADS` **Maximum number of LLM CPU threads (default is unlimited: 0)**
- `OLLAMA_MAX_GPU_LAYERS` **Maximum number of GPU layers (default is unlimited: -1)**
|
{
"login": "lainedfles",
"id": 126992880,
"node_id": "U_kgDOB5HB8A",
"avatar_url": "https://avatars.githubusercontent.com/u/126992880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lainedfles",
"html_url": "https://github.com/lainedfles",
"followers_url": "https://api.github.com/users/lainedfles/followers",
"following_url": "https://api.github.com/users/lainedfles/following{/other_user}",
"gists_url": "https://api.github.com/users/lainedfles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lainedfles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lainedfles/subscriptions",
"organizations_url": "https://api.github.com/users/lainedfles/orgs",
"repos_url": "https://api.github.com/users/lainedfles/repos",
"events_url": "https://api.github.com/users/lainedfles/events{/privacy}",
"received_events_url": "https://api.github.com/users/lainedfles/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3725/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8205
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8205/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8205/comments
|
https://api.github.com/repos/ollama/ollama/issues/8205/events
|
https://github.com/ollama/ollama/issues/8205
| 2,754,318,668
|
I_kwDOJ0Z1Ps6kK5FM
| 8,205
|
docker installation failure due to your installation failure...
|
{
"login": "remco-pc",
"id": 8077908,
"node_id": "MDQ6VXNlcjgwNzc5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8077908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remco-pc",
"html_url": "https://github.com/remco-pc",
"followers_url": "https://api.github.com/users/remco-pc/followers",
"following_url": "https://api.github.com/users/remco-pc/following{/other_user}",
"gists_url": "https://api.github.com/users/remco-pc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remco-pc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remco-pc/subscriptions",
"organizations_url": "https://api.github.com/users/remco-pc/orgs",
"repos_url": "https://api.github.com/users/remco-pc/repos",
"events_url": "https://api.github.com/users/remco-pc/events{/privacy}",
"received_events_url": "https://api.github.com/users/remco-pc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-12-21T19:02:55
| 2024-12-21T19:03:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
=> [vps2_core 23/28] RUN curl -fsSL https://ollama.com/install.sh | sh 5432.3s
=> => # >>> Installing ollama to /usr/local
=> => # >>> Downloading Linux amd64 bundle
=> => # ############################################# 63.8%
=> => # [output clipped, log limit 2MiB reached]
### OS
Linux, Docker, WSL2
### GPU
_No response_
### CPU
Intel
### Ollama version
?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8205/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1543
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1543/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1543/comments
|
https://api.github.com/repos/ollama/ollama/issues/1543/events
|
https://github.com/ollama/ollama/issues/1543
| 2,043,926,808
|
I_kwDOJ0Z1Ps5509kY
| 1,543
|
Better model quantization defaults from ollama.com
|
{
"login": "knoopx",
"id": 100993,
"node_id": "MDQ6VXNlcjEwMDk5Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/100993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/knoopx",
"html_url": "https://github.com/knoopx",
"followers_url": "https://api.github.com/users/knoopx/followers",
"following_url": "https://api.github.com/users/knoopx/following{/other_user}",
"gists_url": "https://api.github.com/users/knoopx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/knoopx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/knoopx/subscriptions",
"organizations_url": "https://api.github.com/users/knoopx/orgs",
"repos_url": "https://api.github.com/users/knoopx/repos",
"events_url": "https://api.github.com/users/knoopx/events{/privacy}",
"received_events_url": "https://api.github.com/users/knoopx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 7
| 2023-12-15T15:28:44
| 2024-12-29T19:19:24
| 2024-12-29T19:19:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is there a reason the `latest` tag on the model hub points, by default, to the older `q4_0` quants? The newer `k_m/s` quants are
supposedly better, and the size difference is usually just a few hundred megabytes; it would be nice if it defaulted to those instead.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1543/reactions",
"total_count": 5,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/1543/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/578
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/578/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/578/comments
|
https://api.github.com/repos/ollama/ollama/issues/578/events
|
https://github.com/ollama/ollama/pull/578
| 1,909,406,505
|
PR_kwDOJ0Z1Ps5bBH2h
| 578
|
switch to forked readline lib which doesn't wreck the repl prompt
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 11
| 2023-09-22T19:17:00
| 2023-09-29T21:41:16
| 2023-09-22T19:17:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/578",
"html_url": "https://github.com/ollama/ollama/pull/578",
"diff_url": "https://github.com/ollama/ollama/pull/578.diff",
"patch_url": "https://github.com/ollama/ollama/pull/578.patch",
"merged_at": "2023-09-22T19:17:45"
}
|
There's a bug in the readline library for non-Windows systems which causes the placeholder text to drop a character. This switches us over to a patched version temporarily.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/578/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8061
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8061/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8061/comments
|
https://api.github.com/repos/ollama/ollama/issues/8061/events
|
https://github.com/ollama/ollama/pull/8061
| 2,734,697,757
|
PR_kwDOJ0Z1Ps6E9AvC
| 8,061
|
Refactor fixBlobs to use WalkDir for efficiency instead of Walk
|
{
"login": "Vkanhan",
"id": 158135476,
"node_id": "U_kgDOCWz0tA",
"avatar_url": "https://avatars.githubusercontent.com/u/158135476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vkanhan",
"html_url": "https://github.com/Vkanhan",
"followers_url": "https://api.github.com/users/Vkanhan/followers",
"following_url": "https://api.github.com/users/Vkanhan/following{/other_user}",
"gists_url": "https://api.github.com/users/Vkanhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vkanhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vkanhan/subscriptions",
"organizations_url": "https://api.github.com/users/Vkanhan/orgs",
"repos_url": "https://api.github.com/users/Vkanhan/repos",
"events_url": "https://api.github.com/users/Vkanhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vkanhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-12-12T03:53:03
| 2024-12-21T08:48:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8061",
"html_url": "https://github.com/ollama/ollama/pull/8061",
"diff_url": "https://github.com/ollama/ollama/pull/8061.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8061.patch",
"merged_at": null
}
|
filepath.Walk calls os.Lstat on every file and directory to retrieve an os.FileInfo, which is slower.
filepath.WalkDir avoids these unnecessary system calls because it provides a fs.DirEntry, which exposes file type information without requiring a stat call.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8061/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5052
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5052/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5052/comments
|
https://api.github.com/repos/ollama/ollama/issues/5052/events
|
https://github.com/ollama/ollama/pull/5052
| 2,354,232,037
|
PR_kwDOJ0Z1Ps5yh02b
| 5,052
|
DELETE v1/models/{model} OpenAI Compatability
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-14T23:29:11
| 2024-08-12T17:27:58
| 2024-08-12T17:27:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5052",
"html_url": "https://github.com/ollama/ollama/pull/5052",
"diff_url": "https://github.com/ollama/ollama/pull/5052.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5052.patch",
"merged_at": null
}
| null |
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5052/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8284
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8284/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8284/comments
|
https://api.github.com/repos/ollama/ollama/issues/8284/events
|
https://github.com/ollama/ollama/pull/8284
| 2,765,544,039
|
PR_kwDOJ0Z1Ps6GjgWv
| 8,284
|
Add CUSTOM_CPU_FLAGS to Dockerfile.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2025-01-02T07:08:32
| 2025-01-06T17:17:19
| 2025-01-06T17:17:19
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8284",
"html_url": "https://github.com/ollama/ollama/pull/8284",
"diff_url": "https://github.com/ollama/ollama/pull/8284.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8284.patch",
"merged_at": "2025-01-06T17:17:19"
}
|
Allow Docker images to be built with custom CPU flags.
Fixes: #7622
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8284/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4509
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4509/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4509/comments
|
https://api.github.com/repos/ollama/ollama/issues/4509/events
|
https://github.com/ollama/ollama/issues/4509
| 2,303,817,772
|
I_kwDOJ0Z1Ps6JUXgs
| 4,509
|
API HTTP code: 500, "error":"failed to generate embedding with langchain
|
{
"login": "buaa39055211",
"id": 45760993,
"node_id": "MDQ6VXNlcjQ1NzYwOTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/45760993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buaa39055211",
"html_url": "https://github.com/buaa39055211",
"followers_url": "https://api.github.com/users/buaa39055211/followers",
"following_url": "https://api.github.com/users/buaa39055211/following{/other_user}",
"gists_url": "https://api.github.com/users/buaa39055211/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buaa39055211/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buaa39055211/subscriptions",
"organizations_url": "https://api.github.com/users/buaa39055211/orgs",
"repos_url": "https://api.github.com/users/buaa39055211/repos",
"events_url": "https://api.github.com/users/buaa39055211/events{/privacy}",
"received_events_url": "https://api.github.com/users/buaa39055211/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 8
| 2024-05-18T03:04:45
| 2024-11-06T17:31:27
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After version 0.1.32 of Ollama, there has always been a bug in the embedding API.
The embedding models I used are "smartcreation/bge-large-zh-v1.5" and dztech/bge-large-zh:v1.5, pulled from ollama.
```python
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.document_loaders import (
    CSVLoader,
    UnstructuredWordDocumentLoader,
)
from langchain_community.vectorstores import Qdrant
from qdrant_client import QdrantClient

base_url = "http://127.0.0.1:11434"
embeddings = OllamaEmbeddings(model="dztech/bge-large-zh:v1.5", base_url=base_url)

LOADER_MAPPING = {
    ".csv": (CSVLoader, {}),
    # ".docx": (Docx2txtLoader, {}),
    ".doc": (UnstructuredWordDocumentLoader, {"mode": "elements"}),
    ".docx": (UnstructuredWordDocumentLoader, {}),
}

def split(uploaded_file_name):
    # Create embeddings
    print("Creating new vectorstore")
    texts = process_documents(uploaded_file_name)
    print("Creating embeddings. May take some minutes...")
    db = Qdrant.from_documents(
        texts,
        embedding=embeddings,
        url="localhost:7541",
        collection_name=uploaded_file_name,
    )
    print(uploaded_file_name)
    query = "insert"
    docs = db.similarity_search(query)
    print(docs[0].page_content)
```
```
File "/Users/mac/anaconda3/envs/ag2/lib/python3.11/site-packages/langchain_community/vectorstores/qdrant.py", line 2037, in _embed_texts
  embeddings = self.embeddings.embed_documents(list(texts))
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/anaconda3/envs/ag2/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 204, in embed_documents
  embeddings = self._embed(instruction_pairs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/anaconda3/envs/ag2/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 192, in _embed
  return [self._process_emb_response(prompt) for prompt in iter_]
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/anaconda3/envs/ag2/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 192, in <listcomp>
  return [self._process_emb_response(prompt) for prompt in iter_]
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/anaconda3/envs/ag2/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 166, in _process_emb_response
  raise ValueError(
ValueError: Error raised by inference API HTTP code: 500, {"error":"failed to generate embedding"}
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33-0.1.38
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4509/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4509/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5791
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5791/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5791/comments
|
https://api.github.com/repos/ollama/ollama/issues/5791/events
|
https://github.com/ollama/ollama/issues/5791
| 2,418,403,753
|
I_kwDOJ0Z1Ps6QJemp
| 5,791
|
Ability to pass --predict to llama.cpp server in ollama
|
{
"login": "1cekrim",
"id": 48536705,
"node_id": "MDQ6VXNlcjQ4NTM2NzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/48536705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1cekrim",
"html_url": "https://github.com/1cekrim",
"followers_url": "https://api.github.com/users/1cekrim/followers",
"following_url": "https://api.github.com/users/1cekrim/following{/other_user}",
"gists_url": "https://api.github.com/users/1cekrim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1cekrim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1cekrim/subscriptions",
"organizations_url": "https://api.github.com/users/1cekrim/orgs",
"repos_url": "https://api.github.com/users/1cekrim/repos",
"events_url": "https://api.github.com/users/1cekrim/events{/privacy}",
"received_events_url": "https://api.github.com/users/1cekrim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-07-19T08:54:38
| 2024-07-19T19:59:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Due to the structure of the deepseek v2 coder model, there was [a problem with gibberish when k shift occurred.](https://github.com/ggerganov/llama.cpp/issues/8498)
And to solve this, [a patch](https://github.com/ggerganov/llama.cpp/pull/8501) that causes GGML_ASSERT when k shift occurs in the deepseek v2 model has been merged.
Anyway, the easiest way to solve this problem is to pass the `--predict -2` option when running the llama.cpp server. This option limits the number of tokens to predict until the context is full.
It would be a good idea to expose the n_predict value as an environment variable when serving ollama, or to allow setting it in the Modelfile so that it can be passed as the `--predict` value in `NewLlamaServer`.
Also, if possible, it would be good to apply it to ollama.com's Deepseek V2 models.
- Related issues
- https://github.com/ollama/ollama/issues/5537
- https://github.com/ollama/ollama/issues/5339
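For reference, a hedged sketch of what this could look like with the Modelfile parameter that exists today: Ollama's `num_predict` parameter limits the number of tokens to predict, and `-2` means fill the context, matching llama.cpp's `--predict -2` (the model tag below is only illustrative):

```
FROM deepseek-v2:16b
PARAMETER num_predict -2
```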
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5791/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8672
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8672/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8672/comments
|
https://api.github.com/repos/ollama/ollama/issues/8672/events
|
https://github.com/ollama/ollama/pull/8672
| 2,819,339,493
|
PR_kwDOJ0Z1Ps6JbBf1
| 8,672
|
openai: set num_ctx through extra body
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-29T21:14:09
| 2025-01-29T21:22:12
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8672",
"html_url": "https://github.com/ollama/ollama/pull/8672",
"diff_url": "https://github.com/ollama/ollama/pull/8672.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8672.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8672/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6728
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6728/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6728/comments
|
https://api.github.com/repos/ollama/ollama/issues/6728/events
|
https://github.com/ollama/ollama/issues/6728
| 2,516,547,220
|
I_kwDOJ0Z1Ps6V_3aU
| 6,728
|
Add alias of /quit and /exit for /bye.
|
{
"login": "bulrush15",
"id": 7031486,
"node_id": "MDQ6VXNlcjcwMzE0ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7031486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bulrush15",
"html_url": "https://github.com/bulrush15",
"followers_url": "https://api.github.com/users/bulrush15/followers",
"following_url": "https://api.github.com/users/bulrush15/following{/other_user}",
"gists_url": "https://api.github.com/users/bulrush15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bulrush15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bulrush15/subscriptions",
"organizations_url": "https://api.github.com/users/bulrush15/orgs",
"repos_url": "https://api.github.com/users/bulrush15/repos",
"events_url": "https://api.github.com/users/bulrush15/events{/privacy}",
"received_events_url": "https://api.github.com/users/bulrush15/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 5
| 2024-09-10T13:52:04
| 2024-10-23T21:40:10
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
For now it might be easier to add an alias in the program itself for /bye and allow the use of /quit and /exit as well.
Or design a way for the user to have an .alias file to define their own aliases.
Thanks! This is a great tool!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6728/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6732
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6732/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6732/comments
|
https://api.github.com/repos/ollama/ollama/issues/6732/events
|
https://github.com/ollama/ollama/pull/6732
| 2,516,946,971
|
PR_kwDOJ0Z1Ps57AnyL
| 6,732
|
add *_proxy to env map for debugging
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-10T16:37:53
| 2024-09-10T23:13:27
| 2024-09-10T23:13:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6732",
"html_url": "https://github.com/ollama/ollama/pull/6732",
"diff_url": "https://github.com/ollama/ollama/pull/6732.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6732.patch",
"merged_at": "2024-09-10T23:13:26"
}
|
This adds entries to the env map on server startup so that it logs http_proxy/https_proxy/no_proxy and their upper-case variants, allowing easier debugging of proxy-related issues.
e.g.
```
2024/09/10 09:33:32 routes.go:1125: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/path/to/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: http_proxy: https_proxy: no_proxy:]"
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6732/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6732/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7708
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7708/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7708/comments
|
https://api.github.com/repos/ollama/ollama/issues/7708/events
|
https://github.com/ollama/ollama/issues/7708
| 2,666,258,839
|
I_kwDOJ0Z1Ps6e6-GX
| 7,708
|
Error: Head "https://localhost:11434/": http: server gave HTTP response to HTTPS client
|
{
"login": "yipy0005",
"id": 8023685,
"node_id": "MDQ6VXNlcjgwMjM2ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8023685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yipy0005",
"html_url": "https://github.com/yipy0005",
"followers_url": "https://api.github.com/users/yipy0005/followers",
"following_url": "https://api.github.com/users/yipy0005/following{/other_user}",
"gists_url": "https://api.github.com/users/yipy0005/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yipy0005/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yipy0005/subscriptions",
"organizations_url": "https://api.github.com/users/yipy0005/orgs",
"repos_url": "https://api.github.com/users/yipy0005/repos",
"events_url": "https://api.github.com/users/yipy0005/events{/privacy}",
"received_events_url": "https://api.github.com/users/yipy0005/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-17T18:34:07
| 2024-11-23T15:45:05
| 2024-11-23T15:45:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I was running `ollama ls` on my Macbook running on Sequoia and I saw this error message:
`Error: Head "https://localhost:11434/": http: server gave HTTP response to HTTPS client`
What can I do to resolve this?
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.4.2
|
{
"login": "yipy0005",
"id": 8023685,
"node_id": "MDQ6VXNlcjgwMjM2ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8023685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yipy0005",
"html_url": "https://github.com/yipy0005",
"followers_url": "https://api.github.com/users/yipy0005/followers",
"following_url": "https://api.github.com/users/yipy0005/following{/other_user}",
"gists_url": "https://api.github.com/users/yipy0005/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yipy0005/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yipy0005/subscriptions",
"organizations_url": "https://api.github.com/users/yipy0005/orgs",
"repos_url": "https://api.github.com/users/yipy0005/repos",
"events_url": "https://api.github.com/users/yipy0005/events{/privacy}",
"received_events_url": "https://api.github.com/users/yipy0005/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7708/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8416
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8416/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8416/comments
|
https://api.github.com/repos/ollama/ollama/issues/8416/events
|
https://github.com/ollama/ollama/issues/8416
| 2,786,358,326
|
I_kwDOJ0Z1Ps6mFHQ2
| 8,416
|
OpenWebUI-Ollama does not fully utilize NVIDIA GPU when context length or parallel sessions increase
|
{
"login": "rpaGuyai",
"id": 154881376,
"node_id": "U_kgDOCTtNYA",
"avatar_url": "https://avatars.githubusercontent.com/u/154881376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rpaGuyai",
"html_url": "https://github.com/rpaGuyai",
"followers_url": "https://api.github.com/users/rpaGuyai/followers",
"following_url": "https://api.github.com/users/rpaGuyai/following{/other_user}",
"gists_url": "https://api.github.com/users/rpaGuyai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rpaGuyai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rpaGuyai/subscriptions",
"organizations_url": "https://api.github.com/users/rpaGuyai/orgs",
"repos_url": "https://api.github.com/users/rpaGuyai/repos",
"events_url": "https://api.github.com/users/rpaGuyai/events{/privacy}",
"received_events_url": "https://api.github.com/users/rpaGuyai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2025-01-14T06:21:08
| 2025-01-28T21:13:41
| 2025-01-28T21:13:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am hosting OpenWebUI on my server (specs: AWS g4dn.12xlarge, Memory: 192 GB RAM, GPU: 4 x NVIDIA Tesla T4, 64 GB GPU memory in total, 16 GB each).
Issues and scenarios:
I have found a sweet spot for optimized results: when the context length is set to 11000 and Environment="OLLAMA_NUM_PARALLEL=10" is set in the ollama.service file, it works well, utilizing all 4 GPUs with minimal CPU.
However, if I increase either the context length to, say, 15000 or num parallel to, say, 15, the speed drops drastically and the load is shared almost 50-50 between CPU and GPU; the GPU is not fully utilized, causing slow responses with just 5-6 concurrent sessions.
If I further increase either the context length to 20K or num parallel to 20, then in such cases and beyond it stops using the GPU entirely and the load is fully transferred to the CPU, which kills the speed.
Please can someone help with this? I want to understand what makes the load spill over to the CPU in these cases, and why the GPU cannot be fully utilized past a certain point.
For information: we thought this was the maximum the 12xlarge hardware could support, so we upgraded to g4dn.metal for testing, but the result is the same.
Need help from experts please: is this due to some configuration in Ollama or OpenWebUI, or is it that the T4 GPUs are only 16 GB each? Do we need the entire GPU memory in a single GPU to fully utilize it? In my case with the 12xlarge, the 64 GB (16*4) is split across 4 GPUs.
Any suggestions or guidance will be very helpful.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.5
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8416/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2130
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2130/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2130/comments
|
https://api.github.com/repos/ollama/ollama/issues/2130/events
|
https://github.com/ollama/ollama/pull/2130
| 2,092,816,158
|
PR_kwDOJ0Z1Ps5kqdNR
| 2,130
|
Make CPU builds parallel and customizable AMD GPUs
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-21T22:44:27
| 2024-01-22T00:14:14
| 2024-01-22T00:14:12
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2130",
"html_url": "https://github.com/ollama/ollama/pull/2130",
"diff_url": "https://github.com/ollama/ollama/pull/2130.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2130.patch",
"merged_at": "2024-01-22T00:14:12"
}
|
The Linux build now supports parallel CPU builds to speed things up. This also exposes AMD GPU targets as an optional setting for advanced users who want to alter our default set.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2130/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3958
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3958/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3958/comments
|
https://api.github.com/repos/ollama/ollama/issues/3958/events
|
https://github.com/ollama/ollama/pull/3958
| 2,266,496,717
|
PR_kwDOJ0Z1Ps5t4ZjX
| 3,958
|
use merge base for diff-tree
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-26T20:57:30
| 2024-04-26T21:17:57
| 2024-04-26T21:17:56
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3958",
"html_url": "https://github.com/ollama/ollama/pull/3958",
"diff_url": "https://github.com/ollama/ollama/pull/3958.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3958.patch",
"merged_at": "2024-04-26T21:17:56"
}
|
The diff-tree previously compared the head ref (the latest commit in the PR) against the base ref (the latest commit in the target branch). If the target branch is updated, this comparison also includes the new files in the target, which is wrong.
Instead, find and compare the head ref against the merge base of the head and base refs. This should ensure only the changes added in the PR are evaluated.
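The approach above can be sketched with plain git commands (the ref names here are examples, not the actual CI refs):

```shell
# Sketch: diff against the merge base instead of the base branch tip,
# so files changed on the target branch after the PR forked are excluded.
base=$(git merge-base origin/main HEAD)                     # last common ancestor
git diff-tree -r --name-only --no-commit-id "$base" HEAD    # files changed in the PR only
```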
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3958/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5516
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5516/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5516/comments
|
https://api.github.com/repos/ollama/ollama/issues/5516/events
|
https://github.com/ollama/ollama/issues/5516
| 2,393,634,516
|
I_kwDOJ0Z1Ps6Oq_bU
| 5,516
|
Add GLM-4v-9b
|
{
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/followers",
"following_url": "https://api.github.com/users/ddpasa/following{/other_user}",
"gists_url": "https://api.github.com/users/ddpasa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddpasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddpasa/subscriptions",
"organizations_url": "https://api.github.com/users/ddpasa/orgs",
"repos_url": "https://api.github.com/users/ddpasa/repos",
"events_url": "https://api.github.com/users/ddpasa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddpasa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-06T17:15:22
| 2024-07-08T22:31:07
| 2024-07-08T22:31:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It's a new state-of-the-art visual language model from the folks who made CogVLM (its previous incarnation, which was the best visual LM for a while).
https://huggingface.co/THUDM/glm-4v-9b
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5516/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1266
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1266/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1266/comments
|
https://api.github.com/repos/ollama/ollama/issues/1266/events
|
https://github.com/ollama/ollama/issues/1266
| 2,009,923,164
|
I_kwDOJ0Z1Ps53zP5c
| 1,266
|
Add a stop/restart command
|
{
"login": "davlgd",
"id": 1110600,
"node_id": "MDQ6VXNlcjExMTA2MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1110600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davlgd",
"html_url": "https://github.com/davlgd",
"followers_url": "https://api.github.com/users/davlgd/followers",
"following_url": "https://api.github.com/users/davlgd/following{/other_user}",
"gists_url": "https://api.github.com/users/davlgd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davlgd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davlgd/subscriptions",
"organizations_url": "https://api.github.com/users/davlgd/orgs",
"repos_url": "https://api.github.com/users/davlgd/repos",
"events_url": "https://api.github.com/users/davlgd/events{/privacy}",
"received_events_url": "https://api.github.com/users/davlgd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 14
| 2023-11-24T15:49:43
| 2024-04-08T14:45:30
| 2024-02-20T01:14:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I set up and launch `ollama` the manual way, I can start the server with the `serve` command but don't have an easy way to stop or restart it (so I need to kill the process). It would be great to have dedicated commands for these actions.
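Until a dedicated command exists, a common workaround (assuming a Linux install where Ollama runs under systemd, or a manually launched `ollama serve`) is:

```shell
# If Ollama was installed as a systemd service:
sudo systemctl stop ollama      # stop the server
sudo systemctl restart ollama   # or restart it

# If `ollama serve` was launched manually, stop it by process name:
pkill -f "ollama serve"
```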
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1266/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1266/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8664
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8664/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8664/comments
|
https://api.github.com/repos/ollama/ollama/issues/8664/events
|
https://github.com/ollama/ollama/issues/8664
| 2,818,425,629
|
I_kwDOJ0Z1Ps6n_cMd
| 8,664
|
Wrong GPU size calculation for the `command-r7b:7b` model
|
{
"login": "vvidovic",
"id": 3177210,
"node_id": "MDQ6VXNlcjMxNzcyMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3177210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vvidovic",
"html_url": "https://github.com/vvidovic",
"followers_url": "https://api.github.com/users/vvidovic/followers",
"following_url": "https://api.github.com/users/vvidovic/following{/other_user}",
"gists_url": "https://api.github.com/users/vvidovic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vvidovic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvidovic/subscriptions",
"organizations_url": "https://api.github.com/users/vvidovic/orgs",
"repos_url": "https://api.github.com/users/vvidovic/repos",
"events_url": "https://api.github.com/users/vvidovic/events{/privacy}",
"received_events_url": "https://api.github.com/users/vvidovic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 4
| 2025-01-29T14:44:48
| 2025-01-30T07:47:04
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I wasn't able to run the `command-r7b:7b` model, while all other, larger models ran successfully.
After some investigation and trial and error, I realized I could fix the issue by creating a new model that offloads fewer layers to the GPU.
Initial state:
```
$ nvidia-smi
Wed Jan 29 15:33:17 2025
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01 Driver Version: 535.183.01 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA RTX A1000 Laptop GPU Off | 00000000:01:00.0 On | N/A |
| N/A 56C P3 6W / 35W | 149MiB / 4096MiB | 16% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 4937 G /usr/lib/xorg/Xorg 143MiB |
+---------------------------------------------------------------------------------------+
```
Running model, error produced:
```
$ ollama run command-r7b:7b
Error: llama runner process has terminated: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1531936768
llama_new_context_with_model: failed to allocate compute buffers
```
A new model with fewer layers was created using the following modelfile:
```
# ollama create command-r7b-v:7b -f command-r7.modelfile
FROM command-r7b:7b
PARAMETER num_gpu 17
```
Successfully running newly created model:
```
$ ollama run command-r7b-v:7b
>>> /bye
```
Log information for the error and success cases, produced by `journalctl -S today`, is attached.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8664/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3493
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3493/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3493/comments
|
https://api.github.com/repos/ollama/ollama/issues/3493/events
|
https://github.com/ollama/ollama/issues/3493
| 2,225,936,278
|
I_kwDOJ0Z1Ps6ErReW
| 3,493
|
[WIN11] Ollama extremely slow with Command-r 35b and 3 RTX 4090
|
{
"login": "GlobalAIVision",
"id": 163559315,
"node_id": "U_kgDOCb-3kw",
"avatar_url": "https://avatars.githubusercontent.com/u/163559315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GlobalAIVision",
"html_url": "https://github.com/GlobalAIVision",
"followers_url": "https://api.github.com/users/GlobalAIVision/followers",
"following_url": "https://api.github.com/users/GlobalAIVision/following{/other_user}",
"gists_url": "https://api.github.com/users/GlobalAIVision/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GlobalAIVision/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GlobalAIVision/subscriptions",
"organizations_url": "https://api.github.com/users/GlobalAIVision/orgs",
"repos_url": "https://api.github.com/users/GlobalAIVision/repos",
"events_url": "https://api.github.com/users/GlobalAIVision/events{/privacy}",
"received_events_url": "https://api.github.com/users/GlobalAIVision/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-04-04T16:07:32
| 2024-06-22T00:05:33
| 2024-06-22T00:05:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Issue: Ollama is really slow (2.70 tokens per second) even though I have 3 RTX 4090s and an i9-14900K CPU.
### What did you expect to see?
I expected to see faster token generation for a 35b model on 3 RTX 4090s.
### Steps to reproduce
I have CUDA Toolkit 12.4. On startup, I see this:
`time=2024-04-04T17:58:24.004+02:00 level=INFO source=payload_common.go:140 msg="Dynamic LLM libraries [cpu_avx rocm_v5.7 cpu_avx2 cpu cuda_v11.3]"`
All 41/41 layers are entirely on the GPU.
When generating, I obtain this:
`{"function":"print_timings","level":"INFO","line":286,"msg":"generation eval time = 28903.17 ms / 78 runs ( 370.55 ms per token, 2.70 tokens per second)","n_decoded":78,"n_tokens_second":2.6986661116179373,"slot_id":0,"t_token":370.5534358974359,"t_token_generation":28903.168,"task_id":148,"tid":"11876","timestamp":1712246421}`
This is really slow. I'm also using Open WebUI for generation.
Could the problem be the difference in CUDA versions? (I have 12.4, and Ollama says it loads the DLL for 11.3.)
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.30
### GPU
Intel
### GPU info
3 X RTX 4090
### CPU
Intel
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3493/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3493/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3379
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3379/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3379/comments
|
https://api.github.com/repos/ollama/ollama/issues/3379/events
|
https://github.com/ollama/ollama/pull/3379
| 2,212,006,387
|
PR_kwDOJ0Z1Ps5q_U-8
| 3,379
|
fix: trim quotes on OLLAMA_ORIGINS
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-27T22:27:05
| 2024-03-28T21:14:19
| 2024-03-28T21:14:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3379",
"html_url": "https://github.com/ollama/ollama/pull/3379",
"diff_url": "https://github.com/ollama/ollama/pull/3379.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3379.patch",
"merged_at": "2024-03-28T21:14:18"
}
|
resolves #3365
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3379/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5620
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5620/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5620/comments
|
https://api.github.com/repos/ollama/ollama/issues/5620/events
|
https://github.com/ollama/ollama/pull/5620
| 2,401,926,454
|
PR_kwDOJ0Z1Ps51B5Qq
| 5,620
|
update embedded templates
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-10T23:45:10
| 2024-07-11T00:16:26
| 2024-07-11T00:16:24
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5620",
"html_url": "https://github.com/ollama/ollama/pull/5620",
"diff_url": "https://github.com/ollama/ollama/pull/5620.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5620.patch",
"merged_at": "2024-07-11T00:16:24"
}
|
Add a test to ensure the legacy and messages templates produce the same output. Some templates cannot produce the same outputs, so those will be skipped.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5620/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3061
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3061/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3061/comments
|
https://api.github.com/repos/ollama/ollama/issues/3061/events
|
https://github.com/ollama/ollama/issues/3061
| 2,179,903,460
|
I_kwDOJ0Z1Ps6B7q_k
| 3,061
|
How to specify a port number for `ollama serve`?
|
{
"login": "soonhokong",
"id": 403281,
"node_id": "MDQ6VXNlcjQwMzI4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/403281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soonhokong",
"html_url": "https://github.com/soonhokong",
"followers_url": "https://api.github.com/users/soonhokong/followers",
"following_url": "https://api.github.com/users/soonhokong/following{/other_user}",
"gists_url": "https://api.github.com/users/soonhokong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soonhokong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soonhokong/subscriptions",
"organizations_url": "https://api.github.com/users/soonhokong/orgs",
"repos_url": "https://api.github.com/users/soonhokong/repos",
"events_url": "https://api.github.com/users/soonhokong/events{/privacy}",
"received_events_url": "https://api.github.com/users/soonhokong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-11T18:24:33
| 2024-03-11T20:50:25
| 2024-03-11T20:50:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is there a way to specify a different port number (other than 11434) when I start `ollama serve`?
|
{
"login": "soonhokong",
"id": 403281,
"node_id": "MDQ6VXNlcjQwMzI4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/403281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soonhokong",
"html_url": "https://github.com/soonhokong",
"followers_url": "https://api.github.com/users/soonhokong/followers",
"following_url": "https://api.github.com/users/soonhokong/following{/other_user}",
"gists_url": "https://api.github.com/users/soonhokong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soonhokong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soonhokong/subscriptions",
"organizations_url": "https://api.github.com/users/soonhokong/orgs",
"repos_url": "https://api.github.com/users/soonhokong/repos",
"events_url": "https://api.github.com/users/soonhokong/events{/privacy}",
"received_events_url": "https://api.github.com/users/soonhokong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3061/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4348
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4348/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4348/comments
|
https://api.github.com/repos/ollama/ollama/issues/4348/events
|
https://github.com/ollama/ollama/issues/4348
| 2,290,744,781
|
I_kwDOJ0Z1Ps6Iif3N
| 4,348
|
noKeyError: rms_norm_eps
|
{
"login": "czs397001",
"id": 163976485,
"node_id": "U_kgDOCcYVJQ",
"avatar_url": "https://avatars.githubusercontent.com/u/163976485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czs397001",
"html_url": "https://github.com/czs397001",
"followers_url": "https://api.github.com/users/czs397001/followers",
"following_url": "https://api.github.com/users/czs397001/following{/other_user}",
"gists_url": "https://api.github.com/users/czs397001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/czs397001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czs397001/subscriptions",
"organizations_url": "https://api.github.com/users/czs397001/orgs",
"repos_url": "https://api.github.com/users/czs397001/repos",
"events_url": "https://api.github.com/users/czs397001/events{/privacy}",
"received_events_url": "https://api.github.com/users/czs397001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-11T07:30:26
| 2024-09-02T02:56:00
| 2024-09-02T02:56:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
(base) $ python /data/github/ollama/llm/llama.cpp/convert.py ... --pad-vocab
Loading model file /data/tempfiles/Qwen-1_8B-chat-diseasecls2/model.safetensors
Loading model
Traceback (most recent call last):
  File "/data/github/ollama/llm/llama.cpp/convert.py", line 1555, in <module>
    main()
  File "/data/github/ollama/llm/llama.cpp/convert.py", line 1498, in main
    params = Params.load(model_plus)
  File "/data/github/ollama/llm/llama.cpp/convert.py", line 328, in load
    params = Params.loadHFTransformerJson(model_plus.model, hf_config_path)
  File "/data/github/ollama/llm/llama.cpp/convert.py", line 266, in loadHFTransformerJson
    f_norm_eps = config["rms_norm_eps"]
KeyError: 'rms_norm_eps'
```
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4348/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5037
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5037/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5037/comments
|
https://api.github.com/repos/ollama/ollama/issues/5037/events
|
https://github.com/ollama/ollama/pull/5037
| 2,352,248,805
|
PR_kwDOJ0Z1Ps5ybEGv
| 5,037
|
More parallelism on windows generate
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-14T00:14:36
| 2024-06-15T15:03:08
| 2024-06-15T15:03:06
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5037",
"html_url": "https://github.com/ollama/ollama/pull/5037",
"diff_url": "https://github.com/ollama/ollama/pull/5037.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5037.patch",
"merged_at": "2024-06-15T15:03:05"
}
|
Make the build faster
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5037/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/989
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/989/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/989/comments
|
https://api.github.com/repos/ollama/ollama/issues/989/events
|
https://github.com/ollama/ollama/issues/989
| 1,976,894,544
|
I_kwDOJ0Z1Ps511QRQ
| 989
|
CLI clear command
|
{
"login": "tommyneu",
"id": 57959550,
"node_id": "MDQ6VXNlcjU3OTU5NTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/57959550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tommyneu",
"html_url": "https://github.com/tommyneu",
"followers_url": "https://api.github.com/users/tommyneu/followers",
"following_url": "https://api.github.com/users/tommyneu/following{/other_user}",
"gists_url": "https://api.github.com/users/tommyneu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tommyneu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tommyneu/subscriptions",
"organizations_url": "https://api.github.com/users/tommyneu/orgs",
"repos_url": "https://api.github.com/users/tommyneu/repos",
"events_url": "https://api.github.com/users/tommyneu/events{/privacy}",
"received_events_url": "https://api.github.com/users/tommyneu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null | 3
| 2023-11-03T20:33:31
| 2023-11-04T13:18:42
| 2023-11-03T23:12:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The terminal gets a bit cluttered when asking questions. It would be nice if I could clear the terminal with a command. Something like `/clear` to clear the terminal so my next output is a little easier to read or I can clear the output when I'm done asking questions for now.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/989/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/989/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8215
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8215/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8215/comments
|
https://api.github.com/repos/ollama/ollama/issues/8215/events
|
https://github.com/ollama/ollama/pull/8215
| 2,755,217,178
|
PR_kwDOJ0Z1Ps6GCfNJ
| 8,215
|
added source.go for syntax highlighting on code blocks
|
{
"login": "belfie13",
"id": 39270867,
"node_id": "MDQ6VXNlcjM5MjcwODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/39270867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/belfie13",
"html_url": "https://github.com/belfie13",
"followers_url": "https://api.github.com/users/belfie13/followers",
"following_url": "https://api.github.com/users/belfie13/following{/other_user}",
"gists_url": "https://api.github.com/users/belfie13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/belfie13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/belfie13/subscriptions",
"organizations_url": "https://api.github.com/users/belfie13/orgs",
"repos_url": "https://api.github.com/users/belfie13/repos",
"events_url": "https://api.github.com/users/belfie13/events{/privacy}",
"received_events_url": "https://api.github.com/users/belfie13/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-12-23T04:54:20
| 2025-01-17T16:15:36
| 2024-12-27T18:17:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8215",
"html_url": "https://github.com/ollama/ollama/pull/8215",
"diff_url": "https://github.com/ollama/ollama/pull/8215.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8215.patch",
"merged_at": "2024-12-27T18:17:49"
}
|
The code blocks were shown as raw preformatted text when the info string was `golang`.
I changed the code block info string to the value of the `go` language's `tm_scope` from linguist in https://github.com/github-linguist/linguist/blob/main/lib/linguist/languages.yml, which works fine on my end.
The highlighting issue appears to be global; I've also tested it in another browser.
I can update the other files if this is accepted, thank you.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8215/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2207
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2207/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2207/comments
|
https://api.github.com/repos/ollama/ollama/issues/2207/events
|
https://github.com/ollama/ollama/issues/2207
| 2,102,416,310
|
I_kwDOJ0Z1Ps59UFO2
| 2,207
|
Magicoder-S-DS-6.7B-GGUF is not working
|
{
"login": "pablovalle",
"id": 55744688,
"node_id": "MDQ6VXNlcjU1NzQ0Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/55744688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pablovalle",
"html_url": "https://github.com/pablovalle",
"followers_url": "https://api.github.com/users/pablovalle/followers",
"following_url": "https://api.github.com/users/pablovalle/following{/other_user}",
"gists_url": "https://api.github.com/users/pablovalle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pablovalle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pablovalle/subscriptions",
"organizations_url": "https://api.github.com/users/pablovalle/orgs",
"repos_url": "https://api.github.com/users/pablovalle/repos",
"events_url": "https://api.github.com/users/pablovalle/events{/privacy}",
"received_events_url": "https://api.github.com/users/pablovalle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-01-26T15:32:45
| 2024-04-10T13:06:41
| 2024-04-10T13:06:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I wanted to run a specific version of magicoder ([Magicoder-S-DS-6.7B-GGUF](https://huggingface.co/TheBloke/Magicoder-S-DS-6.7B-GGUF/blob/main/magicoder-s-ds-6.7b.Q5_K_S.gguf)) so I started creating a customize Modelfile as follows:
```
FROM ./magicoder-s-ds-6.7b
TEMPLATE """{{- if .System }}
<|im_start|>system {{ .System }}<|im_end|>
{{- end }}
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
```
Then I ran the following command:
```
ollama create pruebaPablo -f ./Modelfile
```
Everything worked properly with the following output:
```
transferring model data
creating model layer
creating template layer
creating system layer
creating parameters layer
creating config layer
using already created layer sha256:7f0011bbe58c5bd2b3aa81abaa595d0e2181d7fa287a39a5899e7bb9a3117262
writing layer sha256:fbd3a46e44915917bdfe24b8c1cb7bf74f49cf12761b45c9787842b477c5a8fe
using already created layer sha256:f02dd72bb2423204352eabc5637b44d79d17f109fdb510a7c51455892aa2d216
writing layer sha256:f37e567548df4ecb18a3ef059c296cc50f0e0a11e5c1eb04b427688b1d4ea3ea
writing manifest
success
```
But, then, when I execute the run instruction it gets stuck loading and I can not send anything to the model as you can see in the following image:

Is there anything I am doing wrong?
Many thanks,
Pablo.
|
{
"login": "pablovalle",
"id": 55744688,
"node_id": "MDQ6VXNlcjU1NzQ0Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/55744688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pablovalle",
"html_url": "https://github.com/pablovalle",
"followers_url": "https://api.github.com/users/pablovalle/followers",
"following_url": "https://api.github.com/users/pablovalle/following{/other_user}",
"gists_url": "https://api.github.com/users/pablovalle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pablovalle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pablovalle/subscriptions",
"organizations_url": "https://api.github.com/users/pablovalle/orgs",
"repos_url": "https://api.github.com/users/pablovalle/repos",
"events_url": "https://api.github.com/users/pablovalle/events{/privacy}",
"received_events_url": "https://api.github.com/users/pablovalle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2207/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2207/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7335
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7335/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7335/comments
|
https://api.github.com/repos/ollama/ollama/issues/7335/events
|
https://github.com/ollama/ollama/pull/7335
| 2,609,816,452
|
PR_kwDOJ0Z1Ps5_rc2r
| 7,335
|
Fixes #6728 - adding /quit alias for /bye and updating relevant help messages
|
{
"login": "kmerenkov",
"id": 26511,
"node_id": "MDQ6VXNlcjI2NTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/26511?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kmerenkov",
"html_url": "https://github.com/kmerenkov",
"followers_url": "https://api.github.com/users/kmerenkov/followers",
"following_url": "https://api.github.com/users/kmerenkov/following{/other_user}",
"gists_url": "https://api.github.com/users/kmerenkov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kmerenkov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kmerenkov/subscriptions",
"organizations_url": "https://api.github.com/users/kmerenkov/orgs",
"repos_url": "https://api.github.com/users/kmerenkov/repos",
"events_url": "https://api.github.com/users/kmerenkov/events{/privacy}",
"received_events_url": "https://api.github.com/users/kmerenkov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-10-23T20:50:12
| 2024-11-21T19:01:22
| 2024-11-21T18:57:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7335",
"html_url": "https://github.com/ollama/ollama/pull/7335",
"diff_url": "https://github.com/ollama/ollama/pull/7335.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7335.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7335/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1069
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1069/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1069/comments
|
https://api.github.com/repos/ollama/ollama/issues/1069/events
|
https://github.com/ollama/ollama/issues/1069
| 1,986,783,052
|
I_kwDOJ0Z1Ps52a-dM
| 1,069
|
FROM: command not found"
|
{
"login": "seyi33",
"id": 22711332,
"node_id": "MDQ6VXNlcjIyNzExMzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/22711332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seyi33",
"html_url": "https://github.com/seyi33",
"followers_url": "https://api.github.com/users/seyi33/followers",
"following_url": "https://api.github.com/users/seyi33/following{/other_user}",
"gists_url": "https://api.github.com/users/seyi33/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seyi33/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seyi33/subscriptions",
"organizations_url": "https://api.github.com/users/seyi33/orgs",
"repos_url": "https://api.github.com/users/seyi33/repos",
"events_url": "https://api.github.com/users/seyi33/events{/privacy}",
"received_events_url": "https://api.github.com/users/seyi33/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-11-10T03:25:11
| 2023-11-11T04:31:58
| 2023-11-11T04:31:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I installed Ollama via WSL, but I keep getting "FROM: command not found" when I try to create a Modelfile using a local model. This is the command I have been using: "FROM /mistral-7b-instruct-v0.1.Q4_K_M.gguf".
|
{
"login": "seyi33",
"id": 22711332,
"node_id": "MDQ6VXNlcjIyNzExMzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/22711332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seyi33",
"html_url": "https://github.com/seyi33",
"followers_url": "https://api.github.com/users/seyi33/followers",
"following_url": "https://api.github.com/users/seyi33/following{/other_user}",
"gists_url": "https://api.github.com/users/seyi33/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seyi33/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seyi33/subscriptions",
"organizations_url": "https://api.github.com/users/seyi33/orgs",
"repos_url": "https://api.github.com/users/seyi33/repos",
"events_url": "https://api.github.com/users/seyi33/events{/privacy}",
"received_events_url": "https://api.github.com/users/seyi33/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1069/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6684
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6684/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6684/comments
|
https://api.github.com/repos/ollama/ollama/issues/6684/events
|
https://github.com/ollama/ollama/issues/6684
| 2,511,542,529
|
I_kwDOJ0Z1Ps6VsxkB
| 6,684
|
Deepseek v2.5 sha256 digest mismatch
|
{
"login": "mintisan",
"id": 9136049,
"node_id": "MDQ6VXNlcjkxMzYwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9136049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mintisan",
"html_url": "https://github.com/mintisan",
"followers_url": "https://api.github.com/users/mintisan/followers",
"following_url": "https://api.github.com/users/mintisan/following{/other_user}",
"gists_url": "https://api.github.com/users/mintisan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mintisan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mintisan/subscriptions",
"organizations_url": "https://api.github.com/users/mintisan/orgs",
"repos_url": "https://api.github.com/users/mintisan/repos",
"events_url": "https://api.github.com/users/mintisan/events{/privacy}",
"received_events_url": "https://api.github.com/users/mintisan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-07T08:33:14
| 2025-01-29T02:38:09
| 2024-09-08T12:01:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
➜  ~ ollama pull deepseek-v2.5:236b
pulling manifest
pulling manifest
pulling manifest
pulling manifest
pulling manifest
pulling manifest
pulling manifest
pulling manifest
pulling manifest
pulling manifest
pulling 799587243b19... 100% ▕█████████████████████████████████████████████████████████▏  132 GB
pulling 8aa4c0321ccd... 100% ▕█████████████████████████████████████████████████████████▏   493 B
pulling ccfee4895df0... 100% ▕█████████████████████████████████████████████████████████▏  13 KB
pulling 059ecca256c0... 100% ▕█████████████████████████████████████████████████████████▏   241 B
pulling f50c0c6cdd1e... 100% ▕█████████████████████████████████████████████████████████▏   495 B
verifying sha256 digest
Error: digest mismatch, file must be downloaded again: want sha256:799587243b19fdcc715a4aab927f5700d1b9508bd0b8b0db9dc2bd6fc622979c, got sha256:e6500636daabf1a172ad5775c9d17f170478d5243055c26d8877f1aa3503425f
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.9
|
{
"login": "mintisan",
"id": 9136049,
"node_id": "MDQ6VXNlcjkxMzYwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9136049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mintisan",
"html_url": "https://github.com/mintisan",
"followers_url": "https://api.github.com/users/mintisan/followers",
"following_url": "https://api.github.com/users/mintisan/following{/other_user}",
"gists_url": "https://api.github.com/users/mintisan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mintisan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mintisan/subscriptions",
"organizations_url": "https://api.github.com/users/mintisan/orgs",
"repos_url": "https://api.github.com/users/mintisan/repos",
"events_url": "https://api.github.com/users/mintisan/events{/privacy}",
"received_events_url": "https://api.github.com/users/mintisan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6684/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3821
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3821/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3821/comments
|
https://api.github.com/repos/ollama/ollama/issues/3821/events
|
https://github.com/ollama/ollama/issues/3821
| 2,256,446,872
|
I_kwDOJ0Z1Ps6GfqWY
| 3,821
|
Wrong storage directory for Orion model [bug might hide a dangerous arbitrary file overwriting problem]
|
{
"login": "liar666",
"id": 3216927,
"node_id": "MDQ6VXNlcjMyMTY5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3216927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liar666",
"html_url": "https://github.com/liar666",
"followers_url": "https://api.github.com/users/liar666/followers",
"following_url": "https://api.github.com/users/liar666/following{/other_user}",
"gists_url": "https://api.github.com/users/liar666/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liar666/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liar666/subscriptions",
"organizations_url": "https://api.github.com/users/liar666/orgs",
"repos_url": "https://api.github.com/users/liar666/repos",
"events_url": "https://api.github.com/users/liar666/events{/privacy}",
"received_events_url": "https://api.github.com/users/liar666/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-22T12:45:30
| 2024-04-22T18:42:40
| 2024-04-22T18:42:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I've written a script to export/backup ollama's model (see https://github.com/ollama/ollama/issues/335#issuecomment-1968768357)
When I tried to back up orion14b-q4, I discovered a strange thing: it is stored in `/usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/orionstar/orion14b-q4/latest` instead of the usual `/usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/<model>/latest`
I assume this is because the model name itself is "badly" formed (it contains a '/'):
```
zephyr:latest bbe38b81adec 4.1 GB 7 weeks ago
orionstar/orion14b-q4:latest 9297ec2a4101 8.8 GB 17 minutes ago
```
and some parsing code has mistaken the '/' in the name for a directory separator.
Someone could probably abuse this parsing bug to overwrite any file that the `ollama` user/group is allowed to change.
My advice would be to strip non-letter/digit characters from model names.
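One possible mitigation sketch (not Ollama's actual implementation; the function name and allow-list are hypothetical): replace every character outside a conservative allow-list before a model name is ever joined into a filesystem path, so it can never act as a separator or traversal component.

```python
import re

def sanitize_model_name(name: str) -> str:
    # Allow only letters, digits, dot, underscore, and hyphen; anything
    # else ('/', ':', '..' sequences via '/', etc.) becomes '_' and can
    # no longer escape the intended storage directory.
    return re.sub(r"[^A-Za-z0-9._-]", "_", name)

# sanitize_model_name("orionstar/orion14b-q4") -> "orionstar_orion14b-q4"
# sanitize_model_name("../../etc/passwd")      -> ".._.._etc_passwd"
```

An allow-list like this is safer than trying to block specific characters, since it fails closed on anything unexpected.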
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.1.27
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3821/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6699
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6699/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6699/comments
|
https://api.github.com/repos/ollama/ollama/issues/6699/events
|
https://github.com/ollama/ollama/pull/6699
| 2,512,245,500
|
PR_kwDOJ0Z1Ps56wx5I
| 6,699
|
readme: add crewai to community integrations
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-08T07:36:12
| 2024-09-08T07:36:26
| 2024-09-08T07:36:25
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6699",
"html_url": "https://github.com/ollama/ollama/pull/6699",
"diff_url": "https://github.com/ollama/ollama/pull/6699.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6699.patch",
"merged_at": "2024-09-08T07:36:25"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6699/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5654
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5654/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5654/comments
|
https://api.github.com/repos/ollama/ollama/issues/5654/events
|
https://github.com/ollama/ollama/issues/5654
| 2,406,387,816
|
I_kwDOJ0Z1Ps6PbpBo
| 5,654
|
Failure to Generate Response After Model Unloading
|
{
"login": "NWBx01",
"id": 21149527,
"node_id": "MDQ6VXNlcjIxMTQ5NTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/21149527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NWBx01",
"html_url": "https://github.com/NWBx01",
"followers_url": "https://api.github.com/users/NWBx01/followers",
"following_url": "https://api.github.com/users/NWBx01/following{/other_user}",
"gists_url": "https://api.github.com/users/NWBx01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NWBx01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NWBx01/subscriptions",
"organizations_url": "https://api.github.com/users/NWBx01/orgs",
"repos_url": "https://api.github.com/users/NWBx01/repos",
"events_url": "https://api.github.com/users/NWBx01/events{/privacy}",
"received_events_url": "https://api.github.com/users/NWBx01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-07-12T21:43:22
| 2024-11-01T20:37:58
| 2024-10-24T02:47:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Generating a response after first starting Ollama works flawlessly from what I can tell. I am able to change models and generate responses from prompts. After the model unloads due to inactivity, however, I am unable to generate any response.
I use Nvidia vGPU 17.1 to pass my GPU through to a virtual machine running the GPU-capable ollama docker image. The CUDA compute capability is the same on host and guest: 6.1 on the host Quadro P4000 and 6.1 on the guest GRID P40-8Q. Both also have the same amount of VRAM: 8GB on the host and 8GB on the guest. I don't believe this would cause any issues, but I thought it would be wise to mention it.
Below are logs from when this happens (I've had to split this into two messages because of length):
```
2024/07/12 20:35:32 routes.go:1033: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-12T20:35:32.959Z level=INFO source=images.go:751 msg="total blobs: 25"
time=2024-07-12T20:35:32.960Z level=INFO source=images.go:758 msg="total unused blobs removed: 0"
time=2024-07-12T20:35:32.961Z level=INFO source=routes.go:1080 msg="Listening on [::]:11434 (version 0.2.1)"
time=2024-07-12T20:35:32.961Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama4058211551/runners
time=2024-07-12T20:35:32.962Z level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
time=2024-07-12T20:35:32.962Z level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz
time=2024-07-12T20:35:32.962Z level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz
time=2024-07-12T20:35:32.962Z level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublas.so.11.gz
time=2024-07-12T20:35:32.962Z level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublasLt.so.11.gz
time=2024-07-12T20:35:32.962Z level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcudart.so.11.0.gz
time=2024-07-12T20:35:32.962Z level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz
time=2024-07-12T20:35:32.962Z level=DEBUG source=payload.go:182 msg=extracting variant=rocm_v60101 file=build/linux/x86_64/rocm_v60101/bin/deps.txt.gz
time=2024-07-12T20:35:32.962Z level=DEBUG source=payload.go:182 msg=extracting variant=rocm_v60101 file=build/linux/x86_64/rocm_v60101/bin/ollama_llama_server.gz
time=2024-07-12T20:35:37.143Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu/ollama_llama_server
time=2024-07-12T20:35:37.143Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu_avx/ollama_llama_server
time=2024-07-12T20:35:37.143Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu_avx2/ollama_llama_server
time=2024-07-12T20:35:37.143Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cuda_v11/ollama_llama_server
time=2024-07-12T20:35:37.143Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/rocm_v60101/ollama_llama_server
time=2024-07-12T20:35:37.143Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 rocm_v60101 cpu cpu_avx]"
time=2024-07-12T20:35:37.143Z level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-07-12T20:35:37.143Z level=DEBUG source=sched.go:102 msg="starting llm scheduler"
time=2024-07-12T20:35:37.143Z level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-12T20:35:37.144Z level=DEBUG source=gpu.go:91 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-07-12T20:35:37.144Z level=DEBUG source=gpu.go:438 msg="Searching for GPU library" name=libcuda.so*
time=2024-07-12T20:35:37.144Z level=DEBUG source=gpu.go:457 msg="gpu library search" globs="[/usr/local/nvidia/lib/libcuda.so** /usr/local/nvidia/lib64/libcuda.so** /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-07-12T20:35:37.150Z level=DEBUG source=gpu.go:491 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.550.54.15]
CUDA driver version: 12.4
time=2024-07-12T20:35:37.202Z level=DEBUG source=gpu.go:124 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.550.54.15
[GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda] CUDA totalMem 8192 mb
[GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda] CUDA freeMem 7541 mb
[GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda] Compute Capability 6.1
time=2024-07-12T20:35:37.364Z level=DEBUG source=amd_linux.go:356 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-07-12T20:35:37.364Z level=INFO source=types.go:103 msg="inference compute" id=GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda library=cuda compute=6.1 driver=12.4 name="GRID P40-8Q" total="8.0 GiB" available="7.4 GiB"
[GIN] 2024/07/12 - 20:36:17 | 200 | 1.769258ms | 192.168.75.195 | GET "/api/tags"
[GIN] 2024/07/12 - 20:36:17 | 200 | 94.14µs | 192.168.75.195 | GET "/api/version"
time=2024-07-12T20:38:26.695Z level=DEBUG source=gpu.go:336 msg="updating system memory data" before.total="23.5 GiB" before.free="20.5 GiB" now.total="23.5 GiB" now.free="20.5 GiB"
CUDA driver version: 12.4
time=2024-07-12T20:38:26.899Z level=DEBUG source=gpu.go:377 msg="updating cuda memory data" gpu=GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda name="GRID P40-8Q" before.total="8.0 GiB" before.free="7.4 GiB" now.total="8.0 GiB" now.free="7.4 GiB" now.used="650.0 MiB"
releasing cuda driver library
time=2024-07-12T20:38:26.899Z level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2024-07-12T20:38:26.923Z level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[7.4 GiB]"
time=2024-07-12T20:38:26.924Z level=DEBUG source=sched.go:251 msg="loading first model" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
time=2024-07-12T20:38:26.924Z level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[7.4 GiB]"
time=2024-07-12T20:38:26.924Z level=INFO source=sched.go:738 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa gpu=GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda parallel=4 available=7908343808 required="6.2 GiB"
time=2024-07-12T20:38:26.924Z level=DEBUG source=server.go:98 msg="system memory" total="23.5 GiB" free=21974278144
time=2024-07-12T20:38:26.924Z level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[7.4 GiB]"
time=2024-07-12T20:38:26.925Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[7.4 GiB]" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-07-12T20:38:26.925Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu/ollama_llama_server
time=2024-07-12T20:38:26.925Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu_avx/ollama_llama_server
time=2024-07-12T20:38:26.925Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu_avx2/ollama_llama_server
time=2024-07-12T20:38:26.925Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cuda_v11/ollama_llama_server
time=2024-07-12T20:38:26.925Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/rocm_v60101/ollama_llama_server
time=2024-07-12T20:38:26.926Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu/ollama_llama_server
time=2024-07-12T20:38:26.926Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu_avx/ollama_llama_server
time=2024-07-12T20:38:26.926Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu_avx2/ollama_llama_server
time=2024-07-12T20:38:26.926Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cuda_v11/ollama_llama_server
time=2024-07-12T20:38:26.926Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/rocm_v60101/ollama_llama_server
time=2024-07-12T20:38:26.926Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama4058211551/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --verbose --parallel 4 --port 44335"
time=2024-07-12T20:38:26.926Z level=DEBUG source=server.go:390 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/tmp/ollama4058211551/runners/cuda_v11:/tmp/ollama4058211551/runners:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 CUDA_VISIBLE_DEVICES=GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda]"
time=2024-07-12T20:38:26.927Z level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-12T20:38:26.927Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-12T20:38:26.927Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="139851958501376" timestamp=1720816706
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="139851958501376" timestamp=1720816706 total_threads=8
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="44335" tid="139851958501376" timestamp=1720816706
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-07-12T20:38:27.179Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: GRID P40-8Q, compute capability 6.1, VMM: no
llm_load_tensors: ggml ctx size = 0.27 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 281.81 MiB
llm_load_tensors: CUDA0 buffer size = 4155.99 MiB
time=2024-07-12T20:38:28.184Z level=DEBUG source=server.go:615 msg="model load progress 0.18"
time=2024-07-12T20:38:28.436Z level=DEBUG source=server.go:615 msg="model load progress 0.64"
time=2024-07-12T20:38:28.686Z level=DEBUG source=server.go:615 msg="model load progress 0.99"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
time=2024-07-12T20:38:28.938Z level=DEBUG source=server.go:615 msg="model load progress 1.00"
time=2024-07-12T20:38:29.189Z level=DEBUG source=server.go:618 msg="model load completed, waiting for server to become available" status="llm server loading model"
DEBUG [initialize] initializing slots | n_slots=4 tid="139851958501376" timestamp=1720816709
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=0 tid="139851958501376" timestamp=1720816709
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=1 tid="139851958501376" timestamp=1720816709
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=2 tid="139851958501376" timestamp=1720816709
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=3 tid="139851958501376" timestamp=1720816709
INFO [main] model loaded | tid="139851958501376" timestamp=1720816709
DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="139851958501376" timestamp=1720816709
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=0 tid="139851958501376" timestamp=1720816709
time=2024-07-12T20:38:29.440Z level=INFO source=server.go:609 msg="llama runner started in 2.51 seconds"
time=2024-07-12T20:38:29.440Z level=DEBUG source=sched.go:487 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1 tid="139851958501376" timestamp=1720816709
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=39120 status=200 tid="139851487993856" timestamp=1720816709
time=2024-07-12T20:38:29.485Z level=DEBUG source=prompt.go:168 msg="prompt now fits in context window" required=19 window=2048
time=2024-07-12T20:38:29.485Z level=DEBUG source=routes.go:1334 msg="chat handler" prompt="<|start_header_id|>user<|end_header_id|>\n\nWhat's the deal with orange juice?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" images=0
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=2 tid="139851958501376" timestamp=1720816709
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=3 tid="139851958501376" timestamp=1720816709
DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=18 slot_id=0 task_id=3 tid="139851958501376" timestamp=1720816709
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=3 tid="139851958501376" timestamp=1720816709
DEBUG [print_timings] prompt eval time = 123.72 ms / 18 tokens ( 6.87 ms per token, 145.49 tokens per second) | n_prompt_tokens_processed=18 n_tokens_second=145.48511202353626 slot_id=0 t_prompt_processing=123.724 t_token=6.873555555555556 task_id=3 tid="139851958501376" timestamp=1720816731
DEBUG [print_timings] generation eval time = 21845.44 ms / 482 runs ( 45.32 ms per token, 22.06 tokens per second) | n_decoded=482 n_tokens_second=22.064097209468482 slot_id=0 t_token=45.322497925311204 t_token_generation=21845.444 task_id=3 tid="139851958501376" timestamp=1720816731
DEBUG [print_timings] total time = 21969.17 ms | slot_id=0 t_prompt_processing=123.724 t_token_generation=21845.444 t_total=21969.167999999998 task_id=3 tid="139851958501376" timestamp=1720816731
DEBUG [update_slots] slot released | n_cache_tokens=500 n_ctx=8192 n_past=499 n_system_tokens=0 slot_id=0 task_id=3 tid="139851958501376" timestamp=1720816731 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/completion" remote_addr="127.0.0.1" remote_port=39120 status=200 tid="139851487993856" timestamp=1720816731
[GIN] 2024/07/12 - 20:38:51 | 200 | 24.824350507s | 192.168.75.195 | POST "/api/chat"
time=2024-07-12T20:38:51.499Z level=DEBUG source=sched.go:491 msg="context for request finished"
time=2024-07-12T20:38:51.499Z level=DEBUG source=sched.go:363 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa duration=5m0s
time=2024-07-12T20:38:51.499Z level=DEBUG source=sched.go:381 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa refCount=0
time=2024-07-12T20:38:51.606Z level=DEBUG source=sched.go:600 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=488 tid="139851958501376" timestamp=1720816731
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=489 tid="139851958501376" timestamp=1720816731
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=50248 status=200 tid="139851479601152" timestamp=1720816731
time=2024-07-12T20:38:51.653Z level=DEBUG source=prompt.go:168 msg="prompt now fits in context window" required=92 window=2048
time=2024-07-12T20:38:51.653Z level=DEBUG source=routes.go:1334 msg="chat handler" prompt="<|start_header_id|>user<|end_header_id|>\n\nHere is the query:\nWhat's the deal with orange juice?\n\nCreate a concise, 3-5 word phrase as a title for the previous query. Avoid quotation marks or special formatting. RESPOND ONLY WITH THE TITLE TEXT.\n\nExamples of titles:\nStock Market Trends\nPerfect Chocolate Chip Recipe\nEvolution of Music Streaming\nRemote Work Productivity Tips\nArtificial Intelligence in Healthcare\nVideo Game Development Insights<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" images=0
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=490 tid="139851958501376" timestamp=1720816731
DEBUG [prefix_slot] slot with common prefix found | 0=["slot_id",0,"characters",42]
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=491 tid="139851958501376" timestamp=1720816731
DEBUG [update_slots] slot progression | ga_i=0 n_past=5 n_past_se=0 n_prompt_tokens_processed=91 slot_id=0 task_id=491 tid="139851958501376" timestamp=1720816731
DEBUG [update_slots] kv cache rm [p0, end) | p0=5 slot_id=0 task_id=491 tid="139851958501376" timestamp=1720816731
DEBUG [print_timings] prompt eval time = 300.25 ms / 91 tokens ( 3.30 ms per token, 303.08 tokens per second) | n_prompt_tokens_processed=91 n_tokens_second=303.0797566036416 slot_id=0 t_prompt_processing=300.251 t_token=3.299461538461538 task_id=491 tid="139851958501376" timestamp=1720816732
DEBUG [print_timings] generation eval time = 175.14 ms / 5 runs ( 35.03 ms per token, 28.55 tokens per second) | n_decoded=5 n_tokens_second=28.548589699668838 slot_id=0 t_token=35.028 t_token_generation=175.14 task_id=491 tid="139851958501376" timestamp=1720816732
DEBUG [print_timings] total time = 475.39 ms | slot_id=0 t_prompt_processing=300.251 t_token_generation=175.14 t_total=475.39099999999996 task_id=491 tid="139851958501376" timestamp=1720816732
DEBUG [update_slots] slot released | n_cache_tokens=96 n_ctx=8192 n_past=95 n_system_tokens=0 slot_id=0 task_id=491 tid="139851958501376" timestamp=1720816732 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/completion" remote_addr="127.0.0.1" remote_port=50248 status=200 tid="139851479601152" timestamp=1720816732
[GIN] 2024/07/12 - 20:38:52 | 200 | 590.647083ms | 192.168.75.195 | POST "/v1/chat/completions"
time=2024-07-12T20:38:52.178Z level=DEBUG source=sched.go:432 msg="context for request finished"
time=2024-07-12T20:38:52.179Z level=DEBUG source=sched.go:363 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa duration=5m0s
time=2024-07-12T20:38:52.179Z level=DEBUG source=sched.go:381 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa refCount=0
time=2024-07-12T20:39:23.060Z level=DEBUG source=sched.go:600 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=499 tid="139851958501376" timestamp=1720816763
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=500 tid="139851958501376" timestamp=1720816763
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=55306 status=200 tid="139851471208448" timestamp=1720816763
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=501 tid="139851958501376" timestamp=1720816763
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=55306 status=200 tid="139851471208448" timestamp=1720816763
time=2024-07-12T20:39:23.194Z level=DEBUG source=prompt.go:168 msg="prompt now fits in context window" required=539 window=2048
time=2024-07-12T20:39:23.194Z level=DEBUG source=routes.go:1334 msg="chat handler" prompt="<|start_header_id|>user<|end_header_id|>\n\nWhat's the deal with orange juice?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nOrange juice - a staple in many breakfast routines, but also a beverage that sparks controversy and debate. Here are some interesting facts and perspectives on OJ:\n\n**Quality concerns:**\n\n1. **Processing:** Most commercial orange juices are made from concentrate, which involves freezing or drying the juice before reconstituting it with water. This can affect the flavor, nutrients, and overall quality of the final product.\n2. **Additives:** Some OJs contain added sugars, preservatives, and flavor enhancers, which may not align with consumer expectations.\n\n**Nutritional aspects:**\n\n1. **Vitamin C:** Orange juice is an excellent source of vitamin C, a vital antioxidant that boosts immune function and overall health.\n2. **Flavonoids:** OJ contains flavonoids, a type of polyphenol that may help protect against chronic diseases like heart disease, cancer, and cognitive decline.\n3. **Sugar content:** Orange juice can be high in natural sugars (fructose and glucose), which can be a concern for those monitoring their sugar intake.\n\n**Environmental impact:**\n\n1. **Sustainability:** Large-scale orange farming can have negative environmental impacts, such as water pollution, soil degradation, and habitat destruction.\n2. **Fair trade:** The orange juice industry is often criticized for exploiting small farmers and workers in countries like Brazil and Florida.\n\n**Cultural significance:**\n\n1. **Breakfast staple:** Orange juice has become a standard breakfast beverage in many Western cultures, particularly in the United States.\n2. **Florida's identity:** Orange juice is closely tied to Florida's agricultural heritage and economy, with the state being one of the world's largest producers.\n\n**Controversies and debates:**\n\n1. **Fake OJ:** The term \"fake orange juice\" refers to OJs that are not 100% freshly squeezed or contain added ingredients.\n2. **Squeeze vs. concentrate:** There is ongoing debate about whether fresh-squeezed OJ or concentrated juice with water reconstitution is better.\n\nIn conclusion, orange juice is a complex beverage with both positive and negative aspects. While it provides essential nutrients like vitamin C, its processing methods, sugar content, and environmental impact are areas of concern. As consumers, we can make informed choices by opting for high-quality, sustainably sourced OJs or exploring alternative beverages that align with our values and dietary needs.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nOh, yeah. Speaking of Florida. I've heard that there was a train that carried oranges or orange juice. Do you know about that?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" images=0
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=502 tid="139851958501376" timestamp=1720816763
DEBUG [prefix_slot] slot with common prefix found | 0=["slot_id",0,"characters",42]
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=503 tid="139851958501376" timestamp=1720816763
DEBUG [update_slots] slot progression | ga_i=0 n_past=5 n_past_se=0 n_prompt_tokens_processed=538 slot_id=0 task_id=503 tid="139851958501376" timestamp=1720816763
DEBUG [update_slots] kv cache rm [p0, end) | p0=5 slot_id=0 task_id=503 tid="139851958501376" timestamp=1720816763
DEBUG [print_timings] prompt eval time = 1492.37 ms / 538 tokens ( 2.77 ms per token, 360.50 tokens per second) | n_prompt_tokens_processed=538 n_tokens_second=360.5006536587131 slot_id=0 t_prompt_processing=1492.369 t_token=2.773920074349442 task_id=503 tid="139851958501376" timestamp=1720816781
DEBUG [print_timings] generation eval time = 16369.56 ms / 329 runs ( 49.76 ms per token, 20.10 tokens per second) | n_decoded=329 n_tokens_second=20.09827875041976 slot_id=0 t_token=49.75550455927051 t_token_generation=16369.561 task_id=503 tid="139851958501376" timestamp=1720816781
DEBUG [print_timings] total time = 17861.93 ms | slot_id=0 t_prompt_processing=1492.369 t_token_generation=16369.561 t_total=17861.93 task_id=503 tid="139851958501376" timestamp=1720816781
DEBUG [update_slots] slot released | n_cache_tokens=867 n_ctx=8192 n_past=866 n_system_tokens=0 slot_id=0 task_id=503 tid="139851958501376" timestamp=1720816781 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/completion" remote_addr="127.0.0.1" remote_port=55310 status=200 tid="139851462815744" timestamp=1720816781
[GIN] 2024/07/12 - 20:39:41 | 200 | 18.015866775s | 192.168.75.195 | POST "/api/chat"
time=2024-07-12T20:39:41.058Z level=DEBUG source=sched.go:432 msg="context for request finished"
time=2024-07-12T20:39:41.058Z level=DEBUG source=sched.go:363 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa duration=5m0s
time=2024-07-12T20:39:41.058Z level=DEBUG source=sched.go:381 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa refCount=0
time=2024-07-12T20:40:53.034Z level=DEBUG source=sched.go:600 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=835 tid="139851958501376" timestamp=1720816853
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=836 tid="139851958501376" timestamp=1720816853
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=46440 status=200 tid="139851382910976" timestamp=1720816853
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=837 tid="139851958501376" timestamp=1720816853
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=46440 status=200 tid="139851382910976" timestamp=1720816853
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=838 tid="139851958501376" timestamp=1720816853
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=46444 status=200 tid="139851506835456" timestamp=1720816853
time=2024-07-12T20:40:53.209Z level=DEBUG source=prompt.go:168 msg="prompt now fits in context window" required=899 window=2048
time=2024-07-12T20:40:53.209Z level=DEBUG source=routes.go:1334 msg="chat handler" prompt="<|start_header_id|>user<|end_header_id|>\n\nWhat's the deal with orange juice?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nOrange juice - a staple in many breakfast routines, but also a beverage that sparks controversy and debate. Here are some interesting facts and perspectives on OJ:\n\n**Quality concerns:**\n\n1. **Processing:** Most commercial orange juices are made from concentrate, which involves freezing or drying the juice before reconstituting it with water. This can affect the flavor, nutrients, and overall quality of the final product.\n2. **Additives:** Some OJs contain added sugars, preservatives, and flavor enhancers, which may not align with consumer expectations.\n\n**Nutritional aspects:**\n\n1. **Vitamin C:** Orange juice is an excellent source of vitamin C, a vital antioxidant that boosts immune function and overall health.\n2. **Flavonoids:** OJ contains flavonoids, a type of polyphenol that may help protect against chronic diseases like heart disease, cancer, and cognitive decline.\n3. **Sugar content:** Orange juice can be high in natural sugars (fructose and glucose), which can be a concern for those monitoring their sugar intake.\n\n**Environmental impact:**\n\n1. **Sustainability:** Large-scale orange farming can have negative environmental impacts, such as water pollution, soil degradation, and habitat destruction.\n2. **Fair trade:** The orange juice industry is often criticized for exploiting small farmers and workers in countries like Brazil and Florida.\n\n**Cultural significance:**\n\n1. **Breakfast staple:** Orange juice has become a standard breakfast beverage in many Western cultures, particularly in the United States.\n2. **Florida's identity:** Orange juice is closely tied to Florida's agricultural heritage and economy, with the state being one of the world's largest producers.\n\n**Controversies and debates:**\n\n1. **Fake OJ:** The term \"fake orange juice\" refers to OJs that are not 100% freshly squeezed or contain added ingredients.\n2. **Squeeze vs. concentrate:** There is ongoing debate about whether fresh-squeezed OJ or concentrated juice with water reconstitution is better.\n\nIn conclusion, orange juice is a complex beverage with both positive and negative aspects. While it provides essential nutrients like vitamin C, its processing methods, sugar content, and environmental impact are areas of concern. As consumers, we can make informed choices by opting for high-quality, sustainably sourced OJs or exploring alternative beverages that align with our values and dietary needs.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nOh, yeah. Speaking of Florida. I've heard that there was a train that carried oranges or orange juice. Do you know about that?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nYou're referring to the \"Orange Blossom Special\"!\n\nThe Orange Blossom Special was a nickname for the Atlantic Coast Line Railroad's (ACL) passenger train service between Jacksonville, Florida, and New York City. The train ran from 1929 to 1970 and became famous for its unique cargo: orange juice.\n\nIn the early 20th century, Florida's citrus industry was booming, and oranges were a major commodity. To transport these perishable goods efficiently and safely, ACL developed a specialized train service. The Orange Blossom Special would carry refrigerated cars filled with freshly squeezed orange juice from Florida to major cities in the Northeast.\n\nThe train's route would take it through the Appalachian Mountains, where it would stop at key stations like Washington D.C. and Philadelphia. At each stop, the train would offload its precious cargo to supply local markets. The journey took about 30 hours, depending on the number of stops and the weather conditions.\n\nThe Orange Blossom Special was more than just a transportation service; it became an iconic symbol of Florida's citrus industry and American culture. The train was immortalized in song by Johnny Cash, who wrote \"Orange Blossom Special\" (also known as \"The Orange Blossom Special\") in 1965. The catchy tune tells the story of a man waiting for the train at a station, reminiscing about his love of the Florida sunshine and the sweet taste of freshly squeezed OJ.\n\nAlthough the Orange Blossom Special ceased operations in 1970 due to declining passenger traffic and the rise of air transportation, its legacy lives on as a nostalgic reminder of Florida's citrus heritage.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHuh. That's pretty interesting. Speaking of, do you know what's going on with Amtrak?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" images=0
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=839 tid="139851958501376" timestamp=1720816853
DEBUG [prefix_slot] slot with common prefix found | 0=["slot_id",0,"characters",2780]
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=840 tid="139851958501376" timestamp=1720816853
DEBUG [update_slots] slot progression | ga_i=0 n_past=866 n_past_se=0 n_prompt_tokens_processed=898 slot_id=0 task_id=840 tid="139851958501376" timestamp=1720816853
DEBUG [update_slots] kv cache rm [p0, end) | p0=866 slot_id=0 task_id=840 tid="139851958501376" timestamp=1720816853
DEBUG [print_timings] prompt eval time = 192.75 ms / 898 tokens ( 0.21 ms per token, 4658.86 tokens per second) | n_prompt_tokens_processed=898 n_tokens_second=4658.860395017406 slot_id=0 t_prompt_processing=192.751 t_token=0.21464476614699332 task_id=840 tid="139851958501376" timestamp=1720816876
DEBUG [print_timings] generation eval time = 23009.74 ms / 433 runs ( 53.14 ms per token, 18.82 tokens per second) | n_decoded=433 n_tokens_second=18.818118710516444 slot_id=0 t_token=53.14027482678984 t_token_generation=23009.739 task_id=840 tid="139851958501376" timestamp=1720816876
DEBUG [print_timings] total time = 23202.49 ms | slot_id=0 t_prompt_processing=192.751 t_token_generation=23009.739 t_total=23202.49 task_id=840 tid="139851958501376" timestamp=1720816876
DEBUG [update_slots] slot released | n_cache_tokens=1331 n_ctx=8192 n_past=1330 n_system_tokens=0 slot_id=0 task_id=840 tid="139851958501376" timestamp=1720816876 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/completion" remote_addr="127.0.0.1" remote_port=46444 status=200 tid="139851506835456" timestamp=1720816876
time=2024-07-12T20:41:16.460Z level=DEBUG source=sched.go:432 msg="context for request finished"
[GIN] 2024/07/12 - 20:41:16 | 200 | 23.448294785s | 192.168.75.195 | POST "/api/chat"
time=2024-07-12T20:41:16.461Z level=DEBUG source=sched.go:363 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa duration=5m0s
time=2024-07-12T20:41:16.461Z level=DEBUG source=sched.go:381 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa refCount=0
time=2024-07-12T20:41:57.790Z level=DEBUG source=sched.go:600 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1276 tid="139851958501376" timestamp=1720816917
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1277 tid="139851958501376" timestamp=1720816917
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=44952 status=200 tid="139851496386560" timestamp=1720816917
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1278 tid="139851958501376" timestamp=1720816917
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=44952 status=200 tid="139851496386560" timestamp=1720816917
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1279 tid="139851958501376" timestamp=1720816917
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=44962 status=200 tid="139851487993856" timestamp=1720816917
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1280 tid="139851958501376" timestamp=1720816917
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=44962 status=200 tid="139851487993856" timestamp=1720816918
time=2024-07-12T20:41:58.057Z level=DEBUG source=prompt.go:168 msg="prompt now fits in context window" required=1361 window=2048
time=2024-07-12T20:41:58.057Z level=DEBUG source=routes.go:1334 msg="chat handler" prompt="<|start_header_id|>user<|end_header_id|>\n\nWhat's the deal with orange juice?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nOrange juice - a staple in many breakfast routines, but also a beverage that sparks controversy and debate. Here are some interesting facts and perspectives on OJ:\n\n**Quality concerns:**\n\n1. **Processing:** Most commercial orange juices are made from concentrate, which involves freezing or drying the juice before reconstituting it with water. This can affect the flavor, nutrients, and overall quality of the final product.\n2. **Additives:** Some OJs contain added sugars, preservatives, and flavor enhancers, which may not align with consumer expectations.\n\n**Nutritional aspects:**\n\n1. **Vitamin C:** Orange juice is an excellent source of vitamin C, a vital antioxidant that boosts immune function and overall health.\n2. **Flavonoids:** OJ contains flavonoids, a type of polyphenol that may help protect against chronic diseases like heart disease, cancer, and cognitive decline.\n3. **Sugar content:** Orange juice can be high in natural sugars (fructose and glucose), which can be a concern for those monitoring their sugar intake.\n\n**Environmental impact:**\n\n1. **Sustainability:** Large-scale orange farming can have negative environmental impacts, such as water pollution, soil degradation, and habitat destruction.\n2. **Fair trade:** The orange juice industry is often criticized for exploiting small farmers and workers in countries like Brazil and Florida.\n\n**Cultural significance:**\n\n1. **Breakfast staple:** Orange juice has become a standard breakfast beverage in many Western cultures, particularly in the United States.\n2. **Florida's identity:** Orange juice is closely tied to Florida's agricultural heritage and economy, with the state being one of the world's largest producers.\n\n**Controversies and debates:**\n\n1. **Fake OJ:** The term \"fake orange juice\" refers to OJs that are not 100% freshly squeezed or contain added ingredients.\n2. **Squeeze vs. concentrate:** There is ongoing debate about whether fresh-squeezed OJ or concentrated juice with water reconstitution is better.\n\nIn conclusion, orange juice is a complex beverage with both positive and negative aspects. While it provides essential nutrients like vitamin C, its processing methods, sugar content, and environmental impact are areas of concern. As consumers, we can make informed choices by opting for high-quality, sustainably sourced OJs or exploring alternative beverages that align with our values and dietary needs.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nOh, yeah. Speaking of Florida. I've heard that there was a train that carried oranges or orange juice. Do you know about that?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nYou're referring to the \"Orange Blossom Special\"!\n\nThe Orange Blossom Special was a nickname for the Atlantic Coast Line Railroad's (ACL) passenger train service between Jacksonville, Florida, and New York City. The train ran from 1929 to 1970 and became famous for its unique cargo: orange juice.\n\nIn the early 20th century, Florida's citrus industry was booming, and oranges were a major commodity. To transport these perishable goods efficiently and safely, ACL developed a specialized train service. The Orange Blossom Special would carry refrigerated cars filled with freshly squeezed orange juice from Florida to major cities in the Northeast.\n\nThe train's route would take it through the Appalachian Mountains, where it would stop at key stations like Washington D.C. and Philadelphia. At each stop, the train would offload its precious cargo to supply local markets. The journey took about 30 hours, depending on the number of stops and the weather conditions.\n\nThe Orange Blossom Special was more than just a transportation service; it became an iconic symbol of Florida's citrus industry and American culture. The train was immortalized in song by Johnny Cash, who wrote \"Orange Blossom Special\" (also known as \"The Orange Blossom Special\") in 1965. The catchy tune tells the story of a man waiting for the train at a station, reminiscing about his love of the Florida sunshine and the sweet taste of freshly squeezed OJ.\n\nAlthough the Orange Blossom Special ceased operations in 1970 due to declining passenger traffic and the rise of air transportation, its legacy lives on as a nostalgic reminder of Florida's citrus heritage.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHuh. That's pretty interesting. Speaking of, do you know what's going on with Amtrak?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nAmtrak! The national passenger railroad service in the United States has been around since 1971, and it's had its share of challenges and changes over the years.\n\nCurrently, Amtrak is facing some significant hurdles:\n\n1. **Funding:** Amtrak relies heavily on federal funding to operate its services. However, as a result of the COVID-19 pandemic, ridership numbers have dropped significantly, leading to reduced revenue and increased pressure on funding.\n2. **Infrastructure:** Much of Amtrak's infrastructure, including tracks, bridges, and stations, is aging or in need of repair. The railroad is working to upgrade its network, but this process can be slow and costly.\n3. **Competition:** With the rise of ride-hailing services, buses, and airlines, Amtrak faces increased competition for passengers' attention. To stay competitive, Amtrak has been focusing on improving service quality, expanding routes, and offering more amenities.\n4. **Coronavirus pandemic:** As I mentioned earlier, the pandemic has had a significant impact on Amtrak's ridership and revenue. The railroad has implemented various safety measures to reduce the risk of transmission, but this has also affected its operations.\n\nDespite these challenges, Amtrak is taking steps to modernize and improve its services:\n\n1. **New trains:** Amtrak is introducing new trainsets, such as the Acela Express and the Northeast Regional trains, which offer improved amenities, comfort, and technology.\n2. **Electrification:** Amtrak is working on electrifying some of its routes, like the Northeast Corridor (NEC), to reduce emissions and increase efficiency.\n3. **Station upgrades:** Amtrak is investing in station renovations, including modernizing facilities, improving accessibility, and enhancing passenger experiences.\n4. **Coronavirus response:** The railroad has implemented various measures to reduce the spread of COVID-19 on its trains and stations, such as increased cleaning protocols, social distancing measures, and mask mandates.\n\nAmtrak continues to play a vital role in connecting Americans across the country, and while it faces challenges, the railroad is working to adapt and improve its services for the future.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nAt this point, I'm going to wait 10 minutes or so before my next response. <|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" images=0
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1281 tid="139851958501376" timestamp=1720816918
DEBUG [prefix_slot] slot with common prefix found | 0=["slot_id",0,"characters",4613]
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=1282 tid="139851958501376" timestamp=1720816918
DEBUG [update_slots] slot progression | ga_i=0 n_past=1330 n_past_se=0 n_prompt_tokens_processed=1360 slot_id=0 task_id=1282 tid="139851958501376" timestamp=1720816918
DEBUG [update_slots] kv cache rm [p0, end) | p0=1330 slot_id=0 task_id=1282 tid="139851958501376" timestamp=1720816918
DEBUG [print_timings] prompt eval time = 210.03 ms / 1360 tokens ( 0.15 ms per token, 6475.20 tokens per second) | n_prompt_tokens_processed=1360 n_tokens_second=6475.203778471851 slot_id=0 t_prompt_processing=210.032 t_token=0.15443529411764706 task_id=1282 tid="139851958501376" timestamp=1720816922
DEBUG [print_timings] generation eval time = 4452.38 ms / 81 runs ( 54.97 ms per token, 18.19 tokens per second) | n_decoded=81 n_tokens_second=18.192509088393585 slot_id=0 t_token=54.96767901234568 t_token_generation=4452.382 task_id=1282 tid="139851958501376" timestamp=1720816922
DEBUG [print_timings] total time = 4662.41 ms | slot_id=0 t_prompt_processing=210.032 t_token_generation=4452.382 t_total=4662.414 task_id=1282 tid="139851958501376" timestamp=1720816922
DEBUG [update_slots] slot released | n_cache_tokens=1441 n_ctx=8192 n_past=1440 n_system_tokens=0 slot_id=0 task_id=1282 tid="139851958501376" timestamp=1720816922 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/completion" remote_addr="127.0.0.1" remote_port=44966 status=200 tid="139851479601152" timestamp=1720816922
[GIN] 2024/07/12 - 20:42:02 | 200 | 4.952035613s | 192.168.75.195 | POST "/api/chat"
time=2024-07-12T20:42:02.722Z level=DEBUG source=sched.go:432 msg="context for request finished"
time=2024-07-12T20:42:02.722Z level=DEBUG source=sched.go:363 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa duration=5m0s
time=2024-07-12T20:42:02.722Z level=DEBUG source=sched.go:381 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa refCount=0
time=2024-07-12T20:47:02.722Z level=DEBUG source=sched.go:365 msg="timer expired, expiring to unload" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
time=2024-07-12T20:47:02.722Z level=DEBUG source=sched.go:384 msg="runner expired event received" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
time=2024-07-12T20:47:02.722Z level=DEBUG source=sched.go:400 msg="got lock to unload" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
time=2024-07-12T20:47:02.723Z level=DEBUG source=gpu.go:336 msg="updating system memory data" before.total="23.5 GiB" before.free="20.5 GiB" now.total="23.5 GiB" now.free="19.9 GiB"
CUDA driver version: 12.4
time=2024-07-12T20:47:03.004Z level=DEBUG source=gpu.go:377 msg="updating cuda memory data" gpu=GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda name="GRID P40-8Q" before.total="8.0 GiB" before.free="7.4 GiB" now.total="8.0 GiB" now.free="1.5 GiB" now.used="6.5 GiB"
releasing cuda driver library
time=2024-07-12T20:47:03.005Z level=DEBUG source=server.go:1026 msg="stopping llama server"
time=2024-07-12T20:47:03.005Z level=DEBUG source=server.go:1032 msg="waiting for llama server to exit"
time=2024-07-12T20:47:03.089Z level=DEBUG source=server.go:1036 msg="llama server stopped"
time=2024-07-12T20:47:03.089Z level=DEBUG source=sched.go:405 msg="runner released" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
time=2024-07-12T20:47:03.256Z level=DEBUG source=gpu.go:336 msg="updating system memory data" before.total="23.5 GiB" before.free="19.9 GiB" now.total="23.5 GiB" now.free="20.4 GiB"
CUDA driver version: 12.4
time=2024-07-12T20:47:03.406Z level=DEBUG source=gpu.go:377 msg="updating cuda memory data" gpu=GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda name="GRID P40-8Q" before.total="8.0 GiB" before.free="1.5 GiB" now.total="8.0 GiB" now.free="7.4 GiB" now.used="650.0 MiB"
releasing cuda driver library
time=2024-07-12T20:47:03.406Z level=DEBUG source=sched.go:684 msg="gpu VRAM free memory converged after 0.68 seconds" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
time=2024-07-12T20:47:03.406Z level=DEBUG source=sched.go:409 msg="sending an unloaded event" modelPath=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
time=2024-07-12T20:47:03.406Z level=DEBUG source=sched.go:332 msg="ignoring unload event with no pending requests"
time=2024-07-12T20:52:06.542Z level=DEBUG source=gpu.go:336 msg="updating system memory data" before.total="23.5 GiB" before.free="20.4 GiB" now.total="23.5 GiB" now.free="20.4 GiB"
CUDA driver version: 12.4
time=2024-07-12T20:52:06.733Z level=DEBUG source=gpu.go:377 msg="updating cuda memory data" gpu=GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda name="GRID P40-8Q" before.total="8.0 GiB" before.free="7.4 GiB" now.total="8.0 GiB" now.free="7.4 GiB" now.used="650.0 MiB"
releasing cuda driver library
time=2024-07-12T20:52:06.755Z level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[7.4 GiB]"
time=2024-07-12T20:52:06.756Z level=DEBUG source=sched.go:251 msg="loading first model" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
time=2024-07-12T20:52:06.756Z level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[7.4 GiB]"
time=2024-07-12T20:52:06.756Z level=INFO source=sched.go:738 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa gpu=GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda parallel=4 available=7908343808 required="6.2 GiB"
time=2024-07-12T20:52:06.756Z level=DEBUG source=server.go:98 msg="system memory" total="23.5 GiB" free=21902262272
time=2024-07-12T20:52:06.756Z level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[7.4 GiB]"
time=2024-07-12T20:52:06.757Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[7.4 GiB]" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-07-12T20:52:06.757Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu/ollama_llama_server
time=2024-07-12T20:52:06.757Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu_avx/ollama_llama_server
time=2024-07-12T20:52:06.757Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu_avx2/ollama_llama_server
time=2024-07-12T20:52:06.757Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cuda_v11/ollama_llama_server
time=2024-07-12T20:52:06.757Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/rocm_v60101/ollama_llama_server
time=2024-07-12T20:52:06.757Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu/ollama_llama_server
time=2024-07-12T20:52:06.757Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu_avx/ollama_llama_server
time=2024-07-12T20:52:06.757Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cpu_avx2/ollama_llama_server
time=2024-07-12T20:52:06.757Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/cuda_v11/ollama_llama_server
time=2024-07-12T20:52:06.757Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4058211551/runners/rocm_v60101/ollama_llama_server
time=2024-07-12T20:52:06.757Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama4058211551/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --verbose --parallel 4 --port 35875"
time=2024-07-12T20:52:06.757Z level=DEBUG source=server.go:390 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/tmp/ollama4058211551/runners/cuda_v11:/tmp/ollama4058211551/runners:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 CUDA_VISIBLE_DEVICES=GPU-2c3dceb7-4052-11ef-99c8-7f57aa5c9cda]"
time=2024-07-12T20:52:06.758Z level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-12T20:52:06.758Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-12T20:52:06.758Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="139684785954816" timestamp=1720817526
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="139684785954816" timestamp=1720817526 total_threads=8
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="35875" tid="139684785954816" timestamp=1720817526
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-07-12T20:52:07.010Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
```
### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.1
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5654/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7420
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7420/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7420/comments
|
https://api.github.com/repos/ollama/ollama/issues/7420/events
|
https://github.com/ollama/ollama/issues/7420
| 2,624,398,009
|
I_kwDOJ0Z1Ps6cbSK5
| 7,420
|
Support AMD GPUs on Ampere, Raspberry Pis (arm64 ROCm)
|
{
"login": "geerlingguy",
"id": 481677,
"node_id": "MDQ6VXNlcjQ4MTY3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/481677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/geerlingguy",
"html_url": "https://github.com/geerlingguy",
"followers_url": "https://api.github.com/users/geerlingguy/followers",
"following_url": "https://api.github.com/users/geerlingguy/following{/other_user}",
"gists_url": "https://api.github.com/users/geerlingguy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/geerlingguy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/geerlingguy/subscriptions",
"organizations_url": "https://api.github.com/users/geerlingguy/orgs",
"repos_url": "https://api.github.com/users/geerlingguy/repos",
"events_url": "https://api.github.com/users/geerlingguy/events{/privacy}",
"received_events_url": "https://api.github.com/users/geerlingguy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-10-30T15:01:36
| 2024-11-18T00:45:39
| 2024-11-01T16:10:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have an AMD RX 6700 XT running on my Raspberry Pi 5 (see https://github.com/geerlingguy/raspberry-pi-pcie-devices/issues/222, I know I'm nuts)... and I tried installing Ollama, but I get the following error:
```
pi@pi5-pcie:~ $ curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
>>> Downloading Linux arm64 bundle
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service -> /etc/systemd/system/ollama.service.
>>> Downloading Linux ROCm arm64 bundle
curl: (22) The requested URL returned error: 404
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
```
It tries downloading from `https://ollama.com/download/ollama-linux-arm64-rocm.tgz`, but right now there's only an `ollama-linux-amd64-rocm.tgz` included in the builds.
Since nobody else seems to be asking for it, I thought I'd open an issue. It would be neat to be able to run larger models on tiny SBCs sipping 2-4W at idle.
I'm also perfectly fine with this issue being closed 'wontfix', as AMD as of 2020/2021 said they aren't supporting arm64 with ROCm: https://github.com/ROCm/ROCm/issues/1052 (maybe that's changed, I will ask again in https://github.com/ROCm/ROCm/issues/3960 ...).
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7420/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/7420/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/965
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/965/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/965/comments
|
https://api.github.com/repos/ollama/ollama/issues/965/events
|
https://github.com/ollama/ollama/pull/965
| 1,972,922,245
|
PR_kwDOJ0Z1Ps5eXQBB
| 965
|
go mod tidy
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-01T18:54:35
| 2023-11-01T18:55:44
| 2023-11-01T18:55:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/965",
"html_url": "https://github.com/ollama/ollama/pull/965",
"diff_url": "https://github.com/ollama/ollama/pull/965.diff",
"patch_url": "https://github.com/ollama/ollama/pull/965.patch",
"merged_at": "2023-11-01T18:55:43"
}
|
```
go mod tidy
go fmt ./...
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/965/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2378
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2378/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2378/comments
|
https://api.github.com/repos/ollama/ollama/issues/2378/events
|
https://github.com/ollama/ollama/pull/2378
| 2,121,693,759
|
PR_kwDOJ0Z1Ps5mMT1x
| 2,378
|
runners
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-06T21:26:52
| 2024-02-06T21:49:59
| 2024-02-06T21:49:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2378",
"html_url": "https://github.com/ollama/ollama/pull/2378",
"diff_url": "https://github.com/ollama/ollama/pull/2378.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2378.patch",
"merged_at": "2024-02-06T21:49:58"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2378/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5754
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5754/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5754/comments
|
https://api.github.com/repos/ollama/ollama/issues/5754/events
|
https://github.com/ollama/ollama/issues/5754
| 2,414,286,852
|
I_kwDOJ0Z1Ps6P5xgE
| 5,754
|
OLLAMA_MAX_VRAM is ignored
|
{
"login": "BartWillems",
"id": 6066578,
"node_id": "MDQ6VXNlcjYwNjY1Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6066578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BartWillems",
"html_url": "https://github.com/BartWillems",
"followers_url": "https://api.github.com/users/BartWillems/followers",
"following_url": "https://api.github.com/users/BartWillems/following{/other_user}",
"gists_url": "https://api.github.com/users/BartWillems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BartWillems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BartWillems/subscriptions",
"organizations_url": "https://api.github.com/users/BartWillems/orgs",
"repos_url": "https://api.github.com/users/BartWillems/repos",
"events_url": "https://api.github.com/users/BartWillems/events{/privacy}",
"received_events_url": "https://api.github.com/users/BartWillems/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-07-17T18:23:32
| 2024-07-23T18:00:15
| 2024-07-22T17:35:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm trying to limit the GPU memory usage, so I set the `OLLAMA_MAX_VRAM` env var.
I can see it is parsed correctly in the logs, but the limit itself is ignored.
When I set the limit to `5000000000` (5GB), the `llama3:8b` model uses `6172MiB` according to `nvidia-smi`.
Even when I set it to an absurdly low value like `5`, it still uses more than 6GB of memory.
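For reference, a sketch of how the limit can be set when running the Docker image (the exact flags here are assumptions, not copied from this setup):

```shell
# Hypothetical reproduction: start the server with the limit set,
# load the model, then check actual VRAM usage on the host.
docker run -d --gpus=all \
  -e OLLAMA_MAX_VRAM=5000000000 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama
docker exec ollama ollama run llama3:8b "hi"
nvidia-smi --query-gpu=memory.used --format=csv
```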
### OS
Linux, Docker
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.2.5
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5754/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5754/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2059
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2059/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2059/comments
|
https://api.github.com/repos/ollama/ollama/issues/2059/events
|
https://github.com/ollama/ollama/issues/2059
| 2,089,089,566
|
I_kwDOJ0Z1Ps58hPoe
| 2,059
|
Model info include model type
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-01-18T21:40:16
| 2024-03-11T18:00:21
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Enhancement request:
I've recently come across an issue where a model was an autocompletion model and not a chat/instruct model (the model is stable-code).
As these models work in very different ways, there should be a way to programmatically know what type of usage the model supports.
https://github.com/jmorganca/ollama/issues/2025
I'm not sure of the best solution, but the current state is very confusing since most of the models in ollama are chat/instruct.
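As a stopgap while no model-type field exists, one possible heuristic (purely an assumption on my part, not an Ollama API) is to inspect the model's prompt template for chat-style markers:

```python
# Hypothetical heuristic: guess whether a model is chat/instruct or a
# base/autocompletion model from its prompt template text (e.g. the
# "template" field returned by Ollama's /api/show endpoint).
def guess_model_kind(template: str) -> str:
    chat_markers = (
        "{{ .System",            # Ollama templates with a system slot
        "<|im_start|>",          # ChatML
        "[INST]",                # Llama-2 style instruct
        "<|start_header_id|>",   # Llama-3 style chat
    )
    if any(marker in template for marker in chat_markers):
        return "chat"
    return "completion"

print(guess_model_kind("[INST] {{ .Prompt }} [/INST]"))  # chat
print(guess_model_kind("{{ .Prompt }}"))                 # completion
```

This would only ever be a best-effort guess; an explicit type field in the model metadata would still be the real fix.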
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2059/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2059/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3029
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3029/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3029/comments
|
https://api.github.com/repos/ollama/ollama/issues/3029/events
|
https://github.com/ollama/ollama/issues/3029
| 2,177,419,846
|
I_kwDOJ0Z1Ps6ByMpG
| 3,029
|
Ollama Hanging/Freeze On Embeddings API With Nomic-Embed-Text
|
{
"login": "matthewsmorrison",
"id": 19672700,
"node_id": "MDQ6VXNlcjE5NjcyNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19672700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matthewsmorrison",
"html_url": "https://github.com/matthewsmorrison",
"followers_url": "https://api.github.com/users/matthewsmorrison/followers",
"following_url": "https://api.github.com/users/matthewsmorrison/following{/other_user}",
"gists_url": "https://api.github.com/users/matthewsmorrison/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matthewsmorrison/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matthewsmorrison/subscriptions",
"organizations_url": "https://api.github.com/users/matthewsmorrison/orgs",
"repos_url": "https://api.github.com/users/matthewsmorrison/repos",
"events_url": "https://api.github.com/users/matthewsmorrison/events{/privacy}",
"received_events_url": "https://api.github.com/users/matthewsmorrison/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-03-09T22:03:30
| 2024-03-13T23:03:06
| 2024-03-13T23:03:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am running Ollama (0.1.28) on a Google Cloud VM (n1-standard-2, Intel Broadwell, NVIDIA T4 GPU, 7.5 GB RAM). When I run the following cURL command against the embeddings API with the nomic-embed-text model (version: nomic-embed-text:latest, 0a109f422b47):
```
curl http://localhost:11434/api/embeddings -d '{
"model": "nomic-embed-text",
"prompt": "Here is an article about llamas..."
}'
```
Ollama hangs indefinitely. The logs I am getting from the server:
```
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = nomic-bert
llama_model_loader: - kv 1: general.name str = nomic-embed-text-v1.5
llama_model_loader: - kv 2: nomic-bert.block_count u32 = 12
llama_model_loader: - kv 3: nomic-bert.context_length u32 = 2048
llama_model_loader: - kv 4: nomic-bert.embedding_length u32 = 768
llama_model_loader: - kv 5: nomic-bert.feed_forward_length u32 = 3072
llama_model_loader: - kv 6: nomic-bert.attention.head_count u32 = 12
llama_model_loader: - kv 7: nomic-bert.attention.layer_norm_epsilon f32 = 0.000000
llama_model_loader: - kv 8: general.file_type u32 = 1
llama_model_loader: - kv 9: nomic-bert.attention.causal bool = false
llama_model_loader: - kv 10: nomic-bert.pooling_type u32 = 1
llama_model_loader: - kv 11: nomic-bert.rope.freq_base f32 = 1000.000000
llama_model_loader: - kv 12: tokenizer.ggml.token_type_count u32 = 2
llama_model_loader: - kv 13: tokenizer.ggml.bos_token_id u32 = 101
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 102
llama_model_loader: - kv 15: tokenizer.ggml.model str = bert
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,30522] = ["[PAD]", "[unused0]", "[unused1]", "...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,30522] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,30522] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 100
llama_model_loader: - kv 20: tokenizer.ggml.seperator_token_id u32 = 102
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.ggml.cls_token_id u32 = 101
llama_model_loader: - kv 23: tokenizer.ggml.mask_token_id u32 = 103
llama_model_loader: - type f32: 51 tensors
llama_model_loader: - type f16: 61 tensors
llm_load_vocab: mismatch in special tokens definition ( 7104/30522 vs 5/30522 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = nomic-bert
llm_load_print_meta: vocab type = WPM
llm_load_print_meta: n_vocab = 30522
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_embd = 768
llm_load_print_meta: n_head = 12
llm_load_print_meta: n_head_kv = 12
llm_load_print_meta: n_layer = 12
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 768
llm_load_print_meta: n_embd_v_gqa = 768
llm_load_print_meta: f_norm_eps = 1.0e-12
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 3072
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 1
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 137M
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 136.73 M
llm_load_print_meta: model size = 260.86 MiB (16.00 BPW)
llm_load_print_meta: general.name = nomic-embed-text-v1.5
llm_load_print_meta: BOS token = 101 '[CLS]'
llm_load_print_meta: EOS token = 102 '[SEP]'
llm_load_print_meta: UNK token = 100 '[UNK]'
llm_load_print_meta: SEP token = 102 '[SEP]'
llm_load_print_meta: PAD token = 0 '[PAD]'
llm_load_tensors: ggml ctx size = 0.09 MiB
llm_load_tensors: offloading 12 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 13/13 layers to GPU
llm_load_tensors: CPU buffer size = 44.72 MiB
llm_load_tensors: CUDA0 buffer size = 216.15 MiB
.......................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000.0
llama_new_context_with_model: freq_scale = 1
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: yes
ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
ggml_init_cublas: found 1 CUDA devices:
Device 0: Tesla T4, compute capability 7.5, VMM: yes
llama_kv_cache_init: CUDA0 KV buffer size = 72.00 MiB
llama_new_context_with_model: KV self size = 72.00 MiB, K (f16): 36.00 MiB, V (f16): 36.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 6.52 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 62.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 1.50 MiB
llama_new_context_with_model: graph splits (measure): 2
{"function":"initialize","level":"INFO","line":433,"msg":"initializing slots","n_slots":1,"tid":"140541573117696","timestamp":1710020724}
{"function":"initialize","level":"INFO","line":442,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140541573117696","timestamp":1710020724}
time=2024-03-09T21:45:24.886Z level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1565,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140539684976384","timestamp":1710020724}
{"function":"launch_slot_with_data","level":"INFO","line":823,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140539684976384","timestamp":1710020724}
```
It is also worth noting that this ran fine in my local development environment (CPU only, 4 GB RAM).
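For what it's worth, when reproducing a suspected hang it can help to bound the request with a client-side timeout so the call fails fast instead of blocking forever. This is a generic diagnostic sketch, not Ollama-specific advice; `--max-time` is a standard curl option and assumes a server running on the default port:

```shell
# Same request as above, but abort if no response arrives within 30 seconds.
curl --max-time 30 http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "Here is an article about llamas..."
}'
# A timeout (curl exit code 28) suggests the server is hanging rather than merely slow.
```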
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3029/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6363
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6363/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6363/comments
|
https://api.github.com/repos/ollama/ollama/issues/6363/events
|
https://github.com/ollama/ollama/pull/6363
| 2,466,886,555
|
PR_kwDOJ0Z1Ps54aSAT
| 6,363
|
fix: noprune on pull
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-14T21:40:01
| 2024-08-15T19:20:41
| 2024-08-15T19:20:38
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6363",
"html_url": "https://github.com/ollama/ollama/pull/6363",
"diff_url": "https://github.com/ollama/ollama/pull/6363.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6363.patch",
"merged_at": "2024-08-15T19:20:38"
}
|
`OLLAMA_NOPRUNE` is ignored on pull. Switch to `envconfig.NoPrune()`; `noprune` was checked but never set.
Also ignore any manifest that's not actually a manifest; this can be files such as `.DS_Store` on macOS.
Resolves #6333
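The `.DS_Store` case can be illustrated with a small shell sketch (hypothetical paths, for illustration only; this is not the actual Go implementation): when walking the manifests directory, only regular, non-hidden files should be treated as manifests.

```shell
# Hypothetical manifests tree polluted by macOS filesystem metadata.
tmp=$(mktemp -d)
mkdir -p "$tmp/manifests/registry.ollama.ai/library/llama2"
touch "$tmp/manifests/registry.ollama.ai/library/llama2/latest"
touch "$tmp/manifests/.DS_Store"

# Count only non-hidden regular files; .DS_Store is excluded.
kept=$(find "$tmp/manifests" -type f ! -name '.*' | wc -l)
echo "manifests kept: $kept"

rm -rf "$tmp"
```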
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6363/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/949
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/949/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/949/comments
|
https://api.github.com/repos/ollama/ollama/issues/949/events
|
https://github.com/ollama/ollama/pull/949
| 1,968,932,065
|
PR_kwDOJ0Z1Ps5eJqFf
| 949
|
fix: private gpt example was broken due to changes in chroma
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-30T17:58:00
| 2023-10-31T00:17:02
| 2023-10-31T00:17:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/949",
"html_url": "https://github.com/ollama/ollama/pull/949",
"diff_url": "https://github.com/ollama/ollama/pull/949.diff",
"patch_url": "https://github.com/ollama/ollama/pull/949.patch",
"merged_at": "2023-10-31T00:17:01"
}
|
This resolves #928
Chroma updated its API, langchain updated accordingly, and everything broke.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/949/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8635
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8635/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8635/comments
|
https://api.github.com/repos/ollama/ollama/issues/8635/events
|
https://github.com/ollama/ollama/issues/8635
| 2,815,779,059
|
I_kwDOJ0Z1Ps6n1WDz
| 8,635
|
Use of system RAM over RDMA on the GPU to allow GPU acceleration on lower-VRAM hardware.
|
{
"login": "SlinkierElm5611",
"id": 52179385,
"node_id": "MDQ6VXNlcjUyMTc5Mzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/52179385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SlinkierElm5611",
"html_url": "https://github.com/SlinkierElm5611",
"followers_url": "https://api.github.com/users/SlinkierElm5611/followers",
"following_url": "https://api.github.com/users/SlinkierElm5611/following{/other_user}",
"gists_url": "https://api.github.com/users/SlinkierElm5611/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SlinkierElm5611/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SlinkierElm5611/subscriptions",
"organizations_url": "https://api.github.com/users/SlinkierElm5611/orgs",
"repos_url": "https://api.github.com/users/SlinkierElm5611/repos",
"events_url": "https://api.github.com/users/SlinkierElm5611/events{/privacy}",
"received_events_url": "https://api.github.com/users/SlinkierElm5611/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-28T14:05:32
| 2025-01-28T14:05:32
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi all!
I'm a GPU dev who has been messing around with Ollama for some self-hosting. I was wondering whether there is any reason Ollama has not taken advantage of GPU acceleration while using system RAM through RDMA (ReBAR). I have done system RAM access through RDMA on the GPU for real-time processing and had better results than CPU-side tasks, despite the increased data latency when going over PCIe.
I look forward to hearing from you!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8635/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2436
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2436/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2436/comments
|
https://api.github.com/repos/ollama/ollama/issues/2436/events
|
https://github.com/ollama/ollama/issues/2436
| 2,127,983,433
|
I_kwDOJ0Z1Ps5-1nNJ
| 2,436
|
Unable to load dynamic server library in hardened environment (tmp mounted as noexec)
|
{
"login": "crenz",
"id": 124283,
"node_id": "MDQ6VXNlcjEyNDI4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/124283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/crenz",
"html_url": "https://github.com/crenz",
"followers_url": "https://api.github.com/users/crenz/followers",
"following_url": "https://api.github.com/users/crenz/following{/other_user}",
"gists_url": "https://api.github.com/users/crenz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/crenz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/crenz/subscriptions",
"organizations_url": "https://api.github.com/users/crenz/orgs",
"repos_url": "https://api.github.com/users/crenz/repos",
"events_url": "https://api.github.com/users/crenz/events{/privacy}",
"received_events_url": "https://api.github.com/users/crenz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-09T23:42:28
| 2024-03-11T17:57:10
| 2024-03-11T17:57:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I installed ollama on a hardened Ubuntu 22 system successfully. When running `ollama run mistral`, I am getting the following error message:
`Error: Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama2322208974/cpu_avx2/libext_server.so: failed to map segment from shared object`
The root cause seems to be that on this system, `/tmp` is mounted as noexec. I was able to fix the issue by setting another temporary directory in `/etc/systemd/system/ollama.service` by adding the line
`Environment="TMPDIR=/usr/share/ollama/tmp"`
I suggest addressing the issue by using a temporary directory within `/usr/share/ollama` if `/tmp` is mounted as noexec, or at least mentioning this issue in the documentation.
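On systemd-based installs, the workaround above can be applied as a drop-in override rather than by editing the unit file in place. This is a sketch assuming the directory path from this report (any writable, exec-mounted location should work) and the default `ollama` service user:

```shell
# Create a temporary directory Ollama can execute from
sudo mkdir -p /usr/share/ollama/tmp
sudo chown ollama:ollama /usr/share/ollama/tmp

# Add a drop-in override; in the editor that opens, enter:
#   [Service]
#   Environment="TMPDIR=/usr/share/ollama/tmp"
sudo systemctl edit ollama

# Reload units and restart the service
sudo systemctl daemon-reload
sudo systemctl restart ollama
```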
|
{
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyeva/followers",
"following_url": "https://api.github.com/users/hoyyeva/following{/other_user}",
"gists_url": "https://api.github.com/users/hoyyeva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoyyeva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoyyeva/subscriptions",
"organizations_url": "https://api.github.com/users/hoyyeva/orgs",
"repos_url": "https://api.github.com/users/hoyyeva/repos",
"events_url": "https://api.github.com/users/hoyyeva/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoyyeva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2436/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5948
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5948/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5948/comments
|
https://api.github.com/repos/ollama/ollama/issues/5948/events
|
https://github.com/ollama/ollama/issues/5948
| 2,429,720,589
|
I_kwDOJ0Z1Ps6Q0pgN
| 5,948
|
After an embedding model is deployed and installed via Ollama, the embed-retrieve-generate RAG process cannot be performed through the Ollama API
|
{
"login": "Kyriell1999",
"id": 53622847,
"node_id": "MDQ6VXNlcjUzNjIyODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/53622847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kyriell1999",
"html_url": "https://github.com/Kyriell1999",
"followers_url": "https://api.github.com/users/Kyriell1999/followers",
"following_url": "https://api.github.com/users/Kyriell1999/following{/other_user}",
"gists_url": "https://api.github.com/users/Kyriell1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kyriell1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kyriell1999/subscriptions",
"organizations_url": "https://api.github.com/users/Kyriell1999/orgs",
"repos_url": "https://api.github.com/users/Kyriell1999/repos",
"events_url": "https://api.github.com/users/Kyriell1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kyriell1999/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-07-25T11:26:13
| 2024-07-25T11:34:56
| 2024-07-25T11:34:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "Kyriell1999",
"id": 53622847,
"node_id": "MDQ6VXNlcjUzNjIyODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/53622847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kyriell1999",
"html_url": "https://github.com/Kyriell1999",
"followers_url": "https://api.github.com/users/Kyriell1999/followers",
"following_url": "https://api.github.com/users/Kyriell1999/following{/other_user}",
"gists_url": "https://api.github.com/users/Kyriell1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kyriell1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kyriell1999/subscriptions",
"organizations_url": "https://api.github.com/users/Kyriell1999/orgs",
"repos_url": "https://api.github.com/users/Kyriell1999/repos",
"events_url": "https://api.github.com/users/Kyriell1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kyriell1999/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5948/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4889
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4889/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4889/comments
|
https://api.github.com/repos/ollama/ollama/issues/4889/events
|
https://github.com/ollama/ollama/issues/4889
| 2,339,405,704
|
I_kwDOJ0Z1Ps6LcH-I
| 4,889
|
Version check
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-06-07T01:20:47
| 2024-07-01T23:32:15
| 2024-07-01T23:32:15
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When running a new model, Ollama should check that it can run it, and otherwise provide a message prompting the user to upgrade
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4889/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4889/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4249
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4249/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4249/comments
|
https://api.github.com/repos/ollama/ollama/issues/4249/events
|
https://github.com/ollama/ollama/issues/4249
| 2,284,623,608
|
I_kwDOJ0Z1Ps6ILJb4
| 4,249
|
The model does not output correctly in Ollama, but it works fine in LM Studio.
|
{
"login": "vawterdada",
"id": 130421680,
"node_id": "U_kgDOB8YTsA",
"avatar_url": "https://avatars.githubusercontent.com/u/130421680?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vawterdada",
"html_url": "https://github.com/vawterdada",
"followers_url": "https://api.github.com/users/vawterdada/followers",
"following_url": "https://api.github.com/users/vawterdada/following{/other_user}",
"gists_url": "https://api.github.com/users/vawterdada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vawterdada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vawterdada/subscriptions",
"organizations_url": "https://api.github.com/users/vawterdada/orgs",
"repos_url": "https://api.github.com/users/vawterdada/repos",
"events_url": "https://api.github.com/users/vawterdada/events{/privacy}",
"received_events_url": "https://api.github.com/users/vawterdada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-08T03:54:18
| 2024-05-08T03:54:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I fine-tuned the model with a small data set and then loaded it into Ollama, where it outputs nonsense. If I load it into LM Studio, it works normally. What is the reason for this? Is there anything that needs to be adjusted?


### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.34
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4249/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3305
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3305/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3305/comments
|
https://api.github.com/repos/ollama/ollama/issues/3305/events
|
https://github.com/ollama/ollama/issues/3305
| 2,203,709,130
|
I_kwDOJ0Z1Ps6DWe7K
| 3,305
|
Changing installation drive
|
{
"login": "atanu2531",
"id": 217446,
"node_id": "MDQ6VXNlcjIxNzQ0Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/217446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atanu2531",
"html_url": "https://github.com/atanu2531",
"followers_url": "https://api.github.com/users/atanu2531/followers",
"following_url": "https://api.github.com/users/atanu2531/following{/other_user}",
"gists_url": "https://api.github.com/users/atanu2531/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atanu2531/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atanu2531/subscriptions",
"organizations_url": "https://api.github.com/users/atanu2531/orgs",
"repos_url": "https://api.github.com/users/atanu2531/repos",
"events_url": "https://api.github.com/users/atanu2531/events{/privacy}",
"received_events_url": "https://api.github.com/users/atanu2531/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-03-23T05:37:09
| 2024-03-24T02:10:27
| 2024-03-23T07:02:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
Is there an option for changing the installation drive? By default it installs to the C drive.
### How should we solve this?
Provide an installation-path option in the installer menu.
### What is the impact of not solving this?
Needs testing; it would probably run as usual, though there might be some delay.
### Anything else?
_No response_
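For reference, a commonly cited workaround (the path below is illustrative, not prescribed) is to relocate the model store, which accounts for most of the disk usage, to another drive via the `OLLAMA_MODELS` environment variable:

```shell
# Windows (Command Prompt): store models on D: instead of the default under C:
setx OLLAMA_MODELS "D:\ollama\models"
# Restart Ollama afterwards so the new path takes effect
```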
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3305/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8451
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8451/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8451/comments
|
https://api.github.com/repos/ollama/ollama/issues/8451/events
|
https://github.com/ollama/ollama/issues/8451
| 2,792,072,218
|
I_kwDOJ0Z1Ps6ma6Qa
| 8,451
|
How can a large language model be equipped with the capability to answer with real-time data?
|
{
"login": "20246688",
"id": 156653831,
"node_id": "U_kgDOCVZZBw",
"avatar_url": "https://avatars.githubusercontent.com/u/156653831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/20246688",
"html_url": "https://github.com/20246688",
"followers_url": "https://api.github.com/users/20246688/followers",
"following_url": "https://api.github.com/users/20246688/following{/other_user}",
"gists_url": "https://api.github.com/users/20246688/gists{/gist_id}",
"starred_url": "https://api.github.com/users/20246688/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/20246688/subscriptions",
"organizations_url": "https://api.github.com/users/20246688/orgs",
"repos_url": "https://api.github.com/users/20246688/repos",
"events_url": "https://api.github.com/users/20246688/events{/privacy}",
"received_events_url": "https://api.github.com/users/20246688/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-16T08:47:07
| 2025-01-17T01:57:04
| 2025-01-17T01:57:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How can the model obtain real-time data? For example: Q: What is today's date? A: Today is January 16.
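One common approach, shown here as a minimal sketch (the model name is illustrative, not prescribed by this issue), is to inject the current date as a system message into the request sent to Ollama's `/api/chat` endpoint:

```python
import datetime
import json

def build_chat_request(question, model="llama3"):
    """Build an /api/chat payload that prepends today's date as a system
    message, so the model can answer time-sensitive questions."""
    today = datetime.date.today().strftime("%B %d, %Y")
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": f"Today's date is {today}."},
            {"role": "user", "content": question},
        ],
    }

# POSTing this payload to http://localhost:11434/api/chat lets the model
# answer from the injected context rather than its training data.
req = build_chat_request("What is today's date?")
print(json.dumps(req, indent=2))
```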
|
{
"login": "20246688",
"id": 156653831,
"node_id": "U_kgDOCVZZBw",
"avatar_url": "https://avatars.githubusercontent.com/u/156653831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/20246688",
"html_url": "https://github.com/20246688",
"followers_url": "https://api.github.com/users/20246688/followers",
"following_url": "https://api.github.com/users/20246688/following{/other_user}",
"gists_url": "https://api.github.com/users/20246688/gists{/gist_id}",
"starred_url": "https://api.github.com/users/20246688/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/20246688/subscriptions",
"organizations_url": "https://api.github.com/users/20246688/orgs",
"repos_url": "https://api.github.com/users/20246688/repos",
"events_url": "https://api.github.com/users/20246688/events{/privacy}",
"received_events_url": "https://api.github.com/users/20246688/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8451/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1507
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1507/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1507/comments
|
https://api.github.com/repos/ollama/ollama/issues/1507/events
|
https://github.com/ollama/ollama/issues/1507
| 2,040,272,284
|
I_kwDOJ0Z1Ps55nBWc
| 1,507
|
Grammar and Logits questions
|
{
"login": "verdverm",
"id": 1390600,
"node_id": "MDQ6VXNlcjEzOTA2MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1390600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/verdverm",
"html_url": "https://github.com/verdverm",
"followers_url": "https://api.github.com/users/verdverm/followers",
"following_url": "https://api.github.com/users/verdverm/following{/other_user}",
"gists_url": "https://api.github.com/users/verdverm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/verdverm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/verdverm/subscriptions",
"organizations_url": "https://api.github.com/users/verdverm/orgs",
"repos_url": "https://api.github.com/users/verdverm/repos",
"events_url": "https://api.github.com/users/verdverm/events{/privacy}",
"received_events_url": "https://api.github.com/users/verdverm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2023-12-13T18:40:33
| 2024-12-05T00:39:22
| 2024-12-05T00:39:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Using a grammar to influence the logits of a model is becoming a useful technique.
- Is this possible with ollama? It seems like it ought to be.
- Can we get an example? I'm interested in doing so, but some guidance would be helpful, pun intended :] (https://github.com/guidance-ai/guidance)
I'm thinking there could be:
1. A base model with the logits & grammar code built in, using llama2 as an example, though most models ought to support this; codellama might be a better choice, since one often wants to restrict what comes out of that model.
2. Some examples that build on the base model, showing how to add the grammar and prompt.
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1507/reactions",
"total_count": 10,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1507/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2372
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2372/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2372/comments
|
https://api.github.com/repos/ollama/ollama/issues/2372/events
|
https://github.com/ollama/ollama/issues/2372
| 2,120,668,398
|
I_kwDOJ0Z1Ps5-ZtTu
| 2,372
|
How to stop/exit `ollama` service on macos?
|
{
"login": "Rusteam",
"id": 22130831,
"node_id": "MDQ6VXNlcjIyMTMwODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/22130831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rusteam",
"html_url": "https://github.com/Rusteam",
"followers_url": "https://api.github.com/users/Rusteam/followers",
"following_url": "https://api.github.com/users/Rusteam/following{/other_user}",
"gists_url": "https://api.github.com/users/Rusteam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rusteam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rusteam/subscriptions",
"organizations_url": "https://api.github.com/users/Rusteam/orgs",
"repos_url": "https://api.github.com/users/Rusteam/repos",
"events_url": "https://api.github.com/users/Rusteam/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rusteam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-02-06T12:32:13
| 2024-03-11T21:20:35
| 2024-03-11T21:20:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I haven't been able to find a command to stop the ollama service after running it with `ollama run <model>`. After a `/bye` command is issued, the service is still running at `localhost:11434`. Only force-quitting all ollama processes from Activity Monitor kills the service.
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2372/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4457
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4457/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4457/comments
|
https://api.github.com/repos/ollama/ollama/issues/4457/events
|
https://github.com/ollama/ollama/issues/4457
| 2,298,453,644
|
I_kwDOJ0Z1Ps6I_56M
| 4,457
|
error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'qwen2'
|
{
"login": "xdfnet",
"id": 68147460,
"node_id": "MDQ6VXNlcjY4MTQ3NDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/68147460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xdfnet",
"html_url": "https://github.com/xdfnet",
"followers_url": "https://api.github.com/users/xdfnet/followers",
"following_url": "https://api.github.com/users/xdfnet/following{/other_user}",
"gists_url": "https://api.github.com/users/xdfnet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xdfnet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xdfnet/subscriptions",
"organizations_url": "https://api.github.com/users/xdfnet/orgs",
"repos_url": "https://api.github.com/users/xdfnet/repos",
"events_url": "https://api.github.com/users/xdfnet/events{/privacy}",
"received_events_url": "https://api.github.com/users/xdfnet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 19
| 2024-05-15T17:26:23
| 2024-09-15T11:58:37
| 2024-06-02T00:56:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
D:\llama.cpp>ollama create eduaigc -f modelfile
transferring model data
using existing layer sha256:28ce318a0cda9dac3b5561c944c16c7e966b07890bed5bb12e122646bc8d71c4
creating new layer sha256:58353639a7c4b7529da8c5c8a63e81c426f206bab10cf82e4b9e427f15a466f8
creating new layer sha256:1da117d6723df114af0d948b614cae0aa684875e2775ca9607d23e2e0769651d
creating new layer sha256:9297f08dd6c6435240b5cddc93261e8a159aa0fecf010de4568ec2df2417bdb2
writing manifest
success

D:\llama.cpp>ollama run eduaigc
Error: llama runner process has terminated: exit status 0xc0000409
```
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.37
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4457/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4457/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1160
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1160/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1160/comments
|
https://api.github.com/repos/ollama/ollama/issues/1160/events
|
https://github.com/ollama/ollama/pull/1160
| 1,998,043,288
|
PR_kwDOJ0Z1Ps5fsZ3d
| 1,160
|
update faq
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-17T00:46:53
| 2023-11-17T00:48:52
| 2023-11-17T00:48:51
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1160",
"html_url": "https://github.com/ollama/ollama/pull/1160",
"diff_url": "https://github.com/ollama/ollama/pull/1160.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1160.patch",
"merged_at": "2023-11-17T00:48:51"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1160/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2503
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2503/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2503/comments
|
https://api.github.com/repos/ollama/ollama/issues/2503/events
|
https://github.com/ollama/ollama/issues/2503
| 2,135,256,562
|
I_kwDOJ0Z1Ps5_RW3y
| 2,503
|
Support Radeon RX 5700 XT (gfx1010)
|
{
"login": "scabros",
"id": 3169546,
"node_id": "MDQ6VXNlcjMxNjk1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3169546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scabros",
"html_url": "https://github.com/scabros",
"followers_url": "https://api.github.com/users/scabros/followers",
"following_url": "https://api.github.com/users/scabros/following{/other_user}",
"gists_url": "https://api.github.com/users/scabros/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scabros/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scabros/subscriptions",
"organizations_url": "https://api.github.com/users/scabros/orgs",
"repos_url": "https://api.github.com/users/scabros/repos",
"events_url": "https://api.github.com/users/scabros/events{/privacy}",
"received_events_url": "https://api.github.com/users/scabros/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 27
| 2024-02-14T21:47:13
| 2024-12-11T06:23:41
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi! Congrats on the great project!
We were trying to test ollama with AMD GPU support and struggled a bit, because the install guides don't make it clear that the CUDA libraries are required for ollama (or llama.cpp) to work properly even with team-red GPUs.
The error when running `ollama run llama2` was (leaving it here for reference):
```
...
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: ROCm0 KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: ROCm_Host input buffer size = 13.01 MiB
llama_new_context_with_model: ROCm0 compute buffer size = 164.00 MiB
llama_new_context_with_model: ROCm_Host compute buffer size = 8.00 MiB
llama_new_context_with_model: graph splits (measure): 3
CUDA error: shared object initialization failed
current device: 0, in function ggml_cuda_op_flatten at /home/devel/ollama/llm/llama.cpp/ggml-cuda.cu:9208
hipGetLastError()
loading library /tmp/ollama3700311510/rocm_v6/libext_server.so
GGML_ASSERT: /home/devel/ollama/llm/llama.cpp/ggml-cuda.cu:241: !"CUDA error"
[New LWP 4411]
[New LWP 4412]
[New LWP 4413]
[New LWP 4414]
[New LWP 4415]
...
```
After we installed the CUDA libraries per the instructions [HERE](https://developer.nvidia.com/cuda-downloads), the problem went away.
We also faced problems with ROCm 6.0.2 support for different GPU models (in our case an RX 5700 XT, arch gfx1010): the current binary packages don't contain TensileLibrary.dat (which somehow "maps" the kernel objects to use with different GPUs).
We had this error:
```
time=2024-02-09T17:05:38.481Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3752973675/rocm_v6/libext_server.so"
time=2024-02-09T17:05:38.481Z level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
rocBLAS error: Cannot read /opt/rocm/lib/rocblas/library/TensileLibrary.dat: Illegal seek for GPU arch : gfx1010
free(): invalid pointer
SIGABRT: abort
PC=0x7f4bb13739fc m=3 sigcode=18446744073709551610
```
So I downloaded the full rocm source and tried to build again, just to get the right command for compiling TensileLibrary.dat. This is the command that cmake uses:
```
'/home/devel/rocBLAS/build/virtualenv/lib/python3.10/site-packages/Tensile/bin/TensileCreateLibrary' '--merge-files' '--separate-architectures' '--lazy-library-loading' '--no-short-file-names' '--library-print-debug' '--code-object-version=default' '--cxx-compiler=hipcc' '--jobs=14' '--library-format=msgpack' '--architecture=gfx1012' '/home/devel/rocBLAS/library/src/blas3/Tensile/Logic/asm_full' '/home/devel/rocBLAS/build/Tensile' 'HIP'
```
This is the command I used to generate a new TensileLibrary.dat:
```
'/home/devel/rocBLAS/build/virtualenv/lib/python3.10/site-packages/Tensile/bin/TensileCreateLibrary' '--merge-files' '--no-short-file-names' '--library-print-debug' '--code-object-version=default' '--cxx-compiler=hipcc' '--jobs=14' '--library-format=msgpack' '/home/devel/rocBLAS/library/src/blas3/Tensile/Logic/asm_full' '/home/devel/rocBLAS/build/Tensile' 'HIP'
```
(I removed '--separate-architectures' and '--lazy-library-loading', per the instructions [in this bug](https://github.com/ROCm/Tensile/issues/1757).)
Hope this helps others! Thanks again!
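A related workaround often suggested for RDNA1 cards like the RX 5700 XT is to override the GFX version that ROCm sees, so it picks a kernel set that ships in the binary packages. Whether this works for gfx1010 varies by setup, so treat the snippet below as an experiment rather than a fix:

```shell
# Make ROCm treat the gfx1010 card as gfx1030 (experimental workaround)
export HSA_OVERRIDE_GFX_VERSION=10.3.0
ollama serve
```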
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2503/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2503/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7937
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7937/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7937/comments
|
https://api.github.com/repos/ollama/ollama/issues/7937/events
|
https://github.com/ollama/ollama/issues/7937
| 2,719,062,597
|
I_kwDOJ0Z1Ps6iEZpF
| 7,937
|
Using from Vim gives weird characters
|
{
"login": "gwpl",
"id": 221403,
"node_id": "MDQ6VXNlcjIyMTQwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/221403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gwpl",
"html_url": "https://github.com/gwpl",
"followers_url": "https://api.github.com/users/gwpl/followers",
"following_url": "https://api.github.com/users/gwpl/following{/other_user}",
"gists_url": "https://api.github.com/users/gwpl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gwpl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gwpl/subscriptions",
"organizations_url": "https://api.github.com/users/gwpl/orgs",
"repos_url": "https://api.github.com/users/gwpl/repos",
"events_url": "https://api.github.com/users/gwpl/events{/privacy}",
"received_events_url": "https://api.github.com/users/gwpl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-12-05T00:37:13
| 2024-12-14T15:40:59
| 2024-12-14T15:40:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When piping text from Vim and wanting a result, e.g.:
```
Example:
tomato
End.
```
When running `:2!ollama run smollm2:135m 'joke about'` in Vim to pipe the second line through the model and make a joke,
it shows weird characters in Vim:

However, if I run it from the terminal like this:
```
echo tomato | ollamasmollm2_135m 'joke about' | vim -
```
Then I see properly clean text in Vim.
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
ollama version is 0.4.6
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7937/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4221
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4221/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4221/comments
|
https://api.github.com/repos/ollama/ollama/issues/4221/events
|
https://github.com/ollama/ollama/issues/4221
| 2,282,409,317
|
I_kwDOJ0Z1Ps6ICs1l
| 4,221
|
DeepSeek releases the world's strongest open-source second-generation MoE model: DeepSeek-V2!
|
{
"login": "tqangxl",
"id": 9669944,
"node_id": "MDQ6VXNlcjk2Njk5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9669944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tqangxl",
"html_url": "https://github.com/tqangxl",
"followers_url": "https://api.github.com/users/tqangxl/followers",
"following_url": "https://api.github.com/users/tqangxl/following{/other_user}",
"gists_url": "https://api.github.com/users/tqangxl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tqangxl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tqangxl/subscriptions",
"organizations_url": "https://api.github.com/users/tqangxl/orgs",
"repos_url": "https://api.github.com/users/tqangxl/repos",
"events_url": "https://api.github.com/users/tqangxl/events{/privacy}",
"received_events_url": "https://api.github.com/users/tqangxl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-05-07T06:43:09
| 2024-06-11T22:12:34
| 2024-06-11T22:12:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://mp.weixin.qq.com/s/3AmJpYe1eLPHk7HJLYM24A
Please add DeepSeek-V2.
Both the model and the paper are open source.
DeepSeek has always upheld the most open spirit of open source, advancing humanity's AGI endeavor through open sourcing. This time, the DeepSeek-V2 model and paper will also be fully open-sourced, free for commercial use, with no application required:
Model weights:
https://huggingface.co/deepseek-ai
Technical report:
https://github.com/deepseek-ai/DeepSeek-V2/blob/main/deepseek-v2-tech-report.pdf
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4221/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4221/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5471
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5471/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5471/comments
|
https://api.github.com/repos/ollama/ollama/issues/5471/events
|
https://github.com/ollama/ollama/issues/5471
| 2,389,591,111
|
I_kwDOJ0Z1Ps6ObkRH
| 5,471
|
Available memory calculation on AMD APU no longer takes GTT into account
|
{
"login": "Ph0enix89",
"id": 1931477,
"node_id": "MDQ6VXNlcjE5MzE0Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1931477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ph0enix89",
"html_url": "https://github.com/Ph0enix89",
"followers_url": "https://api.github.com/users/Ph0enix89/followers",
"following_url": "https://api.github.com/users/Ph0enix89/following{/other_user}",
"gists_url": "https://api.github.com/users/Ph0enix89/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ph0enix89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ph0enix89/subscriptions",
"organizations_url": "https://api.github.com/users/Ph0enix89/orgs",
"repos_url": "https://api.github.com/users/Ph0enix89/repos",
"events_url": "https://api.github.com/users/Ph0enix89/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ph0enix89/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-07-03T22:23:35
| 2025-01-30T08:43:40
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
First, I acknowledge that running on the 780M GPU is not officially supported. However, for some scenarios it performs better than pure CPU, and in others it's perhaps more power-efficient to run on the GPU.
Perhaps this could also be relevant for bigger GPUs that are supported, as the patch states:
```
The solution is MI300A approach, i.e., let VRAM allocations go to GTT.
Then device and host can flexibly and effectively share memory resource.
```
However I have to admit that the last part is just my speculation.
In any case `6.10` kernel release candidates have an improved GPU memory allocation which now allows computational workloads to utilize `GTT` in addition to `VRAM` as opposed to just `VRAM` before. More details [here](https://www.phoronix.com/news/Linux-6.10-AMDKFD-Small-APUs). I believe [this](https://gitlab.freedesktop.org/drm/kernel/-/commit/89773b85599affe89dfc030aa1cb70d6ca7de4d3) to be the relevant commit.
In practice what it means is that on my laptop with 64 GB of memory I can play around with bigger models. Prior to `0.1.45` I see the following in the logs:
```
level=INFO source=types.go:71 msg="inference compute" id=0 library=rocm compute=gfx1103 driver=0.0 name=1002:15bf total="27.3 GiB" available="27.3 GiB"
```
While this is perhaps still less than what is actually available:
```
kernel: [drm] amdgpu: 8192M of VRAM memory ready
kernel: [drm] amdgpu: 27940M of GTT memory ready.
```
it still allows working with models bigger than 8 GB that still offer reasonable performance on the 780M GPU.
It would be nice to continue being able to use that extra memory in the future. Ideally, being able to access `VRAM`+`GTT`, e.g. 36 GB, would be even better.
I suspect that [this commit](https://github.com/ollama/ollama/commit/b32ebb4f2990817403484d50974077a5c52a4677) introduced some changes to how available memory is calculated. Starting from `0.1.45` I see the following in the logs:
```
level=INFO source=types.go:98 msg="inference compute" id=0 library=rocm compute=gfx1103 driver=0.0 name=1002:15bf total="8.0 GiB" available="6.4 GiB"
```
The new way of calculating the available memory does a better job of determining the actual available free memory but ideally it would be nice if would run this calculation against `VRAM`+`GTT`.
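To make the kernel-log numbers concrete, the combined `VRAM`+`GTT` budget works out as follows (a rough sketch; the real usable amount will be lower because of allocator overhead and other GTT consumers):

```shell
# Combined budget implied by the kernel log above, in MiB.
vram_mib=8192
gtt_mib=27940
total_mib=$((vram_mib + gtt_mib))
echo "${total_mib} MiB"   # → 36132 MiB, roughly 35.3 GiB
```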
### OS
Arch (6.10.0-rc6-1-mainline) + docker container
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.48-rocm
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5471/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5471/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2672
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2672/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2672/comments
|
https://api.github.com/repos/ollama/ollama/issues/2672/events
|
https://github.com/ollama/ollama/issues/2672
| 2,148,629,038
|
I_kwDOJ0Z1Ps6AEXou
| 2,672
|
Do Ollama support multiple GPUs working simultaneously?
|
{
"login": "papandadj",
"id": 25424898,
"node_id": "MDQ6VXNlcjI1NDI0ODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/25424898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/papandadj",
"html_url": "https://github.com/papandadj",
"followers_url": "https://api.github.com/users/papandadj/followers",
"following_url": "https://api.github.com/users/papandadj/following{/other_user}",
"gists_url": "https://api.github.com/users/papandadj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/papandadj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/papandadj/subscriptions",
"organizations_url": "https://api.github.com/users/papandadj/orgs",
"repos_url": "https://api.github.com/users/papandadj/repos",
"events_url": "https://api.github.com/users/papandadj/events{/privacy}",
"received_events_url": "https://api.github.com/users/papandadj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 11
| 2024-02-22T09:37:50
| 2025-01-28T18:19:18
| 2024-02-26T12:11:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have 8 RTX 4090 GPUs. Can they support a 70B-int4 parameter model?
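For a rough back-of-the-envelope answer (an estimate only; KV cache, activations, and per-GPU sharding overhead are ignored), int4 weights take about half a byte per parameter:

```shell
# Rough capacity estimate: int4 quantization ~= 0.5 bytes per parameter.
params_b=70                     # model size in billions of parameters
weights_gb=$((params_b / 2))    # ~35 GB of weights at int4
vram_gb=$((8 * 24))             # 8x RTX 4090 with 24 GB each = 192 GB total
echo "~${weights_gb} GB of weights vs ${vram_gb} GB of VRAM"
```

By this estimate a 70B int4 model fits comfortably, provided the layers are split across the GPUs.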
|
{
"login": "papandadj",
"id": 25424898,
"node_id": "MDQ6VXNlcjI1NDI0ODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/25424898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/papandadj",
"html_url": "https://github.com/papandadj",
"followers_url": "https://api.github.com/users/papandadj/followers",
"following_url": "https://api.github.com/users/papandadj/following{/other_user}",
"gists_url": "https://api.github.com/users/papandadj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/papandadj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/papandadj/subscriptions",
"organizations_url": "https://api.github.com/users/papandadj/orgs",
"repos_url": "https://api.github.com/users/papandadj/repos",
"events_url": "https://api.github.com/users/papandadj/events{/privacy}",
"received_events_url": "https://api.github.com/users/papandadj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2672/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7453
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7453/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7453/comments
|
https://api.github.com/repos/ollama/ollama/issues/7453/events
|
https://github.com/ollama/ollama/pull/7453
| 2,627,632,320
|
PR_kwDOJ0Z1Ps6AkBDt
| 7,453
|
runner.go: Don't set cross attention before sending embeddings
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-31T19:08:42
| 2024-10-31T20:56:09
| 2024-10-31T20:56:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7453",
"html_url": "https://github.com/ollama/ollama/pull/7453",
"diff_url": "https://github.com/ollama/ollama/pull/7453.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7453.patch",
"merged_at": "2024-10-31T20:56:08"
}
|
Currently if an input has embeddings at any point then we will set cross attention to true from the beginning. This means that any tokens before the embeddings are sent will incorrectly have cross attention layers applied.
This only sets cross attention when we have an embedding, either previously in this sequence or in the cache. It also makes cross attention capable of supporting parallelism at the runner level, though the mllama implementation doesn't support that yet.
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7453/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8105
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8105/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8105/comments
|
https://api.github.com/repos/ollama/ollama/issues/8105/events
|
https://github.com/ollama/ollama/issues/8105
| 2,740,249,062
|
I_kwDOJ0Z1Ps6jVOHm
| 8,105
|
Digest mismatch for llama3.3
|
{
"login": "sanity",
"id": 23075,
"node_id": "MDQ6VXNlcjIzMDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/23075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanity",
"html_url": "https://github.com/sanity",
"followers_url": "https://api.github.com/users/sanity/followers",
"following_url": "https://api.github.com/users/sanity/following{/other_user}",
"gists_url": "https://api.github.com/users/sanity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanity/subscriptions",
"organizations_url": "https://api.github.com/users/sanity/orgs",
"repos_url": "https://api.github.com/users/sanity/repos",
"events_url": "https://api.github.com/users/sanity/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanity/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 18
| 2024-12-15T03:09:51
| 2025-01-26T10:37:49
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've tried this several times - same result every time:
```
~ took 43s ❯ ollama run llama3.3
pulling manifest
pulling 4824460d29f2... 100% ▕████████████████████████████████████▏ 42 GB
pulling 948af2743fc7... 100% ▕████████████████████████████████████▏ 1.5 KB
pulling bc371a43ce90... 100% ▕████████████████████████████████████▏ 7.6 KB
pulling 53a87df39647... 100% ▕████████████████████████████████████▏ 5.6 KB
pulling 56bb8bd477a5... 100% ▕████████████████████████████████████▏ 96 B
pulling c7091aa45e9b... 100% ▕████████████████████████████████████▏ 562 B
verifying sha256 digest
Error: digest mismatch, file must be downloaded again: want sha256:4824460d29f2058aaf6e1118a63a7a197a09bed509f0e7d4e2efb1ee273b447d, got sha256:4e7ab3e3f5fba9ba2d72787c3e5a8e0d4931059bae000821f1278753855af7ac
```
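To distinguish on-disk corruption from a registry-side problem, the downloaded blob can be checked by hand with `sha256sum`, the same kind of check ollama performs on pull. The snippet below demonstrates it on a temporary file; for the real check, point `$blob` at the blob in the models directory (e.g. `<models>/blobs/sha256-4824460d29f2058aaf6e1118a63a7a197a09bed509f0e7d4e2efb1ee273b447d`; the exact path varies by install):

```shell
# Sketch: verify a file against an expected sha256 digest.
blob=$(mktemp)
printf 'abc' > "$blob"                  # stand-in file; sha256("abc") is a well-known test vector
want=ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
got=$(sha256sum "$blob" | cut -d' ' -f1)
if [ "$got" = "$want" ]; then echo "digest ok"; else echo "digest mismatch: got $got"; fi
rm -f "$blob"
```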
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8105/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8105/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/323
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/323/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/323/comments
|
https://api.github.com/repos/ollama/ollama/issues/323/events
|
https://github.com/ollama/ollama/pull/323
| 1,846,012,101
|
PR_kwDOJ0Z1Ps5Xrv4W
| 323
|
fix could not convert int
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-10T23:22:58
| 2023-08-10T23:24:34
| 2023-08-10T23:24:33
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/323",
"html_url": "https://github.com/ollama/ollama/pull/323",
"diff_url": "https://github.com/ollama/ollama/pull/323.diff",
"patch_url": "https://github.com/ollama/ollama/pull/323.patch",
"merged_at": "2023-08-10T23:24:33"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/323/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8669
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8669/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8669/comments
|
https://api.github.com/repos/ollama/ollama/issues/8669/events
|
https://github.com/ollama/ollama/issues/8669
| 2,818,980,252
|
I_kwDOJ0Z1Ps6oBjmc
| 8,669
|
deepseek-r1:32b does not support tools? The qwen2.5 base model should.
|
{
"login": "HuChundong",
"id": 3194932,
"node_id": "MDQ6VXNlcjMxOTQ5MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3194932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HuChundong",
"html_url": "https://github.com/HuChundong",
"followers_url": "https://api.github.com/users/HuChundong/followers",
"following_url": "https://api.github.com/users/HuChundong/following{/other_user}",
"gists_url": "https://api.github.com/users/HuChundong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HuChundong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuChundong/subscriptions",
"organizations_url": "https://api.github.com/users/HuChundong/orgs",
"repos_url": "https://api.github.com/users/HuChundong/repos",
"events_url": "https://api.github.com/users/HuChundong/events{/privacy}",
"received_events_url": "https://api.github.com/users/HuChundong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-29T18:49:54
| 2025-01-29T20:29:43
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I use AutoGen, deepseek-r1:32b raises the error: model does not support tools.
### OS
WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8669/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8077
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8077/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8077/comments
|
https://api.github.com/repos/ollama/ollama/issues/8077/events
|
https://github.com/ollama/ollama/issues/8077
| 2,736,911,996
|
I_kwDOJ0Z1Ps6jIfZ8
| 8,077
|
Add support for setting models to private on `ollama push`
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-12-12T21:27:30
| 2024-12-12T21:27:39
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, while it's possible to create private repositories through the ollama.com web interface, there's no way to initialize a new private repository directly through the `ollama push` CLI command. This creates friction in automated workflows and requires context switching between the CLI and the web interface when working with private models.
# Proposed Solution
Extend the ollama push command to support creating private repositories on initialization. This would allow users to create and push to private repos in a single command.
Example usage:
```bash
ollama push username/my-model:latest --private
```
# Justification
- Enables fully automated workflows for private model management
- Maintains consistency with other CLI tools (like the GitHub CLI) that support private repo creation
- Reduces context switching between the CLI and web interface
- Important for organizations that want to automate private model deployment pipelines
# Alternative Solutions
1. Status quo: continue requiring the web interface for private repo creation
   - Pro: simpler CLI interface
   - Con: breaks automation workflows
2. A user-level setting on ollama.com that can set new repos to automatically be private for the user.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8077/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8691
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8691/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8691/comments
|
https://api.github.com/repos/ollama/ollama/issues/8691/events
|
https://github.com/ollama/ollama/pull/8691
| 2,820,813,762
|
PR_kwDOJ0Z1Ps6Jf9LD
| 8,691
|
Fix install_cuda_driver_yum() for dnf5
|
{
"login": "FreeCap23",
"id": 62378314,
"node_id": "MDQ6VXNlcjYyMzc4MzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/62378314?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FreeCap23",
"html_url": "https://github.com/FreeCap23",
"followers_url": "https://api.github.com/users/FreeCap23/followers",
"following_url": "https://api.github.com/users/FreeCap23/following{/other_user}",
"gists_url": "https://api.github.com/users/FreeCap23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FreeCap23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FreeCap23/subscriptions",
"organizations_url": "https://api.github.com/users/FreeCap23/orgs",
"repos_url": "https://api.github.com/users/FreeCap23/repos",
"events_url": "https://api.github.com/users/FreeCap23/events{/privacy}",
"received_events_url": "https://api.github.com/users/FreeCap23/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-30T13:15:48
| 2025-01-30T13:15:48
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8691",
"html_url": "https://github.com/ollama/ollama/pull/8691",
"diff_url": "https://github.com/ollama/ollama/pull/8691.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8691.patch",
"merged_at": null
}
|
This commit checks the dnf version, because in dnf version 5 the `--add-repo` flag was changed to `--addrepo`, which would cause the script to fail.
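A minimal sketch of the idea behind this fix (the function name `install_cuda_repo`, the version-parsing helper, and the exact shape of the `dnf --version` banner are assumptions for illustration, not the script's actual code):

```shell
#!/bin/sh
# Sketch: dnf5 renamed `config-manager --add-repo` to the
# `config-manager addrepo` subcommand, so the install script must
# branch on the dnf major version.

# Extract the first integer printed by a `dnf --version` banner.
# Assumption: the major version is the first number on the first line
# (e.g. "4.14.0" for dnf4; dnf5's banner leads with "dnf5").
dnf_major() {
    printf '%s\n' "$1" | head -n 1 | grep -o '[0-9]\+' | head -n 1
}

install_cuda_repo() {
    repo_url=$1
    if [ "$(dnf_major "$(dnf --version)")" -ge 5 ]; then
        # dnf5 syntax
        dnf config-manager addrepo --from-repofile="$repo_url"
    else
        # dnf4 syntax
        dnf config-manager --add-repo "$repo_url"
    fi
}
```

The functions are only defined here, not invoked, so the sketch is safe to source on machines without dnf installed.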
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8691/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2178
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2178/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2178/comments
|
https://api.github.com/repos/ollama/ollama/issues/2178/events
|
https://github.com/ollama/ollama/issues/2178
| 2,099,080,856
|
I_kwDOJ0Z1Ps59HW6Y
| 2,178
|
Additional package managers
|
{
"login": "rubencallewaert",
"id": 68565649,
"node_id": "MDQ6VXNlcjY4NTY1NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/68565649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rubencallewaert",
"html_url": "https://github.com/rubencallewaert",
"followers_url": "https://api.github.com/users/rubencallewaert/followers",
"following_url": "https://api.github.com/users/rubencallewaert/following{/other_user}",
"gists_url": "https://api.github.com/users/rubencallewaert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rubencallewaert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rubencallewaert/subscriptions",
"organizations_url": "https://api.github.com/users/rubencallewaert/orgs",
"repos_url": "https://api.github.com/users/rubencallewaert/repos",
"events_url": "https://api.github.com/users/rubencallewaert/events{/privacy}",
"received_events_url": "https://api.github.com/users/rubencallewaert/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 5895046125,
"node_id": "LA_kwDOJ0Z1Ps8AAAABX19D7Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/integration",
"name": "integration",
"color": "92E43A",
"default": false,
"description": ""
},
{
"id": 6677279472,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A",
"url": "https://api.github.com/repos/ollama/ollama/labels/macos",
"name": "macos",
"color": "E2DBC0",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 2
| 2024-01-24T20:50:28
| 2024-03-11T22:18:28
| 2024-03-11T22:18:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
While it's true that Homebrew is by far the most popular package manager on macOS, it would be great to be able to install Ollama via MacPorts.
This gives people maximum freedom to install Ollama the way they want; for a lot of people, including me, it isn't really acceptable to run an Electron GUI application that needs to be granted root privileges just to install a CLI.
I understand wanting to keep the barrier to entry as low as possible for as many people as possible, but there should always be a secondary option to install the CLI with a package manager of your choice.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2178/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7568
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7568/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7568/comments
|
https://api.github.com/repos/ollama/ollama/issues/7568/events
|
https://github.com/ollama/ollama/issues/7568
| 2,643,145,440
|
I_kwDOJ0Z1Ps6dizLg
| 7,568
|
CUDA error: unspecified launch failure in function ggml_backend_cuda_synchronize at ggml-cuda.cu:2508
|
{
"login": "romansvet",
"id": 19204498,
"node_id": "MDQ6VXNlcjE5MjA0NDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/19204498?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/romansvet",
"html_url": "https://github.com/romansvet",
"followers_url": "https://api.github.com/users/romansvet/followers",
"following_url": "https://api.github.com/users/romansvet/following{/other_user}",
"gists_url": "https://api.github.com/users/romansvet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/romansvet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/romansvet/subscriptions",
"organizations_url": "https://api.github.com/users/romansvet/orgs",
"repos_url": "https://api.github.com/users/romansvet/repos",
"events_url": "https://api.github.com/users/romansvet/events{/privacy}",
"received_events_url": "https://api.github.com/users/romansvet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-11-08T07:32:33
| 2024-11-08T22:08:32
| 2024-11-08T22:08:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am using Open WebUI version v0.3.30, and when I try to analyze an image using the llama3.2-vision:latest model I get nothing.
The ollama service log shows the following:
```
Nov 08 07:06:28 ollama[11902]: time=2024-11-08T07:06:28.729Z level=WARN source=sched.go:137 msg="multimodal models don't support parallel requests yet"
Nov 08 07:06:28 ollama[11902]: time=2024-11-08T07:06:28.905Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 library=cuda parallel=1 required="13.0 GiB"
Nov 08 07:06:29 ollama[11902]: time=2024-11-08T07:06:29.030Z level=INFO source=server.go:105 msg="system memory" total="125.7 GiB" free="123.6 GiB" free_swap="32.0 GiB"
Nov 08 07:06:29 ollama[11902]: time=2024-11-08T07:06:29.032Z level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=41 layers.split=13,28 memory.available="[7.6 GiB 7.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="13.0 GiB" memory.required.partial="13.0 GiB" memory.required.kv="656.2 MiB" memory.required.allocations="[7.6 GiB 5.4 GiB]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="669.5 MiB" memory.graph.partial="669.5 MiB"
Nov 08 07:06:29 ollama[11902]: time=2024-11-08T07:06:29.032Z level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama1948393618/runners/cuda_v12/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --embedding --n-gpu-layers 41 --mmproj /usr/share/ollama/.ollama/models/blobs/sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --threads 4 --parallel 1 --tensor-split 13,28 --port 40263"
Nov 08 07:06:29 ollama[11902]: time=2024-11-08T07:06:29.032Z level=INFO source=sched.go:449 msg="loaded runners" count=1
Nov 08 07:06:29 ollama[11902]: time=2024-11-08T07:06:29.032Z level=INFO source=server.go:567 msg="waiting for llama runner to start responding"
Nov 08 07:06:29 ollama[11902]: time=2024-11-08T07:06:29.032Z level=INFO source=server.go:601 msg="waiting for server to become available" status="llm server error"
Nov 08 07:06:29 ollama[11902]: time=2024-11-08T07:06:29.056Z level=INFO source=runner.go:869 msg="starting go runner"
Nov 08 07:06:29 ollama[11902]: time=2024-11-08T07:06:29.056Z level=INFO source=runner.go:870 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=4
Nov 08 07:06:29 ollama[11902]: time=2024-11-08T07:06:29.056Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:40263"
Nov 08 07:06:29 ollama[11902]: llama_model_loader: loaded meta data with 27 key-value pairs and 396 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 (version GGUF V3 (latest))
Nov 08 07:06:29 ollama[11902]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 0: general.architecture str = mllama
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 1: general.type str = model
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 2: general.name str = Model
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 3: general.size_label str = 10B
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 4: mllama.block_count u32 = 40
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 5: mllama.context_length u32 = 131072
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 6: mllama.embedding_length u32 = 4096
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 7: mllama.feed_forward_length u32 = 14336
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 8: mllama.attention.head_count u32 = 32
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 9: mllama.attention.head_count_kv u32 = 8
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 10: mllama.rope.freq_base f32 = 500000.000000
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 11: mllama.attention.layer_norm_rms_epsilon f32 = 0.000010
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 12: general.file_type u32 = 15
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 13: mllama.vocab_size u32 = 128256
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 14: mllama.rope.dimension_count u32 = 128
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 15: mllama.attention.cross_attention_layers arr[i32,8] = [3, 8, 13, 18, 23, 28, 33, 38]
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 16: tokenizer.ggml.add_bos_token bool = true
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 18: tokenizer.ggml.pre str = llama-bpe
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,128257] = ["!", "\"", "#", "$", "%", "&", "'", ...
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,128257] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 128000
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 128009
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 24: tokenizer.ggml.padding_token_id u32 = 128004
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 25: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - kv 26: general.quantization_version u32 = 2
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - type f32: 114 tensors
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - type q4_K: 245 tensors
Nov 08 07:06:29 ollama[11902]: llama_model_loader: - type q6_K: 37 tensors
Nov 08 07:06:29 ollama[11902]: time=2024-11-08T07:06:29.283Z level=INFO source=server.go:601 msg="waiting for server to become available" status="llm server loading model"
Nov 08 07:06:29 ollama[11902]: llm_load_vocab: special tokens cache size = 257
Nov 08 07:06:29 ollama[11902]: llm_load_vocab: token to piece cache size = 0.7999 MB
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: format = GGUF V3 (latest)
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: arch = mllama
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: vocab type = BPE
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_vocab = 128256
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_merges = 280147
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: vocab_only = 0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_ctx_train = 131072
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_embd = 4096
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_layer = 40
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_head = 32
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_head_kv = 8
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_rot = 128
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_swa = 0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_embd_head_k = 128
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_embd_head_v = 128
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_gqa = 4
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_embd_k_gqa = 1024
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_embd_v_gqa = 1024
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: f_norm_eps = 0.0e+00
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: f_logit_scale = 0.0e+00
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_ff = 14336
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_expert = 0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_expert_used = 0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: causal attn = 1
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: pooling type = 0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: rope type = 0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: rope scaling = linear
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: freq_base_train = 500000.0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: freq_scale_train = 1
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: n_ctx_orig_yarn = 131072
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: rope_finetuned = unknown
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: ssm_d_conv = 0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: ssm_d_inner = 0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: ssm_d_state = 0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: ssm_dt_rank = 0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: model type = 11B
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: model ftype = Q4_K - Medium
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: model params = 9.78 B
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: model size = 5.55 GiB (4.87 BPW)
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: general.name = Model
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: PAD token = 128004 '<|finetune_right_pad_id|>'
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: LF token = 128 'Ä'
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
Nov 08 07:06:29 ollama[11902]: llm_load_print_meta: max token length = 256
Nov 08 07:06:29 ollama[11902]: llama_model_load: vocab mismatch 128256 !- 128257 ...
Nov 08 07:06:29 ollama[11902]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Nov 08 07:06:29 ollama[11902]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 08 07:06:29 ollama[11902]: ggml_cuda_init: found 2 CUDA devices:
Nov 08 07:06:29 ollama[11902]: Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes
Nov 08 07:06:29 ollama[11902]: Device 1: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes
Nov 08 07:06:29 ollama[11902]: llm_load_tensors: ggml ctx size = 0.54 MiB
Nov 08 07:06:29 ollama[11902]: llm_load_tensors: offloading 40 repeating layers to GPU
Nov 08 07:06:29 ollama[11902]: llm_load_tensors: offloading non-repeating layers to GPU
Nov 08 07:06:29 ollama[11902]: llm_load_tensors: offloaded 41/41 layers to GPU
Nov 08 07:06:29 ollama[11902]: llm_load_tensors: CPU buffer size = 281.83 MiB
Nov 08 07:06:29 ollama[11902]: llm_load_tensors: CUDA0 buffer size = 1628.66 MiB
Nov 08 07:06:29 ollama[11902]: llm_load_tensors: CUDA1 buffer size = 3768.85 MiB
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: n_ctx = 2048
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: n_batch = 512
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: n_ubatch = 512
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: flash_attn = 0
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: freq_base = 500000.0
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: freq_scale = 1
Nov 08 07:06:35 ollama[11902]: llama_kv_cache_init: CUDA0 KV buffer size = 188.06 MiB
Nov 08 07:06:35 ollama[11902]: llama_kv_cache_init: CUDA1 KV buffer size = 468.19 MiB
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: KV self size = 656.25 MiB, K (f16): 328.12 MiB, V (f16): 328.12 MiB
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: CUDA_Host output buffer size = 0.50 MiB
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: CUDA0 compute buffer size = 208.01 MiB
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: CUDA1 compute buffer size = 306.52 MiB
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: CUDA_Host compute buffer size = 24.02 MiB
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: graph nodes = 1030
Nov 08 07:06:35 ollama[11902]: llama_new_context_with_model: graph splits = 3
Nov 08 07:06:35 ollama[11902]: mllama_model_load: model name: Llama-3.2-11B-Vision-Instruct
Nov 08 07:06:35 ollama[11902]: mllama_model_load: description: vision encoder for Mllama
Nov 08 07:06:35 ollama[11902]: mllama_model_load: GGUF version: 3
Nov 08 07:06:35 ollama[11902]: mllama_model_load: alignment: 32
Nov 08 07:06:35 ollama[11902]: mllama_model_load: n_tensors: 512
Nov 08 07:06:35 ollama[11902]: mllama_model_load: n_kv: 17
Nov 08 07:06:35 ollama[11902]: mllama_model_load: ftype: f16
Nov 08 07:06:35 ollama[11902]: mllama_model_load:
Nov 08 07:06:35 ollama[11902]: mllama_model_load: vision using CUDA backend
Nov 08 07:06:36 ollama[11902]: mllama_model_load: compute allocated memory: 2853.34 MB
Nov 08 07:06:36 ollama[11902]: time=2024-11-08T07:06:36.555Z level=INFO source=server.go:606 msg="llama runner started in 7.52 seconds"
Nov 08 07:06:40 ollama[11902]: CUDA error: unspecified launch failure
Nov 08 07:06:40 ollama[11902]: current device: 1, in function ggml_backend_cuda_synchronize at ggml-cuda.cu:2508
Nov 08 07:06:40 ollama[11902]: cudaStreamSynchronize(cuda_ctx->stream())
Nov 08 07:06:40 ollama[11902]: ggml-cuda.cu:132: CUDA error
Nov 08 07:06:40 ollama[11902]: SIGSEGV: segmentation violation
Nov 08 07:06:40 ollama[11902]: PC=0x7f36e43b4c47 m=9 sigcode=1 addr=0x209e03fd4
Nov 08 07:06:40 ollama[11902]: signal arrived during cgo execution
Nov 08 07:06:40 ollama[11902]: goroutine 7 gp=0xc000164000 m=9 mp=0xc0001a2808 [syscall]:
Nov 08 07:06:40 ollama[11902]: runtime.cgocall(0x559312fb4eb0, 0xc000065b60)
Nov 08 07:06:40 ollama[11902]: runtime/cgocall.go:157 +0x4b fp=0xc000065b38 sp=0xc000065b00 pc=0x559312d373cb
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7f365005b650, {0xe, 0x7f36503ad760, 0x0, 0x0, 0x7f36503adf70, 0x7f365019ef40, 0x7f365019f750, 0x7f3642c4d360, 0x0, ...})
Nov 08 07:06:40 ollama[11902]: _cgo_gotypes.go:543 +0x52 fp=0xc000065b60 sp=0xc000065b38 pc=0x559312e34952
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama.(*Context).Decode.func1(0x559312fb0ceb?, 0x7f365005b650?)
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/llama.go:167 +0xd8 fp=0xc000065c80 sp=0xc000065b60 pc=0x559312e36e78
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama.(*Context).Decode(0xc0000163c0?, 0x1?)
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/llama.go:167 +0x17 fp=0xc000065cc8 sp=0xc000065c80 pc=0x559312e36cd7
Nov 08 07:06:40 ollama[11902]: main.(*Server).processBatch(0xc000128120, 0xc0000c0000, 0xc0000c0070)
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:424 +0x29e fp=0xc000065ed0 sp=0xc000065cc8 pc=0x559312fafd1e
Nov 08 07:06:40 ollama[11902]: main.(*Server).run(0xc000128120, {0x5593132eea40, 0xc00007c0a0})
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:338 +0x1a5 fp=0xc000065fb8 sp=0xc000065ed0 pc=0x559312faf705
Nov 08 07:06:40 ollama[11902]: main.main.gowrap2()
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:907 +0x28 fp=0xc000065fe0 sp=0xc000065fb8 pc=0x559312fb3ee8
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000065fe8 sp=0xc000065fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by main.main in goroutine 1
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:907 +0xcab
Nov 08 07:06:40 ollama[11902]: goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0xc000030008?, 0x0?, 0xc0?, 0x61?, 0xc0000298c0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc00014d888 sp=0xc00014d868 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.netpollblock(0xc000029920?, 0x12d36b26?, 0x93?)
Nov 08 07:06:40 ollama[11902]: runtime/netpoll.go:573 +0xf7 fp=0xc00014d8c0 sp=0xc00014d888 pc=0x559312d66257
Nov 08 07:06:40 ollama[11902]: internal/poll.runtime_pollWait(0x7f36dcfc9860, 0x72)
Nov 08 07:06:40 ollama[11902]: runtime/netpoll.go:345 +0x85 fp=0xc00014d8e0 sp=0xc00014d8c0 pc=0x559312d9aaa5
Nov 08 07:06:40 ollama[11902]: internal/poll.(*pollDesc).wait(0x3?, 0x3fe?, 0x0)
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00014d908 sp=0xc00014d8e0 pc=0x559312dea9c7
Nov 08 07:06:40 ollama[11902]: internal/poll.(*pollDesc).waitRead(...)
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_poll_runtime.go:89
Nov 08 07:06:40 ollama[11902]: internal/poll.(*FD).Accept(0xc000152100)
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_unix.go:611 +0x2ac fp=0xc00014d9b0 sp=0xc00014d908 pc=0x559312debe8c
Nov 08 07:06:40 ollama[11902]: net.(*netFD).accept(0xc000152100)
Nov 08 07:06:40 ollama[11902]: net/fd_unix.go:172 +0x29 fp=0xc00014da68 sp=0xc00014d9b0 pc=0x559312e5a8a9
Nov 08 07:06:40 ollama[11902]: net.(*TCPListener).accept(0xc000070200)
Nov 08 07:06:40 ollama[11902]: net/tcpsock_posix.go:159 +0x1e fp=0xc00014da90 sp=0xc00014da68 pc=0x559312e6b5de
Nov 08 07:06:40 ollama[11902]: net.(*TCPListener).Accept(0xc000070200)
Nov 08 07:06:40 ollama[11902]: net/tcpsock.go:327 +0x30 fp=0xc00014dac0 sp=0xc00014da90 pc=0x559312e6a930
Nov 08 07:06:40 ollama[11902]: net/http.(*onceCloseListener).Accept(0xc0001281b0?)
Nov 08 07:06:40 ollama[11902]: <autogenerated>:1 +0x24 fp=0xc00014dad8 sp=0xc00014dac0 pc=0x559312f91a44
Nov 08 07:06:40 ollama[11902]: net/http.(*Server).Serve(0xc0001620f0, {0x5593132ee400, 0xc000070200})
Nov 08 07:06:40 ollama[11902]: net/http/server.go:3260 +0x33e fp=0xc00014dc08 sp=0xc00014dad8 pc=0x559312f8885e
Nov 08 07:06:40 ollama[11902]: main.main()
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:927 +0x104c fp=0xc00014df50 sp=0xc00014dc08 pc=0x559312fb3c6c
Nov 08 07:06:40 ollama[11902]: runtime.main()
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:271 +0x29d fp=0xc00014dfe0 sp=0xc00014df50 pc=0x559312d6dbdd
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc00014dfe8 sp=0xc00014dfe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000050fa8 sp=0xc000050f88 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.goparkunlock(...)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:408
Nov 08 07:06:40 ollama[11902]: runtime.forcegchelper()
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:326 +0xb8 fp=0xc000050fe0 sp=0xc000050fa8 pc=0x559312d6de98
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000050fe8 sp=0xc000050fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.init.6 in goroutine 1
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:314 +0x1a
Nov 08 07:06:40 ollama[11902]: goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000051780 sp=0xc000051760 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.goparkunlock(...)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:408
Nov 08 07:06:40 ollama[11902]: runtime.bgsweep(0xc000022070)
Nov 08 07:06:40 ollama[11902]: runtime/mgcsweep.go:318 +0xdf fp=0xc0000517c8 sp=0xc000051780 pc=0x559312d58b9f
Nov 08 07:06:40 ollama[11902]: runtime.gcenable.gowrap1()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:203 +0x25 fp=0xc0000517e0 sp=0xc0000517c8 pc=0x559312d4d685
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000517e8 sp=0xc0000517e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcenable in goroutine 1
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:203 +0x66
Nov 08 07:06:40 ollama[11902]: goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x10000?, 0x5593131eff98?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000051f78 sp=0xc000051f58 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.goparkunlock(...)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:408
Nov 08 07:06:40 ollama[11902]: runtime.(*scavengerState).park(0x5593134bc4c0)
Nov 08 07:06:40 ollama[11902]: runtime/mgcscavenge.go:425 +0x49 fp=0xc000051fa8 sp=0xc000051f78 pc=0x559312d56549
Nov 08 07:06:40 ollama[11902]: runtime.bgscavenge(0xc000022070)
Nov 08 07:06:40 ollama[11902]: runtime/mgcscavenge.go:658 +0x59 fp=0xc000051fc8 sp=0xc000051fa8 pc=0x559312d56af9
Nov 08 07:06:40 ollama[11902]: runtime.gcenable.gowrap2()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:204 +0x25 fp=0xc000051fe0 sp=0xc000051fc8 pc=0x559312d4d625
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000051fe8 sp=0xc000051fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcenable in goroutine 1
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:204 +0xa5
Nov 08 07:06:40 ollama[11902]: goroutine 5 gp=0xc000007c00 m=nil [finalizer wait]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x0?, 0x5593132ea600?, 0x0?, 0xc0?, 0x1000000010?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000050620 sp=0xc000050600 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.runfinq()
Nov 08 07:06:40 ollama[11902]: runtime/mfinal.go:194 +0x107 fp=0xc0000507e0 sp=0xc000050620 pc=0x559312d4c6c7
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000507e8 sp=0xc0000507e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.createfing in goroutine 1
Nov 08 07:06:40 ollama[11902]: runtime/mfinal.go:164 +0x3d
Nov 08 07:06:40 ollama[11902]: goroutine 49 gp=0xc000007dc0 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc2b5abd3?, 0xc00002e9d0?, 0x0?, 0x0?, 0x5593135a5060?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000052f50 sp=0xc000052f30 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc000052fe0 sp=0xc000052f50 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000052fe8 sp=0xc000052fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 8 gp=0xc0001641c0 m=nil [select]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0xc000029a80?, 0x2?, 0x60?, 0x0?, 0xc000029824?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000029698 sp=0xc000029678 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.selectgo(0xc000029a80, 0xc000029820, 0x14?, 0x0, 0x1?, 0x1)
Nov 08 07:06:40 ollama[11902]: runtime/select.go:327 +0x725 fp=0xc0000297b8 sp=0xc000029698 pc=0x559312d7f3e5
Nov 08 07:06:40 ollama[11902]: main.(*Server).completion(0xc000128120, {0x5593132ee5b0, 0xc0001507e0}, 0xc00012a900)
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:652 +0x8fe fp=0xc000029ab8 sp=0xc0000297b8 pc=0x559312fb167e
Nov 08 07:06:40 ollama[11902]: main.(*Server).completion-fm({0x5593132ee5b0?, 0xc0001507e0?}, 0x559312f8cb8d?)
Nov 08 07:06:40 ollama[11902]: <autogenerated>:1 +0x36 fp=0xc000029ae8 sp=0xc000029ab8 pc=0x559312fb46d6
Nov 08 07:06:40 ollama[11902]: net/http.HandlerFunc.ServeHTTP(0xc00010cc30?, {0x5593132ee5b0?, 0xc0001507e0?}, 0x10?)
Nov 08 07:06:40 ollama[11902]: net/http/server.go:2171 +0x29 fp=0xc000029b10 sp=0xc000029ae8 pc=0x559312f85629
Nov 08 07:06:40 ollama[11902]: net/http.(*ServeMux).ServeHTTP(0x559312d40f85?, {0x5593132ee5b0, 0xc0001507e0}, 0xc00012a900)
Nov 08 07:06:40 ollama[11902]: net/http/server.go:2688 +0x1ad fp=0xc000029b60 sp=0xc000029b10 pc=0x559312f874ad
Nov 08 07:06:40 ollama[11902]: net/http.serverHandler.ServeHTTP({0x5593132ed900?}, {0x5593132ee5b0?, 0xc0001507e0?}, 0x6?)
Nov 08 07:06:40 ollama[11902]: net/http/server.go:3142 +0x8e fp=0xc000029b90 sp=0xc000029b60 pc=0x559312f884ce
Nov 08 07:06:40 ollama[11902]: net/http.(*conn).serve(0xc0001281b0, {0x5593132eea08, 0xc00010ae10})
Nov 08 07:06:40 ollama[11902]: net/http/server.go:2044 +0x5e8 fp=0xc000029fb8 sp=0xc000029b90 pc=0x559312f84268
Nov 08 07:06:40 ollama[11902]: net/http.(*Server).Serve.gowrap3()
Nov 08 07:06:40 ollama[11902]: net/http/server.go:3290 +0x28 fp=0xc000029fe0 sp=0xc000029fb8 pc=0x559312f88c48
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000029fe8 sp=0xc000029fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by net/http.(*Server).Serve in goroutine 1
Nov 08 07:06:40 ollama[11902]: net/http/server.go:3290 +0x4b4
Nov 08 07:06:40 ollama[11902]: goroutine 33 gp=0xc000164380 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc93ca49d?, 0x1?, 0x76?, 0x48?, 0x5593135a5060?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000052750 sp=0xc000052730 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000527e0 sp=0xc000052750 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000527e8 sp=0xc0000527e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 66 gp=0xc00019a000 m=nil [IO wait]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x10?, 0x10?, 0xf0?, 0xc5?, 0xb?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc00004c5a8 sp=0xc00004c588 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.netpollblock(0x559312dd4558?, 0x12d36b26?, 0x93?)
Nov 08 07:06:40 ollama[11902]: runtime/netpoll.go:573 +0xf7 fp=0xc00004c5e0 sp=0xc00004c5a8 pc=0x559312d66257
Nov 08 07:06:40 ollama[11902]: internal/poll.runtime_pollWait(0x7f36dcfc9768, 0x72)
Nov 08 07:06:40 ollama[11902]: runtime/netpoll.go:345 +0x85 fp=0xc00004c600 sp=0xc00004c5e0 pc=0x559312d9aaa5
Nov 08 07:06:40 ollama[11902]: internal/poll.(*pollDesc).wait(0xc000152180?, 0xc00010af41?, 0x0)
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00004c628 sp=0xc00004c600 pc=0x559312dea9c7
Nov 08 07:06:40 ollama[11902]: internal/poll.(*pollDesc).waitRead(...)
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_poll_runtime.go:89
Nov 08 07:06:40 ollama[11902]: internal/poll.(*FD).Read(0xc000152180, {0xc00010af41, 0x1, 0x1})
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_unix.go:164 +0x27a fp=0xc00004c6c0 sp=0xc00004c628 pc=0x559312deb51a
Nov 08 07:06:40 ollama[11902]: net.(*netFD).Read(0xc000152180, {0xc00010af41?, 0xc00004c748?, 0x559312d9c6d0?})
Nov 08 07:06:40 ollama[11902]: net/fd_posix.go:55 +0x25 fp=0xc00004c708 sp=0xc00004c6c0 pc=0x559312e597a5
Nov 08 07:06:40 ollama[11902]: net.(*conn).Read(0xc0000540a8, {0xc00010af41?, 0x0?, 0x5593135a5060?})
Nov 08 07:06:40 ollama[11902]: net/net.go:185 +0x45 fp=0xc00004c750 sp=0xc00004c708 pc=0x559312e63a65
Nov 08 07:06:40 ollama[11902]: net.(*TCPConn).Read(0x55931347d840?, {0xc00010af41?, 0x0?, 0x0?})
Nov 08 07:06:40 ollama[11902]: <autogenerated>:1 +0x25 fp=0xc00004c780 sp=0xc00004c750 pc=0x559312e6f445
Nov 08 07:06:40 ollama[11902]: net/http.(*connReader).backgroundRead(0xc00010af30)
Nov 08 07:06:40 ollama[11902]: net/http/server.go:681 +0x37 fp=0xc00004c7c8 sp=0xc00004c780 pc=0x559312f7e1d7
Nov 08 07:06:40 ollama[11902]: net/http.(*connReader).startBackgroundRead.gowrap2()
Nov 08 07:06:40 ollama[11902]: net/http/server.go:677 +0x25 fp=0xc00004c7e0 sp=0xc00004c7c8 pc=0x559312f7e105
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc00004c7e8 sp=0xc00004c7e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by net/http.(*connReader).startBackgroundRead in goroutine 8
Nov 08 07:06:40 ollama[11902]: net/http/server.go:677 +0xba
Nov 08 07:06:40 ollama[11902]: goroutine 32 gp=0xc00008a380 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc93d3ed0?, 0xc00002e9d0?, 0x0?, 0x0?, 0x5593135a5060?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc0000b2750 sp=0xc0000b2730 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000b27e0 sp=0xc0000b2750 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000b27e8 sp=0xc0000b27e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 48 gp=0xc00008a540 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc93d3fdc?, 0x3?, 0x98?, 0x4d?, 0x5593135a5060?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc0000b2f50 sp=0xc0000b2f30 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000b2fe0 sp=0xc0000b2f50 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000b2fe8 sp=0xc0000b2fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 50 gp=0xc00008a700 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc93d3def?, 0xc00002e9d0?, 0x0?, 0x0?, 0x5593135a5060?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc0000b3750 sp=0xc0000b3730 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000b37e0 sp=0xc0000b3750 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000b37e8 sp=0xc0000b37e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 12 gp=0xc000164540 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000053750 sp=0xc000053730 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000537e0 sp=0xc000053750 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000537e8 sp=0xc0000537e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 13 gp=0xc000164700 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000053f50 sp=0xc000053f30 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc000053fe0 sp=0xc000053f50 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000053fe8 sp=0xc000053fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 14 gp=0xc0001648c0 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc93ca34d?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc0000ae750 sp=0xc0000ae730 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000ae7e0 sp=0xc0000ae750 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000ae7e8 sp=0xc0000ae7e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: rax 0x209e03fd4
Nov 08 07:06:40 ollama[11902]: rbx 0x7f364255c000
Nov 08 07:06:40 ollama[11902]: rcx 0xff5
Nov 08 07:06:40 ollama[11902]: rdx 0x7f3642118290
Nov 08 07:06:40 ollama[11902]: rdi 0x7f36421182a0
Nov 08 07:06:40 ollama[11902]: rsi 0x0
Nov 08 07:06:40 ollama[11902]: rbp 0x7f361ffd5240
Nov 08 07:06:40 ollama[11902]: rsp 0x7f361ffd5220
Nov 08 07:06:40 ollama[11902]: r8 0x1
Nov 08 07:06:40 ollama[11902]: r9 0x7f36423d2790
Nov 08 07:06:40 ollama[11902]: r10 0x0
Nov 08 07:06:40 ollama[11902]: r11 0x246
Nov 08 07:06:40 ollama[11902]: r12 0x7f34d7569420
Nov 08 07:06:40 ollama[11902]: r13 0x7f36421182a0
Nov 08 07:06:40 ollama[11902]: r14 0x0
Nov 08 07:06:40 ollama[11902]: r15 0x7f372f6497f0
Nov 08 07:06:40 ollama[11902]: rip 0x7f36e43b4c47
Nov 08 07:06:40 ollama[11902]: rflags 0x10297
Nov 08 07:06:40 ollama[11902]: cs 0x33
Nov 08 07:06:40 ollama[11902]: fs 0x0
Nov 08 07:06:40 ollama[11902]: gs 0x0
Nov 08 07:06:40 ollama[11902]: SIGABRT: abort
Nov 08 07:06:40 ollama[11902]: PC=0x7f36bec419fc m=9 sigcode=18446744073709551610
Nov 08 07:06:40 ollama[11902]: signal arrived during cgo execution
Nov 08 07:06:40 ollama[11902]: goroutine 7 gp=0xc000164000 m=9 mp=0xc0001a2808 [syscall]:
Nov 08 07:06:40 ollama[11902]: runtime.cgocall(0x559312fb4eb0, 0xc000065b60)
Nov 08 07:06:40 ollama[11902]: runtime/cgocall.go:157 +0x4b fp=0xc000065b38 sp=0xc000065b00 pc=0x559312d373cb
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7f365005b650, {0xe, 0x7f36503ad760, 0x0, 0x0, 0x7f36503adf70, 0x7f365019ef40, 0x7f365019f750, 0x7f3642c4d360, 0x0, ...})
Nov 08 07:06:40 ollama[11902]: _cgo_gotypes.go:543 +0x52 fp=0xc000065b60 sp=0xc000065b38 pc=0x559312e34952
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama.(*Context).Decode.func1(0x559312fb0ceb?, 0x7f365005b650?)
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/llama.go:167 +0xd8 fp=0xc000065c80 sp=0xc000065b60 pc=0x559312e36e78
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama.(*Context).Decode(0xc0000163c0?, 0x1?)
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/llama.go:167 +0x17 fp=0xc000065cc8 sp=0xc000065c80 pc=0x559312e36cd7
Nov 08 07:06:40 ollama[11902]: main.(*Server).processBatch(0xc000128120, 0xc0000c0000, 0xc0000c0070)
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:424 +0x29e fp=0xc000065ed0 sp=0xc000065cc8 pc=0x559312fafd1e
Nov 08 07:06:40 ollama[11902]: main.(*Server).run(0xc000128120, {0x5593132eea40, 0xc00007c0a0})
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:338 +0x1a5 fp=0xc000065fb8 sp=0xc000065ed0 pc=0x559312faf705
Nov 08 07:06:40 ollama[11902]: main.main.gowrap2()
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:907 +0x28 fp=0xc000065fe0 sp=0xc000065fb8 pc=0x559312fb3ee8
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000065fe8 sp=0xc000065fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by main.main in goroutine 1
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:907 +0xcab
Nov 08 07:06:40 ollama[11902]: goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0xc000030008?, 0x0?, 0xc0?, 0x61?, 0xc0000298c0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc00014d888 sp=0xc00014d868 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.netpollblock(0xc000029920?, 0x12d36b26?, 0x93?)
Nov 08 07:06:40 ollama[11902]: runtime/netpoll.go:573 +0xf7 fp=0xc00014d8c0 sp=0xc00014d888 pc=0x559312d66257
Nov 08 07:06:40 ollama[11902]: internal/poll.runtime_pollWait(0x7f36dcfc9860, 0x72)
Nov 08 07:06:40 ollama[11902]: runtime/netpoll.go:345 +0x85 fp=0xc00014d8e0 sp=0xc00014d8c0 pc=0x559312d9aaa5
Nov 08 07:06:40 ollama[11902]: internal/poll.(*pollDesc).wait(0x3?, 0x3fe?, 0x0)
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00014d908 sp=0xc00014d8e0 pc=0x559312dea9c7
Nov 08 07:06:40 ollama[11902]: internal/poll.(*pollDesc).waitRead(...)
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_poll_runtime.go:89
Nov 08 07:06:40 ollama[11902]: internal/poll.(*FD).Accept(0xc000152100)
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_unix.go:611 +0x2ac fp=0xc00014d9b0 sp=0xc00014d908 pc=0x559312debe8c
Nov 08 07:06:40 ollama[11902]: net.(*netFD).accept(0xc000152100)
Nov 08 07:06:40 ollama[11902]: net/fd_unix.go:172 +0x29 fp=0xc00014da68 sp=0xc00014d9b0 pc=0x559312e5a8a9
Nov 08 07:06:40 ollama[11902]: net.(*TCPListener).accept(0xc000070200)
Nov 08 07:06:40 ollama[11902]: net/tcpsock_posix.go:159 +0x1e fp=0xc00014da90 sp=0xc00014da68 pc=0x559312e6b5de
Nov 08 07:06:40 ollama[11902]: net.(*TCPListener).Accept(0xc000070200)
Nov 08 07:06:40 ollama[11902]: net/tcpsock.go:327 +0x30 fp=0xc00014dac0 sp=0xc00014da90 pc=0x559312e6a930
Nov 08 07:06:40 ollama[11902]: net/http.(*onceCloseListener).Accept(0xc0001281b0?)
Nov 08 07:06:40 ollama[11902]: <autogenerated>:1 +0x24 fp=0xc00014dad8 sp=0xc00014dac0 pc=0x559312f91a44
Nov 08 07:06:40 ollama[11902]: net/http.(*Server).Serve(0xc0001620f0, {0x5593132ee400, 0xc000070200})
Nov 08 07:06:40 ollama[11902]: net/http/server.go:3260 +0x33e fp=0xc00014dc08 sp=0xc00014dad8 pc=0x559312f8885e
Nov 08 07:06:40 ollama[11902]: main.main()
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:927 +0x104c fp=0xc00014df50 sp=0xc00014dc08 pc=0x559312fb3c6c
Nov 08 07:06:40 ollama[11902]: runtime.main()
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:271 +0x29d fp=0xc00014dfe0 sp=0xc00014df50 pc=0x559312d6dbdd
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc00014dfe8 sp=0xc00014dfe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000050fa8 sp=0xc000050f88 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.goparkunlock(...)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:408
Nov 08 07:06:40 ollama[11902]: runtime.forcegchelper()
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:326 +0xb8 fp=0xc000050fe0 sp=0xc000050fa8 pc=0x559312d6de98
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000050fe8 sp=0xc000050fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.init.6 in goroutine 1
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:314 +0x1a
Nov 08 07:06:40 ollama[11902]: goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000051780 sp=0xc000051760 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.goparkunlock(...)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:408
Nov 08 07:06:40 ollama[11902]: runtime.bgsweep(0xc000022070)
Nov 08 07:06:40 ollama[11902]: runtime/mgcsweep.go:318 +0xdf fp=0xc0000517c8 sp=0xc000051780 pc=0x559312d58b9f
Nov 08 07:06:40 ollama[11902]: runtime.gcenable.gowrap1()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:203 +0x25 fp=0xc0000517e0 sp=0xc0000517c8 pc=0x559312d4d685
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000517e8 sp=0xc0000517e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcenable in goroutine 1
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:203 +0x66
Nov 08 07:06:40 ollama[11902]: goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x10000?, 0x5593131eff98?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000051f78 sp=0xc000051f58 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.goparkunlock(...)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:408
Nov 08 07:06:40 ollama[11902]: runtime.(*scavengerState).park(0x5593134bc4c0)
Nov 08 07:06:40 ollama[11902]: runtime/mgcscavenge.go:425 +0x49 fp=0xc000051fa8 sp=0xc000051f78 pc=0x559312d56549
Nov 08 07:06:40 ollama[11902]: runtime.bgscavenge(0xc000022070)
Nov 08 07:06:40 ollama[11902]: runtime/mgcscavenge.go:658 +0x59 fp=0xc000051fc8 sp=0xc000051fa8 pc=0x559312d56af9
Nov 08 07:06:40 ollama[11902]: runtime.gcenable.gowrap2()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:204 +0x25 fp=0xc000051fe0 sp=0xc000051fc8 pc=0x559312d4d625
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000051fe8 sp=0xc000051fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcenable in goroutine 1
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:204 +0xa5
Nov 08 07:06:40 ollama[11902]: goroutine 5 gp=0xc000007c00 m=nil [finalizer wait]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x0?, 0x5593132ea600?, 0x0?, 0xc0?, 0x1000000010?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000050620 sp=0xc000050600 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.runfinq()
Nov 08 07:06:40 ollama[11902]: runtime/mfinal.go:194 +0x107 fp=0xc0000507e0 sp=0xc000050620 pc=0x559312d4c6c7
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000507e8 sp=0xc0000507e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.createfing in goroutine 1
Nov 08 07:06:40 ollama[11902]: runtime/mfinal.go:164 +0x3d
Nov 08 07:06:40 ollama[11902]: goroutine 49 gp=0xc000007dc0 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc2b5abd3?, 0xc00002e9d0?, 0x0?, 0x0?, 0x5593135a5060?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000052f50 sp=0xc000052f30 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc000052fe0 sp=0xc000052f50 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000052fe8 sp=0xc000052fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 8 gp=0xc0001641c0 m=nil [select]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0xc000029a80?, 0x2?, 0x60?, 0x0?, 0xc000029824?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000029698 sp=0xc000029678 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.selectgo(0xc000029a80, 0xc000029820, 0x14?, 0x0, 0x1?, 0x1)
Nov 08 07:06:40 ollama[11902]: runtime/select.go:327 +0x725 fp=0xc0000297b8 sp=0xc000029698 pc=0x559312d7f3e5
Nov 08 07:06:40 ollama[11902]: main.(*Server).completion(0xc000128120, {0x5593132ee5b0, 0xc0001507e0}, 0xc00012a900)
Nov 08 07:06:40 ollama[11902]: github.com/ollama/ollama/llama/runner/runner.go:652 +0x8fe fp=0xc000029ab8 sp=0xc0000297b8 pc=0x559312fb167e
Nov 08 07:06:40 ollama[11902]: main.(*Server).completion-fm({0x5593132ee5b0?, 0xc0001507e0?}, 0x559312f8cb8d?)
Nov 08 07:06:40 ollama[11902]: <autogenerated>:1 +0x36 fp=0xc000029ae8 sp=0xc000029ab8 pc=0x559312fb46d6
Nov 08 07:06:40 ollama[11902]: net/http.HandlerFunc.ServeHTTP(0xc00010cc30?, {0x5593132ee5b0?, 0xc0001507e0?}, 0x10?)
Nov 08 07:06:40 ollama[11902]: net/http/server.go:2171 +0x29 fp=0xc000029b10 sp=0xc000029ae8 pc=0x559312f85629
Nov 08 07:06:40 ollama[11902]: net/http.(*ServeMux).ServeHTTP(0x559312d40f85?, {0x5593132ee5b0, 0xc0001507e0}, 0xc00012a900)
Nov 08 07:06:40 ollama[11902]: net/http/server.go:2688 +0x1ad fp=0xc000029b60 sp=0xc000029b10 pc=0x559312f874ad
Nov 08 07:06:40 ollama[11902]: net/http.serverHandler.ServeHTTP({0x5593132ed900?}, {0x5593132ee5b0?, 0xc0001507e0?}, 0x6?)
Nov 08 07:06:40 ollama[11902]: net/http/server.go:3142 +0x8e fp=0xc000029b90 sp=0xc000029b60 pc=0x559312f884ce
Nov 08 07:06:40 ollama[11902]: net/http.(*conn).serve(0xc0001281b0, {0x5593132eea08, 0xc00010ae10})
Nov 08 07:06:40 ollama[11902]: net/http/server.go:2044 +0x5e8 fp=0xc000029fb8 sp=0xc000029b90 pc=0x559312f84268
Nov 08 07:06:40 ollama[11902]: net/http.(*Server).Serve.gowrap3()
Nov 08 07:06:40 ollama[11902]: net/http/server.go:3290 +0x28 fp=0xc000029fe0 sp=0xc000029fb8 pc=0x559312f88c48
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000029fe8 sp=0xc000029fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by net/http.(*Server).Serve in goroutine 1
Nov 08 07:06:40 ollama[11902]: net/http/server.go:3290 +0x4b4
Nov 08 07:06:40 ollama[11902]: goroutine 33 gp=0xc000164380 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc93ca49d?, 0x1?, 0x76?, 0x48?, 0x5593135a5060?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000052750 sp=0xc000052730 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000527e0 sp=0xc000052750 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000527e8 sp=0xc0000527e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 66 gp=0xc00019a000 m=nil [IO wait]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x10?, 0x10?, 0xf0?, 0xc5?, 0xb?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc00004c5a8 sp=0xc00004c588 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.netpollblock(0x559312dd4558?, 0x12d36b26?, 0x93?)
Nov 08 07:06:40 ollama[11902]: runtime/netpoll.go:573 +0xf7 fp=0xc00004c5e0 sp=0xc00004c5a8 pc=0x559312d66257
Nov 08 07:06:40 ollama[11902]: internal/poll.runtime_pollWait(0x7f36dcfc9768, 0x72)
Nov 08 07:06:40 ollama[11902]: runtime/netpoll.go:345 +0x85 fp=0xc00004c600 sp=0xc00004c5e0 pc=0x559312d9aaa5
Nov 08 07:06:40 ollama[11902]: internal/poll.(*pollDesc).wait(0xc000152180?, 0xc00010af41?, 0x0)
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00004c628 sp=0xc00004c600 pc=0x559312dea9c7
Nov 08 07:06:40 ollama[11902]: internal/poll.(*pollDesc).waitRead(...)
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_poll_runtime.go:89
Nov 08 07:06:40 ollama[11902]: internal/poll.(*FD).Read(0xc000152180, {0xc00010af41, 0x1, 0x1})
Nov 08 07:06:40 ollama[11902]: internal/poll/fd_unix.go:164 +0x27a fp=0xc00004c6c0 sp=0xc00004c628 pc=0x559312deb51a
Nov 08 07:06:40 ollama[11902]: net.(*netFD).Read(0xc000152180, {0xc00010af41?, 0xc00004c748?, 0x559312d9c6d0?})
Nov 08 07:06:40 ollama[11902]: net/fd_posix.go:55 +0x25 fp=0xc00004c708 sp=0xc00004c6c0 pc=0x559312e597a5
Nov 08 07:06:40 ollama[11902]: net.(*conn).Read(0xc0000540a8, {0xc00010af41?, 0x0?, 0x5593135a5060?})
Nov 08 07:06:40 ollama[11902]: net/net.go:185 +0x45 fp=0xc00004c750 sp=0xc00004c708 pc=0x559312e63a65
Nov 08 07:06:40 ollama[11902]: net.(*TCPConn).Read(0x55931347d840?, {0xc00010af41?, 0x0?, 0x0?})
Nov 08 07:06:40 ollama[11902]: <autogenerated>:1 +0x25 fp=0xc00004c780 sp=0xc00004c750 pc=0x559312e6f445
Nov 08 07:06:40 ollama[11902]: net/http.(*connReader).backgroundRead(0xc00010af30)
Nov 08 07:06:40 ollama[11902]: net/http/server.go:681 +0x37 fp=0xc00004c7c8 sp=0xc00004c780 pc=0x559312f7e1d7
Nov 08 07:06:40 ollama[11902]: net/http.(*connReader).startBackgroundRead.gowrap2()
Nov 08 07:06:40 ollama[11902]: net/http/server.go:677 +0x25 fp=0xc00004c7e0 sp=0xc00004c7c8 pc=0x559312f7e105
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc00004c7e8 sp=0xc00004c7e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by net/http.(*connReader).startBackgroundRead in goroutine 8
Nov 08 07:06:40 ollama[11902]: net/http/server.go:677 +0xba
Nov 08 07:06:40 ollama[11902]: goroutine 32 gp=0xc00008a380 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc93d3ed0?, 0xc00002e9d0?, 0x0?, 0x0?, 0x5593135a5060?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc0000b2750 sp=0xc0000b2730 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000b27e0 sp=0xc0000b2750 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000b27e8 sp=0xc0000b27e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 48 gp=0xc00008a540 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc93d3fdc?, 0x3?, 0x98?, 0x4d?, 0x5593135a5060?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc0000b2f50 sp=0xc0000b2f30 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000b2fe0 sp=0xc0000b2f50 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000b2fe8 sp=0xc0000b2fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 50 gp=0xc00008a700 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc93d3def?, 0xc00002e9d0?, 0x0?, 0x0?, 0x5593135a5060?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc0000b3750 sp=0xc0000b3730 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000b37e0 sp=0xc0000b3750 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000b37e8 sp=0xc0000b37e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 12 gp=0xc000164540 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000053750 sp=0xc000053730 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000537e0 sp=0xc000053750 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000537e8 sp=0xc0000537e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 13 gp=0xc000164700 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc000053f50 sp=0xc000053f30 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc000053fe0 sp=0xc000053f50 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000053fe8 sp=0xc000053fe0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: goroutine 14 gp=0xc0001648c0 m=nil [GC worker (idle)]:
Nov 08 07:06:40 ollama[11902]: runtime.gopark(0x2736bc93ca34d?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 08 07:06:40 ollama[11902]: runtime/proc.go:402 +0xce fp=0xc0000ae750 sp=0xc0000ae730 pc=0x559312d6e00e
Nov 08 07:06:40 ollama[11902]: runtime.gcBgMarkWorker()
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1310 +0xe5 fp=0xc0000ae7e0 sp=0xc0000ae750 pc=0x559312d4f585
Nov 08 07:06:40 ollama[11902]: runtime.goexit({})
Nov 08 07:06:40 ollama[11902]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000ae7e8 sp=0xc0000ae7e0 pc=0x559312d9fde1
Nov 08 07:06:40 ollama[11902]: created by runtime.gcBgMarkStartWorkers in goroutine 8
Nov 08 07:06:40 ollama[11902]: runtime/mgc.go:1234 +0x1c
Nov 08 07:06:40 ollama[11902]: rax 0x0
Nov 08 07:06:40 ollama[11902]: rbx 0x7f361fffe000
Nov 08 07:06:40 ollama[11902]: rcx 0x7f36bec419fc
Nov 08 07:06:40 ollama[11902]: rdx 0x6
Nov 08 07:06:40 ollama[11902]: rdi 0x839d
Nov 08 07:06:40 ollama[11902]: rsi 0x83aa
Nov 08 07:06:40 ollama[11902]: rbp 0x83aa
Nov 08 07:06:40 ollama[11902]: rsp 0x7f361ffd52b0
Nov 08 07:06:40 ollama[11902]: r8 0x7f361ffd5380
Nov 08 07:06:40 ollama[11902]: r9 0x7f361ffd5350
Nov 08 07:06:40 ollama[11902]: r10 0x8
Nov 08 07:06:40 ollama[11902]: r11 0x246
Nov 08 07:06:40 ollama[11902]: r12 0x6
Nov 08 07:06:40 ollama[11902]: r13 0x16
Nov 08 07:06:40 ollama[11902]: r14 0x7f36e609f54b
Nov 08 07:06:40 ollama[11902]: r15 0x7f365005b650
Nov 08 07:06:40 ollama[11902]: rip 0x7f36bec419fc
Nov 08 07:06:40 ollama[11902]: rflags 0x246
Nov 08 07:06:40 ollama[11902]: cs 0x33
Nov 08 07:06:40 ollama[11902]: fs 0x0
Nov 08 07:06:40 ollama[11902]: gs 0x0
```
**System Information**:
**OS**: Ubuntu 22.04.5 LTS
**CPU**: Intel(R) Core(TM) i3-10300 CPU @ 3.70GHz
**GPU**: 2x NVIDIA GeForce RTX 3070
**Driver Version**: 550.120
**CUDA Version**: 12.4
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.0
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7568/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/565
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/565/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/565/comments
|
https://api.github.com/repos/ollama/ollama/issues/565/events
|
https://github.com/ollama/ollama/pull/565
| 1,907,314,573
|
PR_kwDOJ0Z1Ps5a6FaR
| 565
|
Add support for GBNF grammar definitions
|
{
"login": "SyrupThinker",
"id": 7753242,
"node_id": "MDQ6VXNlcjc3NTMyNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7753242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SyrupThinker",
"html_url": "https://github.com/SyrupThinker",
"followers_url": "https://api.github.com/users/SyrupThinker/followers",
"following_url": "https://api.github.com/users/SyrupThinker/following{/other_user}",
"gists_url": "https://api.github.com/users/SyrupThinker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SyrupThinker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SyrupThinker/subscriptions",
"organizations_url": "https://api.github.com/users/SyrupThinker/orgs",
"repos_url": "https://api.github.com/users/SyrupThinker/repos",
"events_url": "https://api.github.com/users/SyrupThinker/events{/privacy}",
"received_events_url": "https://api.github.com/users/SyrupThinker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 11
| 2023-09-21T16:05:40
| 2024-08-07T16:57:48
| 2024-07-29T09:31:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/565",
"html_url": "https://github.com/ollama/ollama/pull/565",
"diff_url": "https://github.com/ollama/ollama/pull/565.diff",
"patch_url": "https://github.com/ollama/ollama/pull/565.patch",
"merged_at": null
}
|
This PR exposes the llama.cpp `grammar` parameter in the generate API.
It allows the user to provide a [GBNF grammar](https://github.com/ggerganov/llama.cpp/tree/master/grammars) to constrain the output of an LLM.
This can be used to, for example, reliably generate structured data like JSON:
```
>>> Generate a list of 5 random mock users that contain a firstname, lastname, birthday, created_at and email field. The created_at field should be RFC3339 and lie in the range of 2000 to 2020. Emails should use multiple subdomains under the example.com domain. The result should be in a JSON object with a users key.
{
"users": [
{
"firstname": "Emma",
"lastname": "Brown",
"birthday": "1993-08-12T00:00:00Z",
"created_at": "2017-04-15T13:00:00Z",
"email": "emma.brown@example.co.uk"
},
{
"firstname": "Olivia",
"lastname": "Jones",
"birthday": "1996-03-25T00:00:00Z",
"created_at": "2018-02-17T14:00:00Z",
"email": "olivia.jones@example.edu"
},
{
"firstname": "Ava",
"lastname": "Smith",
"birthday": "1997-08-24T00:00:00Z",
"created_at": "2019-05-12T15:00:00Z",
"email": "ava.smith@example.net"
},
{
"firstname": "Sophia",
"lastname": "Johnson",
"birthday": "1998-04-26T00:00:00Z",
"created_at": "2020-03-07T16:00:00Z",
"email": "sophia.johnson@example.org"
},
{
"firstname": "Mia",
"lastname": "Williams",
"birthday": "1999-07-23T00:00:00Z",
"created_at": "2020-12-08T17:00:00Z",
"email": "mia.williams@example.com"
}
]
}
>>> Create a JSON object that contains the latest birthday and the earliest created_at date, omit the time.
{
"latest_birthday": "1999-07-23",
"earliest_created_at": "2000-01-01"
}
```
*Generated with the examples/json Modelfile, first attempt, not cherry-picked*
**A note for potential users**
The generated documents are valid JSON on the first try, without extra tuning.
But note how the LLM used different TLDs rather than subdomains, and the `earliest_created_at` is not as intended; the instruction is ambiguous.
This only ensures that the grammar is followed; the semantics might still be wrong.
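As a point of reference, here is a minimal GBNF grammar in the syntax documented in the llama.cpp grammars directory linked above. This fragment is illustrative only and not taken from the PR; it constrains the model to a bare yes/no answer:

```
root ::= ("yes" | "no")
```

Passing such a grammar through the new `grammar` parameter restricts token sampling to strings the grammar accepts.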
|
{
"login": "SyrupThinker",
"id": 7753242,
"node_id": "MDQ6VXNlcjc3NTMyNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7753242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SyrupThinker",
"html_url": "https://github.com/SyrupThinker",
"followers_url": "https://api.github.com/users/SyrupThinker/followers",
"following_url": "https://api.github.com/users/SyrupThinker/following{/other_user}",
"gists_url": "https://api.github.com/users/SyrupThinker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SyrupThinker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SyrupThinker/subscriptions",
"organizations_url": "https://api.github.com/users/SyrupThinker/orgs",
"repos_url": "https://api.github.com/users/SyrupThinker/repos",
"events_url": "https://api.github.com/users/SyrupThinker/events{/privacy}",
"received_events_url": "https://api.github.com/users/SyrupThinker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/565/reactions",
"total_count": 70,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 23,
"confused": 0,
"heart": 25,
"rocket": 10,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/565/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4584
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4584/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4584/comments
|
https://api.github.com/repos/ollama/ollama/issues/4584/events
|
https://github.com/ollama/ollama/issues/4584
| 2,311,923,928
|
I_kwDOJ0Z1Ps6JzSjY
| 4,584
|
FileNotFoundError when running convert.py script
|
{
"login": "hhtao",
"id": 9322608,
"node_id": "MDQ6VXNlcjkzMjI2MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9322608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hhtao",
"html_url": "https://github.com/hhtao",
"followers_url": "https://api.github.com/users/hhtao/followers",
"following_url": "https://api.github.com/users/hhtao/following{/other_user}",
"gists_url": "https://api.github.com/users/hhtao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hhtao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hhtao/subscriptions",
"organizations_url": "https://api.github.com/users/hhtao/orgs",
"repos_url": "https://api.github.com/users/hhtao/repos",
"events_url": "https://api.github.com/users/hhtao/events{/privacy}",
"received_events_url": "https://api.github.com/users/hhtao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-23T04:38:52
| 2024-05-23T04:38:52
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am trying to convert a model using the convert.py script provided in the llama.cpp repository. However, I am encountering a FileNotFoundError. The script is unable to find the tokenizer file, even though tokenizer.json is present in the model directory. The error message is as follows:
```
python /mnt/part1/ollama/llm/llama.cpp/convert.py /mnt/part1/models/Qwen1.5-7B-Chat --outtype f16 --outfile converted.bin
INFO:convert:Loading model file /mnt/part1/models/Qwen1.5-7B-Chat/model-00001-of-00004.safetensors
INFO:convert:Loading model file /mnt/part1/models/Qwen1.5-7B-Chat/model-00001-of-00004.safetensors
INFO:convert:Loading model file /mnt/part1/models/Qwen1.5-7B-Chat/model-00002-of-00004.safetensors
INFO:convert:Loading model file /mnt/part1/models/Qwen1.5-7B-Chat/model-00003-of-00004.safetensors
INFO:convert:Loading model file /mnt/part1/models/Qwen1.5-7B-Chat/model-00004-of-00004.safetensors
INFO:convert:model parameters count : 7721324544 (8B)
INFO:convert:params = Params(n_vocab=151936, n_embd=4096, n_layer=32, n_ctx=32768, n_ff=11008, n_head=32, n_head_kv=32, n_experts=None, n_experts_used=None, f_norm_eps=1e-06, rope_scaling_type=None, f_rope_freq_base=1000000.0, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=<GGMLFileType.MostlyF16: 1>, path_model=PosixPath('/mnt/part1/models/Qwen1.5-7B-Chat'))
Traceback (most recent call last):
  File "/mnt/part1/ollama/llm/llama.cpp/convert.py", line 1714, in <module>
    main()
  File "/mnt/part1/ollama/llm/llama.cpp/convert.py", line 1671, in main
    vocab, special_vocab = vocab_factory.load_vocab(vocab_types, model_parent_path)
  File "/mnt/part1/ollama/llm/llama.cpp/convert.py", line 1522, in load_vocab
    vocab = self._create_vocab_by_path(vocab_types)
  File "/mnt/part1/ollama/llm/llama.cpp/convert.py", line 1512, in _create_vocab_by_path
    raise FileNotFoundError(f"Could not find a tokenizer matching any of {vocab_types}")
FileNotFoundError: Could not find a tokenizer matching any of ['spm', 'hfft']
```
The model directory contains:
```
/mnt/part1/models/Qwen1.5-7B-Chat/
├── config.json
├── LICENSE
├── merges.txt
├── model-00001-of-00004.safetensors
├── model-00002-of-00004.safetensors
├── model-00003-of-00004.safetensors
├── model-00004-of-00004.safetensors
├── model.safetensors.index.json
├── README.md
├── tokenizer_config.json
├── tokenizer.json
├── vocab.json
```
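The traceback shows the script only tried the `spm` and `hfft` vocab loaders, while the directory listing above contains the files of a GPT-style BPE tokenizer (`vocab.json` plus `merges.txt`). As a rough illustration of that mismatch, here is a small sketch that reports which tokenizer families a model directory appears to contain; the marker-file mapping is an assumption for illustration, not taken from convert.py:

```python
import os

# Heuristic mapping from tokenizer family to the files that identify it.
# The family names mirror the vocab_types seen in the traceback; the file
# markers are assumptions for this sketch, not convert.py's actual logic.
VOCAB_MARKERS = {
    'spm': ['tokenizer.model'],            # SentencePiece
    'bpe': ['vocab.json', 'merges.txt'],   # GPT-style BPE (what Qwen ships)
    'hfft': ['tokenizer.json'],            # HuggingFace fast tokenizer
}


def detect_vocab_types(model_dir):
    """Return the tokenizer families whose marker files all exist in model_dir."""
    files = set(os.listdir(model_dir))
    return [name for name, markers in VOCAB_MARKERS.items()
            if all(m in files for m in markers)]
```

Run against the Qwen directory above, this would report `bpe` and `hfft` but not `spm`, which is consistent with the script failing when restricted to `['spm', 'hfft']` loaders that cannot handle this layout.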
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.1.33
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4584/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8682
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8682/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8682/comments
|
https://api.github.com/repos/ollama/ollama/issues/8682/events
|
https://github.com/ollama/ollama/issues/8682
| 2,819,668,792
|
I_kwDOJ0Z1Ps6oELs4
| 8,682
|
GIN mode is hard-coded to debug mode
|
{
"login": "yoonsio",
"id": 24367477,
"node_id": "MDQ6VXNlcjI0MzY3NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/24367477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoonsio",
"html_url": "https://github.com/yoonsio",
"followers_url": "https://api.github.com/users/yoonsio/followers",
"following_url": "https://api.github.com/users/yoonsio/following{/other_user}",
"gists_url": "https://api.github.com/users/yoonsio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoonsio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoonsio/subscriptions",
"organizations_url": "https://api.github.com/users/yoonsio/orgs",
"repos_url": "https://api.github.com/users/yoonsio/repos",
"events_url": "https://api.github.com/users/yoonsio/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoonsio/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-30T01:04:50
| 2025-01-30T01:05:37
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Gin mode is hard-coded to `gin.DebugMode` and ignores the `GIN_MODE` environment variable.
As a result, the server always prints this warning on startup:
```
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
```
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
master
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8682/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8338
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8338/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8338/comments
|
https://api.github.com/repos/ollama/ollama/issues/8338/events
|
https://github.com/ollama/ollama/issues/8338
| 2,773,378,924
|
I_kwDOJ0Z1Ps6lTmds
| 8,338
|
Ollama structured outputs not working on Windows
|
{
"login": "mansibm6",
"id": 63543775,
"node_id": "MDQ6VXNlcjYzNTQzNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/63543775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mansibm6",
"html_url": "https://github.com/mansibm6",
"followers_url": "https://api.github.com/users/mansibm6/followers",
"following_url": "https://api.github.com/users/mansibm6/following{/other_user}",
"gists_url": "https://api.github.com/users/mansibm6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mansibm6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mansibm6/subscriptions",
"organizations_url": "https://api.github.com/users/mansibm6/orgs",
"repos_url": "https://api.github.com/users/mansibm6/repos",
"events_url": "https://api.github.com/users/mansibm6/events{/privacy}",
"received_events_url": "https://api.github.com/users/mansibm6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2025-01-07T17:26:14
| 2025-01-08T23:01:49
| 2025-01-08T22:56:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Any workaround for this issue on Windows?
```python
from ollama import chat
from pydantic import BaseModel


class Country(BaseModel):
    name: str
    capital: str
    languages: list[str]


response = chat(
    messages=[
        {
            'role': 'user',
            'content': 'Tell me about Canada.',
        }
    ],
    model='llama3.1',
    format=Country.model_json_schema(),
)

country = Country.model_validate_json(response.message.content)
print(country)
```
This is the example given by Ollama for using their structured outputs feature. All the dependencies are installed and Ollama is updated to the latest version. It works fine on macOS, but on Windows it fails with the error: `'dict' object has no attribute 'message'`.
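That error usually means the client returned a plain dict rather than a typed response object. If upgrading the `ollama` Python package is not an option, a possible workaround (assuming the older client returns a dict with the same `message`/`content` keys, which is an assumption here) is to fall back to dict indexing:

```python
import json

# Hypothetical response shape: an older ollama Python client returning a
# plain dict, where attribute access raises
# AttributeError: 'dict' object has no attribute 'message'.
response = {'message': {'content':
    '{"name": "Canada", "capital": "Ottawa", "languages": ["English", "French"]}'}}

# Workaround sketch: try attribute access first, fall back to dict indexing.
try:
    content = response.message.content
except AttributeError:
    content = response['message']['content']

data = json.loads(content)
print(data['capital'])  # Ottawa
```

With this fallback the same script can run against either client version; the Pydantic validation step (`Country.model_validate_json(content)`) is unchanged.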
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8338/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4239
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4239/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4239/comments
|
https://api.github.com/repos/ollama/ollama/issues/4239/events
|
https://github.com/ollama/ollama/issues/4239
| 2,284,320,553
|
I_kwDOJ0Z1Ps6IJ_cp
| 4,239
|
AMD Vega64 gfx900 not supported on Windows
|
{
"login": "bryndin",
"id": 1129396,
"node_id": "MDQ6VXNlcjExMjkzOTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1129396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryndin",
"html_url": "https://github.com/bryndin",
"followers_url": "https://api.github.com/users/bryndin/followers",
"following_url": "https://api.github.com/users/bryndin/following{/other_user}",
"gists_url": "https://api.github.com/users/bryndin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryndin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryndin/subscriptions",
"organizations_url": "https://api.github.com/users/bryndin/orgs",
"repos_url": "https://api.github.com/users/bryndin/repos",
"events_url": "https://api.github.com/users/bryndin/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryndin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-05-07T22:20:59
| 2024-07-22T16:49:12
| 2024-07-22T16:49:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama lists the Vega 64 in the announcement, but at runtime it reports the GPU as unsupported.
Tried with a fresh install of Ollama 0.1.33 and 0.1.34 on Windows 10.
Also tried installing HIP (`AMD-Software-PRO-Edition-23.Q4-Win10-Win11-For-HIP.exe`), with no success.
```
time=2024-05-07T15:12:43.123-07:00 level=INFO source=gpu.go:122 msg="Detecting GPUs"
time=2024-05-07T15:12:43.159-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-07T15:12:43.178-07:00 level=INFO source=amd_windows.go:39 msg="AMD Driver: 50731541"
time=2024-05-07T15:12:43.181-07:00 level=INFO source=amd_windows.go:68 msg="detected hip devices" count=1
time=2024-05-07T15:12:43.181-07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=0 name="Radeon RX Vega" gfx=gfx900:xnack-
time=2024-05-07T15:12:43.181-07:00 level=WARN source=amd_windows.go:104 msg="amdgpu is not supported" gpu=0 gpu_type=gfx900:xnack- library="C:\\Program Files\\AMD\\ROCm\\5.7\\bin" supported_types="[gfx1030 gfx1100 gfx1101 gfx1102 gfx906]"
time=2024-05-07T15:12:43.181-07:00 level=WARN source=amd_windows.go:106 msg="See https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for HSA_OVERRIDE_GFX_VERSION usage"
```
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.34
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4239/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5768
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5768/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5768/comments
|
https://api.github.com/repos/ollama/ollama/issues/5768/events
|
https://github.com/ollama/ollama/issues/5768
| 2,416,269,008
|
I_kwDOJ0Z1Ps6QBVbQ
| 5,768
|
ollama serve only works with llama3; other models like gemma return a 404 error
|
{
"login": "RakshitAralimatti",
"id": 170917018,
"node_id": "U_kgDOCi_8mg",
"avatar_url": "https://avatars.githubusercontent.com/u/170917018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RakshitAralimatti",
"html_url": "https://github.com/RakshitAralimatti",
"followers_url": "https://api.github.com/users/RakshitAralimatti/followers",
"following_url": "https://api.github.com/users/RakshitAralimatti/following{/other_user}",
"gists_url": "https://api.github.com/users/RakshitAralimatti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RakshitAralimatti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RakshitAralimatti/subscriptions",
"organizations_url": "https://api.github.com/users/RakshitAralimatti/orgs",
"repos_url": "https://api.github.com/users/RakshitAralimatti/repos",
"events_url": "https://api.github.com/users/RakshitAralimatti/events{/privacy}",
"received_events_url": "https://api.github.com/users/RakshitAralimatti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 14
| 2024-07-18T12:45:29
| 2024-11-06T01:05:49
| 2024-11-06T01:05:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Earlier I downloaded Llama 3, served it with `ollama serve`, and made API calls from Python.
Now I have downloaded Gemma 2, but when I set the model to `gemma2` in the API call I get a 404, while `llama3` still works fine.
Thanks in advance.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5768/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3889
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3889/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3889/comments
|
https://api.github.com/repos/ollama/ollama/issues/3889/events
|
https://github.com/ollama/ollama/pull/3889
| 2,262,084,676
|
PR_kwDOJ0Z1Ps5tpT59
| 3,889
|
Remove trailing spaces
|
{
"login": "bsdnet",
"id": 4400805,
"node_id": "MDQ6VXNlcjQ0MDA4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4400805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bsdnet",
"html_url": "https://github.com/bsdnet",
"followers_url": "https://api.github.com/users/bsdnet/followers",
"following_url": "https://api.github.com/users/bsdnet/following{/other_user}",
"gists_url": "https://api.github.com/users/bsdnet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bsdnet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bsdnet/subscriptions",
"organizations_url": "https://api.github.com/users/bsdnet/orgs",
"repos_url": "https://api.github.com/users/bsdnet/repos",
"events_url": "https://api.github.com/users/bsdnet/events{/privacy}",
"received_events_url": "https://api.github.com/users/bsdnet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-24T20:00:05
| 2024-04-25T18:32:26
| 2024-04-25T18:32:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3889",
"html_url": "https://github.com/ollama/ollama/pull/3889",
"diff_url": "https://github.com/ollama/ollama/pull/3889.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3889.patch",
"merged_at": "2024-04-25T18:32:26"
}
|
Remove trailing spaces in the bash scripts
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3889/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7909
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7909/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7909/comments
|
https://api.github.com/repos/ollama/ollama/issues/7909/events
|
https://github.com/ollama/ollama/issues/7909
| 2,711,487,468
|
I_kwDOJ0Z1Ps6hngPs
| 7,909
|
ollama run quentinz/bge-large-zh-v1.5:latest Error: "quentinz/bge-large-zh-v1.5:latest" does not support generate
|
{
"login": "cqray1990",
"id": 32585434,
"node_id": "MDQ6VXNlcjMyNTg1NDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/32585434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cqray1990",
"html_url": "https://github.com/cqray1990",
"followers_url": "https://api.github.com/users/cqray1990/followers",
"following_url": "https://api.github.com/users/cqray1990/following{/other_user}",
"gists_url": "https://api.github.com/users/cqray1990/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cqray1990/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cqray1990/subscriptions",
"organizations_url": "https://api.github.com/users/cqray1990/orgs",
"repos_url": "https://api.github.com/users/cqray1990/repos",
"events_url": "https://api.github.com/users/cqray1990/events{/privacy}",
"received_events_url": "https://api.github.com/users/cqray1990/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-12-02T10:51:54
| 2024-12-14T15:37:49
| 2024-12-14T15:37:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama pull quentinz/bge-large-zh-v1.5
When starting quentinz/bge-large-zh-v1.5:latest, the following error is raised:
ollama run quentinz/bge-large-zh-v1.5:latest
Error: "quentinz/bge-large-zh-v1.5:latest" does not support generate
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
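
Note: bge-large-zh-v1.5 is an embedding model, so `ollama run` (which drives text generation) rejects it; it is meant to be called through the embeddings endpoint instead. A minimal sketch, assuming a default local Ollama install at `localhost:11434` and the model tag from this report:

```python
import json
import urllib.request

def build_embeddings_request(text, model="quentinz/bge-large-zh-v1.5:latest"):
    # /api/embeddings takes the model tag and a prompt; it does not accept
    # generate/chat-style requests, which is why `ollama run` fails here.
    return {"model": model, "prompt": text}

def embed(text, host="http://localhost:11434"):
    # POST the request and return the embedding vector from the response.
    payload = json.dumps(build_embeddings_request(text)).encode("utf-8")
    req = urllib.request.Request(
        host + "/api/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

The same model still works with `ollama pull`; only the generate path is unsupported.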
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7909/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5911
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5911/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5911/comments
|
https://api.github.com/repos/ollama/ollama/issues/5911/events
|
https://github.com/ollama/ollama/pull/5911
| 2,427,537,187
|
PR_kwDOJ0Z1Ps52VubX
| 5,911
|
Update README.md
|
{
"login": "albertotn",
"id": 12526457,
"node_id": "MDQ6VXNlcjEyNTI2NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/12526457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertotn",
"html_url": "https://github.com/albertotn",
"followers_url": "https://api.github.com/users/albertotn/followers",
"following_url": "https://api.github.com/users/albertotn/following{/other_user}",
"gists_url": "https://api.github.com/users/albertotn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertotn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertotn/subscriptions",
"organizations_url": "https://api.github.com/users/albertotn/orgs",
"repos_url": "https://api.github.com/users/albertotn/repos",
"events_url": "https://api.github.com/users/albertotn/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertotn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-24T13:04:13
| 2024-11-21T08:08:50
| 2024-11-21T08:08:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5911",
"html_url": "https://github.com/ollama/ollama/pull/5911",
"diff_url": "https://github.com/ollama/ollama/pull/5911.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5911.patch",
"merged_at": null
}
|
Improved description on how to suggest new models to be supported by Ollama (suggested on Discord)
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5911/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2344
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2344/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2344/comments
|
https://api.github.com/repos/ollama/ollama/issues/2344/events
|
https://github.com/ollama/ollama/issues/2344
| 2,116,883,893
|
I_kwDOJ0Z1Ps5-LRW1
| 2,344
|
Support for Min_p
|
{
"login": "twalderman",
"id": 78627063,
"node_id": "MDQ6VXNlcjc4NjI3MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/78627063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/twalderman",
"html_url": "https://github.com/twalderman",
"followers_url": "https://api.github.com/users/twalderman/followers",
"following_url": "https://api.github.com/users/twalderman/following{/other_user}",
"gists_url": "https://api.github.com/users/twalderman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/twalderman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/twalderman/subscriptions",
"organizations_url": "https://api.github.com/users/twalderman/orgs",
"repos_url": "https://api.github.com/users/twalderman/repos",
"events_url": "https://api.github.com/users/twalderman/events{/privacy}",
"received_events_url": "https://api.github.com/users/twalderman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-02-04T04:53:17
| 2024-03-11T18:28:57
| 2024-03-11T18:28:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I would like to have min_p as an Ollama Modelfile parameter.
See the link for context of this request: https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/
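
For context, min_p sampling keeps only the tokens whose probability is at least `min_p` times the probability of the most likely token, then renormalizes. A toy sketch of the filtering step (illustrative only, not Ollama's implementation):

```python
def min_p_filter(probs, min_p=0.05):
    """Drop tokens below min_p * max probability, then renormalize.

    probs: mapping of token -> probability (summing to ~1).
    """
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# With min_p=0.1 the cutoff scales with the top token, so a confident
# distribution prunes more aggressively than a flat one.
filtered = min_p_filter({"a": 0.60, "b": 0.30, "c": 0.01}, min_p=0.1)
```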
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2344/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2344/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7418
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7418/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7418/comments
|
https://api.github.com/repos/ollama/ollama/issues/7418/events
|
https://github.com/ollama/ollama/pull/7418
| 2,623,909,471
|
PR_kwDOJ0Z1Ps6AYCva
| 7,418
|
Add Terminal App for README.md
|
{
"login": "joey5403",
"id": 93772967,
"node_id": "U_kgDOBZbcpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93772967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joey5403",
"html_url": "https://github.com/joey5403",
"followers_url": "https://api.github.com/users/joey5403/followers",
"following_url": "https://api.github.com/users/joey5403/following{/other_user}",
"gists_url": "https://api.github.com/users/joey5403/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joey5403/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joey5403/subscriptions",
"organizations_url": "https://api.github.com/users/joey5403/orgs",
"repos_url": "https://api.github.com/users/joey5403/repos",
"events_url": "https://api.github.com/users/joey5403/events{/privacy}",
"received_events_url": "https://api.github.com/users/joey5403/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-30T12:12:42
| 2024-11-12T00:44:46
| 2024-11-12T00:44:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7418",
"html_url": "https://github.com/ollama/ollama/pull/7418",
"diff_url": "https://github.com/ollama/ollama/pull/7418.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7418.patch",
"merged_at": "2024-11-12T00:44:46"
}
|
Add a very useful terminal app to README.md.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7418/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3015
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3015/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3015/comments
|
https://api.github.com/repos/ollama/ollama/issues/3015/events
|
https://github.com/ollama/ollama/issues/3015
| 2,176,989,461
|
I_kwDOJ0Z1Ps6BwjkV
| 3,015
|
Suggestion: Ignore previous context in chat api.
|
{
"login": "owenzhao",
"id": 2182896,
"node_id": "MDQ6VXNlcjIxODI4OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2182896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/owenzhao",
"html_url": "https://github.com/owenzhao",
"followers_url": "https://api.github.com/users/owenzhao/followers",
"following_url": "https://api.github.com/users/owenzhao/following{/other_user}",
"gists_url": "https://api.github.com/users/owenzhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/owenzhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/owenzhao/subscriptions",
"organizations_url": "https://api.github.com/users/owenzhao/orgs",
"repos_url": "https://api.github.com/users/owenzhao/repos",
"events_url": "https://api.github.com/users/owenzhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/owenzhao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-09T01:21:42
| 2024-03-10T01:11:58
| 2024-03-10T01:11:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Sometimes we change subjects when talking, and all previous turns become irrelevant. But the chat API has no parameter for that, so maybe we could add a boolean like isNewChat to start a new conversation.
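
The chat endpoint is already stateless: the server only sees the `messages` array in each request, so starting a new conversation is just a matter of sending a fresh history instead of appending to the old one. A small client-side sketch of that pattern (`ChatSession` is illustrative, not part of Ollama's API):

```python
class ChatSession:
    """Client-side history for Ollama's /api/chat; the server keeps no state."""

    def __init__(self, model="llama2"):
        self.model = model
        self.messages = []

    def request_body(self, user_text):
        # Append the new turn and return the payload /api/chat expects.
        self.messages.append({"role": "user", "content": user_text})
        return {"model": self.model, "messages": self.messages}

    def new_chat(self):
        # "isNewChat" amounts to clearing the history before the next request.
        self.messages = []

session = ChatSession()
session.request_body("Tell me about cats.")
session.new_chat()  # change of subject
body = session.request_body("Now, about quantum physics.")
```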
|
{
"login": "owenzhao",
"id": 2182896,
"node_id": "MDQ6VXNlcjIxODI4OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2182896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/owenzhao",
"html_url": "https://github.com/owenzhao",
"followers_url": "https://api.github.com/users/owenzhao/followers",
"following_url": "https://api.github.com/users/owenzhao/following{/other_user}",
"gists_url": "https://api.github.com/users/owenzhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/owenzhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/owenzhao/subscriptions",
"organizations_url": "https://api.github.com/users/owenzhao/orgs",
"repos_url": "https://api.github.com/users/owenzhao/repos",
"events_url": "https://api.github.com/users/owenzhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/owenzhao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3015/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3015/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6667
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6667/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6667/comments
|
https://api.github.com/repos/ollama/ollama/issues/6667/events
|
https://github.com/ollama/ollama/pull/6667
| 2,509,330,952
|
PR_kwDOJ0Z1Ps56m9Bc
| 6,667
|
(Rebased) Add Braina AI as an Ollama Desktop GUI #2
|
{
"login": "wallacelance",
"id": 177184683,
"node_id": "U_kgDOCo-fqw",
"avatar_url": "https://avatars.githubusercontent.com/u/177184683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wallacelance",
"html_url": "https://github.com/wallacelance",
"followers_url": "https://api.github.com/users/wallacelance/followers",
"following_url": "https://api.github.com/users/wallacelance/following{/other_user}",
"gists_url": "https://api.github.com/users/wallacelance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wallacelance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wallacelance/subscriptions",
"organizations_url": "https://api.github.com/users/wallacelance/orgs",
"repos_url": "https://api.github.com/users/wallacelance/repos",
"events_url": "https://api.github.com/users/wallacelance/events{/privacy}",
"received_events_url": "https://api.github.com/users/wallacelance/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-09-06T02:35:56
| 2024-11-23T02:42:20
| 2024-11-21T09:43:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6667",
"html_url": "https://github.com/ollama/ollama/pull/6667",
"diff_url": "https://github.com/ollama/ollama/pull/6667.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6667.patch",
"merged_at": null
}
|
Rebased PR. Added Braina to Ollama's community integrations list as a desktop client for Windows. Please see the old pull request for more information: **https://github.com/ollama/ollama/pull/6112**
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6667/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7472
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7472/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7472/comments
|
https://api.github.com/repos/ollama/ollama/issues/7472/events
|
https://github.com/ollama/ollama/issues/7472
| 2,630,844,817
|
I_kwDOJ0Z1Ps6cz4GR
| 7,472
|
GPU on Windows
|
{
"login": "godlatro",
"id": 7275726,
"node_id": "MDQ6VXNlcjcyNzU3MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7275726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/godlatro",
"html_url": "https://github.com/godlatro",
"followers_url": "https://api.github.com/users/godlatro/followers",
"following_url": "https://api.github.com/users/godlatro/following{/other_user}",
"gists_url": "https://api.github.com/users/godlatro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/godlatro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/godlatro/subscriptions",
"organizations_url": "https://api.github.com/users/godlatro/orgs",
"repos_url": "https://api.github.com/users/godlatro/repos",
"events_url": "https://api.github.com/users/godlatro/events{/privacy}",
"received_events_url": "https://api.github.com/users/godlatro/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-11-02T22:51:27
| 2024-11-03T21:19:45
| 2024-11-03T20:27:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have the latest Ollama desktop
Nvidia 3060
Windows 10
When I try to use any model, CPU/GPU loading is ~70%/20%.
I load many models one by one.
I unload the extra ones with ```ollama stop model```
Almost all models work terribly slowly. In 90% of cases, the models can't even finish writing their answer and are interrupted every time when using the open-webui web interface on Docker.
But when I run Ubuntu 24 on this computer, it shows 100% load on my GPU and all models work perfectly fast.
How can I make Windows also use 100% of the GPU like in Ubuntu?
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "godlatro",
"id": 7275726,
"node_id": "MDQ6VXNlcjcyNzU3MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7275726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/godlatro",
"html_url": "https://github.com/godlatro",
"followers_url": "https://api.github.com/users/godlatro/followers",
"following_url": "https://api.github.com/users/godlatro/following{/other_user}",
"gists_url": "https://api.github.com/users/godlatro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/godlatro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/godlatro/subscriptions",
"organizations_url": "https://api.github.com/users/godlatro/orgs",
"repos_url": "https://api.github.com/users/godlatro/repos",
"events_url": "https://api.github.com/users/godlatro/events{/privacy}",
"received_events_url": "https://api.github.com/users/godlatro/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7472/timeline
| null |
completed
| false
|