| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/6113
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6113/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6113/comments
|
https://api.github.com/repos/ollama/ollama/issues/6113/events
|
https://github.com/ollama/ollama/issues/6113
| 2,441,562,851
|
I_kwDOJ0Z1Ps6Rh0rj
| 6,113
|
Generations API for nuextract/phi
|
{
"login": "alphastrata",
"id": 25101888,
"node_id": "MDQ6VXNlcjI1MTAxODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/25101888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alphastrata",
"html_url": "https://github.com/alphastrata",
"followers_url": "https://api.github.com/users/alphastrata/followers",
"following_url": "https://api.github.com/users/alphastrata/following{/other_user}",
"gists_url": "https://api.github.com/users/alphastrata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alphastrata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alphastrata/subscriptions",
"organizations_url": "https://api.github.com/users/alphastrata/orgs",
"repos_url": "https://api.github.com/users/alphastrata/repos",
"events_url": "https://api.github.com/users/alphastrata/events{/privacy}",
"received_events_url": "https://api.github.com/users/alphastrata/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-01T06:07:25
| 2024-08-02T04:05:12
| 2024-08-02T04:04:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hitting /api/generate yields nothing :(
If interacting in this manner with nuextract/phi is possible, please let us know && we can update the readme etc. accordingly.
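For context, a minimal sketch of the kind of call that was yielding nothing. This assumes a local Ollama on the default port and the `nuextract` model tag; the exact tag used is not shown in the issue, so treat both as placeholders:

```python
import json
import urllib.request

# Illustrative host; adjust to the actual deployment.
OLLAMA_HOST = "http://localhost:11434"

def build_generate_request(model, prompt, host=OLLAMA_HOST):
    """Return the URL and JSON body for a non-streaming /api/generate call."""
    url = f"{host}/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return url, body

def generate(model, prompt, host=OLLAMA_HOST):
    """POST the request and return the model's text response."""
    url, body = build_generate_request(model, prompt, host)
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream: False` the endpoint returns a single JSON object whose `response` field holds the full completion; omitting it yields newline-delimited JSON chunks instead.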
|
{
"login": "alphastrata",
"id": 25101888,
"node_id": "MDQ6VXNlcjI1MTAxODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/25101888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alphastrata",
"html_url": "https://github.com/alphastrata",
"followers_url": "https://api.github.com/users/alphastrata/followers",
"following_url": "https://api.github.com/users/alphastrata/following{/other_user}",
"gists_url": "https://api.github.com/users/alphastrata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alphastrata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alphastrata/subscriptions",
"organizations_url": "https://api.github.com/users/alphastrata/orgs",
"repos_url": "https://api.github.com/users/alphastrata/repos",
"events_url": "https://api.github.com/users/alphastrata/events{/privacy}",
"received_events_url": "https://api.github.com/users/alphastrata/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6113/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4916
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4916/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4916/comments
|
https://api.github.com/repos/ollama/ollama/issues/4916/events
|
https://github.com/ollama/ollama/issues/4916
| 2,340,931,785
|
I_kwDOJ0Z1Ps6Lh8jJ
| 4,916
|
Newer models are having problems
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 11
| 2024-06-07T18:04:55
| 2024-06-15T22:17:35
| 2024-06-09T17:24:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
ollama --version
ollama version is 0.1.41
```
```
ollama run granite-code
pulling manifest
pulling 02ab8cd2f514... 100% ▕██████████████████▏ 2.0 GB
pulling 0d7c97d535b6... 100% ▕██████████████████▏  26 B
pulling e50df8490144... 100% ▕██████████████████▏ 123 B
pulling 9893bb2c2917... 100% ▕██████████████████▏ 108 B
pulling 22b176fd8ef6... 100% ▕██████████████████▏ 485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> hello
Hola, ¿en qué puedo ayudarte?
>>> please respond in english
Hello! ¿What can I help you with in English?
```
qwen2 was giving garbage output, and then after doing the above it started behaving.
```
ollama run deepseek-v2
>>> hello
你好！有什么我可以帮助你的吗？
>>> please respond in english
当然，我可以用英语回答您的问题。如果您有任何问题或需要帮助，请随时告诉我。
```
(Both replies are in Chinese; the second one says "Of course, I can answer your question in English. If you have any questions or need help, let me know." — it acknowledges the request but still answers in Chinese.)
llama3 behaves correctly through all this.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.41
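One workaround worth noting (a sketch, not a confirmed fix for the underlying bug): /api/chat accepts a system message, which can be used to pin the reply language instead of asking mid-conversation. The helper name and default prompt here are illustrative:

```python
import json

def build_chat_request(model, user_msg, system_msg="Respond only in English."):
    """JSON body for /api/chat with a system message pinning the reply language."""
    return json.dumps({
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
    })
```

The same effect can be had interactively with `/set system` in `ollama run`, or a `SYSTEM` line in a Modelfile.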
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4916/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4406
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4406/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4406/comments
|
https://api.github.com/repos/ollama/ollama/issues/4406/events
|
https://github.com/ollama/ollama/issues/4406
| 2,293,392,186
|
I_kwDOJ0Z1Ps6IsmM6
| 4,406
|
Would it be possible to add the Bloom model and other multilanguage/multilingual models?
|
{
"login": "asterbini",
"id": 3383089,
"node_id": "MDQ6VXNlcjMzODMwODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3383089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asterbini",
"html_url": "https://github.com/asterbini",
"followers_url": "https://api.github.com/users/asterbini/followers",
"following_url": "https://api.github.com/users/asterbini/following{/other_user}",
"gists_url": "https://api.github.com/users/asterbini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asterbini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asterbini/subscriptions",
"organizations_url": "https://api.github.com/users/asterbini/orgs",
"repos_url": "https://api.github.com/users/asterbini/repos",
"events_url": "https://api.github.com/users/asterbini/events{/privacy}",
"received_events_url": "https://api.github.com/users/asterbini/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 1
| 2024-05-13T17:17:28
| 2024-05-13T22:13:17
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be nice, for us non-English people, to have access to some good multilanguage/multilingual LLMs.
Bloom comes to mind, but others would be very useful too.
https://huggingface.co/blog/bloom
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4406/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4406/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1969
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1969/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1969/comments
|
https://api.github.com/repos/ollama/ollama/issues/1969/events
|
https://github.com/ollama/ollama/issues/1969
| 2,079,798,575
|
I_kwDOJ0Z1Ps579zUv
| 1,969
|
Unable to push
|
{
"login": "julianallchin",
"id": 20829244,
"node_id": "MDQ6VXNlcjIwODI5MjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/20829244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julianallchin",
"html_url": "https://github.com/julianallchin",
"followers_url": "https://api.github.com/users/julianallchin/followers",
"following_url": "https://api.github.com/users/julianallchin/following{/other_user}",
"gists_url": "https://api.github.com/users/julianallchin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julianallchin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julianallchin/subscriptions",
"organizations_url": "https://api.github.com/users/julianallchin/orgs",
"repos_url": "https://api.github.com/users/julianallchin/repos",
"events_url": "https://api.github.com/users/julianallchin/events{/privacy}",
"received_events_url": "https://api.github.com/users/julianallchin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 10
| 2024-01-12T22:10:25
| 2024-01-16T18:24:55
| 2024-01-16T02:46:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I followed all the steps in the documentation and Ollama is telling me
```
unable to push <username>/example, make sure this namespace exists and you are authorized to push to it
```
I have created the model online and uploaded my public key, but it doesn't work.
|
{
"login": "julianallchin",
"id": 20829244,
"node_id": "MDQ6VXNlcjIwODI5MjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/20829244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julianallchin",
"html_url": "https://github.com/julianallchin",
"followers_url": "https://api.github.com/users/julianallchin/followers",
"following_url": "https://api.github.com/users/julianallchin/following{/other_user}",
"gists_url": "https://api.github.com/users/julianallchin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julianallchin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julianallchin/subscriptions",
"organizations_url": "https://api.github.com/users/julianallchin/orgs",
"repos_url": "https://api.github.com/users/julianallchin/repos",
"events_url": "https://api.github.com/users/julianallchin/events{/privacy}",
"received_events_url": "https://api.github.com/users/julianallchin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1969/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2501
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2501/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2501/comments
|
https://api.github.com/repos/ollama/ollama/issues/2501/events
|
https://github.com/ollama/ollama/issues/2501
| 2,135,187,113
|
I_kwDOJ0Z1Ps5_RF6p
| 2,501
|
Simple tasks fail
|
{
"login": "dtp555-1212",
"id": 13024057,
"node_id": "MDQ6VXNlcjEzMDI0MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/13024057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dtp555-1212",
"html_url": "https://github.com/dtp555-1212",
"followers_url": "https://api.github.com/users/dtp555-1212/followers",
"following_url": "https://api.github.com/users/dtp555-1212/following{/other_user}",
"gists_url": "https://api.github.com/users/dtp555-1212/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dtp555-1212/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dtp555-1212/subscriptions",
"organizations_url": "https://api.github.com/users/dtp555-1212/orgs",
"repos_url": "https://api.github.com/users/dtp555-1212/repos",
"events_url": "https://api.github.com/users/dtp555-1212/events{/privacy}",
"received_events_url": "https://api.github.com/users/dtp555-1212/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-02-14T20:53:53
| 2024-05-10T20:23:25
| 2024-05-10T20:23:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Simple tasks seem to be beyond what any of the open-source models (at least all that I have tried) are able to accomplish. I can tease the results out of "Bing co-pilot", but so far these types of tasks seem to elude the open-source models loaded into Ollama.
Can you tell me if I am doing something wrong, suggest a better prompt, say which model has the best chance of doing these right, or confirm that the released models can't handle this type of thing?
1) Task one: generate a list of 10 sentences that have exactly 5 words each.
I have never seen it correctly generate 10 sentences in a row that have exactly 5 words each. It can sometimes count the words in a single sentence correctly when asked how it came to its conclusion, but often it is wrong. It also can't definitively tell whether something is one word or two (e.g. "the cat"). It seems to improve after being told that a word will never have a space within it, but then quickly forgets that principle.
2) Task two: generate a list of 10 sentences that end with a verb followed by a plural noun.
It can sometimes produce a list of sentences that end with a verb, OR it can sometimes produce a list of sentences that end with a plural noun, but I have never seen it correctly generate a list that satisfies both criteria.
I would love to hear any suggestions that would help with these types of tasks. Since "Bing co-pilot" can be coerced into doing this, and I have heard the open-source models are performing very well, I am hoping there is a simple explanation for these utter failures.
Thanks in advance.
P.S. I have tried pre-prompting with things like "You are an expert linguist. You know parts of speech, and you know how to count the words in a sentence. Assume a word never has a space in it." I have also tried asking it to go step by step and double-check results, but none of this seems to have a positive effect.
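A constraint like task one is at least cheap to verify mechanically, which makes it practical to resample until the model happens to satisfy it. A minimal checker (names and the whitespace-based word definition are illustrative):

```python
def count_words(sentence):
    """Whitespace-separated word count; punctuation stays attached to words."""
    return len(sentence.split())

def check_batch(sentences, n=10, words=5):
    """True iff the batch has exactly n non-empty sentences of exactly `words` words each."""
    cleaned = [s.strip() for s in sentences if s.strip()]
    return len(cleaned) == n and all(count_words(s) == words for s in cleaned)
```

Generate-then-check loops like this sidestep the models' inability to count, at the cost of extra sampling.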
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2501/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/2501/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1824
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1824/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1824/comments
|
https://api.github.com/repos/ollama/ollama/issues/1824/events
|
https://github.com/ollama/ollama/issues/1824
| 2,068,579,370
|
I_kwDOJ0Z1Ps57TAQq
| 1,824
|
Add Phi2 support
|
{
"login": "khalilxg",
"id": 78953896,
"node_id": "MDQ6VXNlcjc4OTUzODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/78953896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khalilxg",
"html_url": "https://github.com/khalilxg",
"followers_url": "https://api.github.com/users/khalilxg/followers",
"following_url": "https://api.github.com/users/khalilxg/following{/other_user}",
"gists_url": "https://api.github.com/users/khalilxg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khalilxg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khalilxg/subscriptions",
"organizations_url": "https://api.github.com/users/khalilxg/orgs",
"repos_url": "https://api.github.com/users/khalilxg/repos",
"events_url": "https://api.github.com/users/khalilxg/events{/privacy}",
"received_events_url": "https://api.github.com/users/khalilxg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-06T12:17:33
| 2024-01-07T05:49:13
| 2024-01-07T05:49:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Microsoft changed the Phi-2 license to MIT. Please add support for Phi-2.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1824/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1824/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8104
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8104/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8104/comments
|
https://api.github.com/repos/ollama/ollama/issues/8104/events
|
https://github.com/ollama/ollama/issues/8104
| 2,740,246,949
|
I_kwDOJ0Z1Ps6jVNml
| 8,104
|
c4ai-command-r7b-12-2024
|
{
"login": "vYLQs6",
"id": 143073604,
"node_id": "U_kgDOCIchRA",
"avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vYLQs6",
"html_url": "https://github.com/vYLQs6",
"followers_url": "https://api.github.com/users/vYLQs6/followers",
"following_url": "https://api.github.com/users/vYLQs6/following{/other_user}",
"gists_url": "https://api.github.com/users/vYLQs6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vYLQs6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vYLQs6/subscriptions",
"organizations_url": "https://api.github.com/users/vYLQs6/orgs",
"repos_url": "https://api.github.com/users/vYLQs6/repos",
"events_url": "https://api.github.com/users/vYLQs6/events{/privacy}",
"received_events_url": "https://api.github.com/users/vYLQs6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-12-15T03:02:40
| 2025-01-15T07:07:40
| 2025-01-13T01:39:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024
---
Note: this model seems to use a new architecture; there is no GGUF on HF yet.
---
# **Model Card for C4AI Command R7B**
## **Model Summary**
C4AI Command R7B is an open-weights research release of a 7-billion-parameter model with advanced capabilities optimized for a variety of use cases, including reasoning, summarization, question answering, and code. The model is trained to perform sophisticated tasks including Retrieval-Augmented Generation (RAG) and tool use. The model also has powerful agentic capabilities, with the ability to use and combine multiple tools over multiple steps to accomplish more difficult tasks. It obtains top performance on enterprise-relevant code use cases. C4AI Command R7B is a multilingual model trained on 23 languages.
Developed by: [Cohere](https://cohere.com/) and [Cohere For AI](https://cohere.for.ai/)
* Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
* License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
* Model: c4ai-command-r7b-12-2024
* Model Size: 7 billion parameters
* Context length: 128K
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8104/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8104/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3449
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3449/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3449/comments
|
https://api.github.com/repos/ollama/ollama/issues/3449/events
|
https://github.com/ollama/ollama/issues/3449
| 2,219,920,516
|
I_kwDOJ0Z1Ps6EUUyE
| 3,449
|
CORS Error in Browser
|
{
"login": "udrs",
"id": 71435435,
"node_id": "MDQ6VXNlcjcxNDM1NDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/71435435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/udrs",
"html_url": "https://github.com/udrs",
"followers_url": "https://api.github.com/users/udrs/followers",
"following_url": "https://api.github.com/users/udrs/following{/other_user}",
"gists_url": "https://api.github.com/users/udrs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/udrs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/udrs/subscriptions",
"organizations_url": "https://api.github.com/users/udrs/orgs",
"repos_url": "https://api.github.com/users/udrs/repos",
"events_url": "https://api.github.com/users/udrs/events{/privacy}",
"received_events_url": "https://api.github.com/users/udrs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-04-02T08:44:56
| 2024-06-01T22:52:11
| 2024-06-01T22:52:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Thank you for your great work; the issue I met can be seen in the screenshot below.
[screenshot: CORS error shown in the browser]
I have modified the configuration within ollama.service, but the problem still exists.
[screenshot: the modified ollama.service configuration]
I run the front-end in a browser on my computer and try to use the Ollama instance on another server.
```javascript
addMessageToChat('You', message);
// Replace this URL with the actual endpoint where the backend API is hosted
const apiURL = 'http://100.67.xxx.xxx:11434/api/generate';
// Prepare the data to be sent in the POST request
const requestData = {
    model: "llama2",
    prompt: message,
    options: {
        num_ctx: 4096
    }
};
```
Could you please tell me how I can solve this issue?
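The ollama.service modification referred to above normally takes the form of a systemd drop-in setting the documented OLLAMA_HOST and OLLAMA_ORIGINS environment variables (a sketch; the path and values here are illustrative, not the exact ones from my machine):

```ini
# /etc/systemd/system/ollama.service.d/override.conf  (illustrative path)
[Service]
# Listen on all interfaces so another machine can reach the server
Environment="OLLAMA_HOST=0.0.0.0"
# Allow cross-origin browser requests (narrow this to specific origins in production)
Environment="OLLAMA_ORIGINS=*"
```

followed by `systemctl daemon-reload` and `systemctl restart ollama`.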
### What did you expect to see?
I hope to call API in server from another computer.
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
latest
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
javascript, html nginx
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3449/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/278
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/278/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/278/comments
|
https://api.github.com/repos/ollama/ollama/issues/278/events
|
https://github.com/ollama/ollama/issues/278
| 1,836,366,287
|
I_kwDOJ0Z1Ps5tdLnP
| 278
|
ollama + LlamaIndex
|
{
"login": "jowamedia",
"id": 8236760,
"node_id": "MDQ6VXNlcjgyMzY3NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8236760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jowamedia",
"html_url": "https://github.com/jowamedia",
"followers_url": "https://api.github.com/users/jowamedia/followers",
"following_url": "https://api.github.com/users/jowamedia/following{/other_user}",
"gists_url": "https://api.github.com/users/jowamedia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jowamedia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jowamedia/subscriptions",
"organizations_url": "https://api.github.com/users/jowamedia/orgs",
"repos_url": "https://api.github.com/users/jowamedia/repos",
"events_url": "https://api.github.com/users/jowamedia/events{/privacy}",
"received_events_url": "https://api.github.com/users/jowamedia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5667396205,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2abQ",
"url": "https://api.github.com/repos/ollama/ollama/labels/help%20wanted",
"name": "help wanted",
"color": "008672",
"default": true,
"description": "Extra attention is needed"
}
] |
closed
| false
| null |
[] | null | 4
| 2023-08-04T09:08:16
| 2023-09-22T04:12:44
| 2023-09-22T03:54:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ollama is very nice and easy to use.
Is it planned to support using it with LlamaIndex?
It would be very nice to index our local documents (and others) ;-)
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/278/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/278/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1567
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1567/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1567/comments
|
https://api.github.com/repos/ollama/ollama/issues/1567/events
|
https://github.com/ollama/ollama/issues/1567
| 2,044,947,923
|
I_kwDOJ0Z1Ps55423T
| 1,567
|
getting llava output in chinese
|
{
"login": "YashKumarVerma",
"id": 14032427,
"node_id": "MDQ6VXNlcjE0MDMyNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/14032427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YashKumarVerma",
"html_url": "https://github.com/YashKumarVerma",
"followers_url": "https://api.github.com/users/YashKumarVerma/followers",
"following_url": "https://api.github.com/users/YashKumarVerma/following{/other_user}",
"gists_url": "https://api.github.com/users/YashKumarVerma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YashKumarVerma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YashKumarVerma/subscriptions",
"organizations_url": "https://api.github.com/users/YashKumarVerma/orgs",
"repos_url": "https://api.github.com/users/YashKumarVerma/repos",
"events_url": "https://api.github.com/users/YashKumarVerma/events{/privacy}",
"received_events_url": "https://api.github.com/users/YashKumarVerma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2023-12-16T20:24:11
| 2024-03-11T18:41:59
| 2024-03-11T18:41:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I've tried running it in the CLI and via a network request, but I'm getting replies in Mandarin Chinese.
Has anyone faced this?
<img width="1376" alt="image" src="https://github.com/jmorganca/ollama/assets/14032427/bd80327a-e42e-45c2-9ad0-4222b6221bb7">
<img width="1123" alt="image" src="https://github.com/jmorganca/ollama/assets/14032427/af4b98bb-11f5-4ced-8d75-9f4ade11ee26">
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1567/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1111
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1111/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1111/comments
|
https://api.github.com/repos/ollama/ollama/issues/1111/events
|
https://github.com/ollama/ollama/issues/1111
| 1,991,064,208
|
I_kwDOJ0Z1Ps52rTqQ
| 1,111
|
Problems pushing a new model to ollama
|
{
"login": "matthewchung74",
"id": 1685700,
"node_id": "MDQ6VXNlcjE2ODU3MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1685700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matthewchung74",
"html_url": "https://github.com/matthewchung74",
"followers_url": "https://api.github.com/users/matthewchung74/followers",
"following_url": "https://api.github.com/users/matthewchung74/following{/other_user}",
"gists_url": "https://api.github.com/users/matthewchung74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matthewchung74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matthewchung74/subscriptions",
"organizations_url": "https://api.github.com/users/matthewchung74/orgs",
"repos_url": "https://api.github.com/users/matthewchung74/repos",
"events_url": "https://api.github.com/users/matthewchung74/events{/privacy}",
"received_events_url": "https://api.github.com/users/matthewchung74/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-11-13T16:56:58
| 2024-01-17T23:56:10
| 2024-01-17T23:56:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm having problems pushing a new model to Ollama:
```
ollama push matthewchung74/medmistral
retrieving manifest
pushing 9f302ba97745... 99% |โโโโโโโโโโโโ | (4.1/4.1 GB, 3.4 MB/s) [19m54s:0s]Error: max retries exceeded
```
Is this the right CLI command? I've tried about 3 times now.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1111/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1434
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1434/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1434/comments
|
https://api.github.com/repos/ollama/ollama/issues/1434/events
|
https://github.com/ollama/ollama/issues/1434
| 2,032,762,700
|
I_kwDOJ0Z1Ps55KX9M
| 1,434
|
Can't GET /tags on linux build
|
{
"login": "johnnyasantoss",
"id": 14189387,
"node_id": "MDQ6VXNlcjE0MTg5Mzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/14189387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnyasantoss",
"html_url": "https://github.com/johnnyasantoss",
"followers_url": "https://api.github.com/users/johnnyasantoss/followers",
"following_url": "https://api.github.com/users/johnnyasantoss/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnyasantoss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnyasantoss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnyasantoss/subscriptions",
"organizations_url": "https://api.github.com/users/johnnyasantoss/orgs",
"repos_url": "https://api.github.com/users/johnnyasantoss/repos",
"events_url": "https://api.github.com/users/johnnyasantoss/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnyasantoss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-08T14:28:59
| 2023-12-08T15:47:16
| 2023-12-08T15:47:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm using the [web ui](https://github.com/ollama-webui/ollama-webui) on macOS and on Linux. On Mac it works fine with the latest version, but on Linux it doesn't. It seems that it can't get the models (GET to /tags). On Linux I get back a 404.

|
{
"login": "johnnyasantoss",
"id": 14189387,
"node_id": "MDQ6VXNlcjE0MTg5Mzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/14189387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnyasantoss",
"html_url": "https://github.com/johnnyasantoss",
"followers_url": "https://api.github.com/users/johnnyasantoss/followers",
"following_url": "https://api.github.com/users/johnnyasantoss/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnyasantoss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnyasantoss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnyasantoss/subscriptions",
"organizations_url": "https://api.github.com/users/johnnyasantoss/orgs",
"repos_url": "https://api.github.com/users/johnnyasantoss/repos",
"events_url": "https://api.github.com/users/johnnyasantoss/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnyasantoss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1434/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8442
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8442/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8442/comments
|
https://api.github.com/repos/ollama/ollama/issues/8442/events
|
https://github.com/ollama/ollama/issues/8442
| 2,789,700,016
|
I_kwDOJ0Z1Ps6mR3Gw
| 8,442
|
most powerful model with 4m context MiniMax-Text-01
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 2
| 2025-01-15T12:27:31
| 2025-01-20T19:40:09
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/MiniMaxAI/MiniMax-Text-01
https://x.com/MiniMax__AI/status/1879226391352549451
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8442/reactions",
"total_count": 17,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 6
}
|
https://api.github.com/repos/ollama/ollama/issues/8442/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1871
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1871/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1871/comments
|
https://api.github.com/repos/ollama/ollama/issues/1871/events
|
https://github.com/ollama/ollama/issues/1871
| 2,072,765,634
|
I_kwDOJ0Z1Ps57i-TC
| 1,871
|
Switching from a high `num_ctx` to a model with a low `num_ctx` causes cuda out of memory errors
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-01-09T16:49:15
| 2024-04-02T17:49:46
| 2024-04-02T17:49:46
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When switching from a large context window to a small one (a high `num_ctx` to a low `num_ctx`), Ollama will error due to out of memory. It seems that it will incorrectly try to re-allocate the same amount of memory as before (vs a new, smaller amount).
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1871/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7751
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7751/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7751/comments
|
https://api.github.com/repos/ollama/ollama/issues/7751/events
|
https://github.com/ollama/ollama/issues/7751
| 2,673,984,744
|
I_kwDOJ0Z1Ps6fYcTo
| 7,751
|
List of all available models
|
{
"login": "vt-alt",
"id": 36664211,
"node_id": "MDQ6VXNlcjM2NjY0MjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/36664211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vt-alt",
"html_url": "https://github.com/vt-alt",
"followers_url": "https://api.github.com/users/vt-alt/followers",
"following_url": "https://api.github.com/users/vt-alt/following{/other_user}",
"gists_url": "https://api.github.com/users/vt-alt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vt-alt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vt-alt/subscriptions",
"organizations_url": "https://api.github.com/users/vt-alt/orgs",
"repos_url": "https://api.github.com/users/vt-alt/repos",
"events_url": "https://api.github.com/users/vt-alt/events{/privacy}",
"received_events_url": "https://api.github.com/users/vt-alt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-11-20T00:40:11
| 2024-11-21T17:25:11
| 2024-11-21T17:25:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please provide a list of (or an API to list) all models available on https://ollama.com/library, with tags.
This would be useful for users to get them from the CLI without a browser.
It would also be useful for shell completions.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7751/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3572
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3572/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3572/comments
|
https://api.github.com/repos/ollama/ollama/issues/3572/events
|
https://github.com/ollama/ollama/issues/3572
| 2,235,182,919
|
I_kwDOJ0Z1Ps6FOi9H
| 3,572
|
Support for AMD Radeon RX 570 series
|
{
"login": "Mr-Ples",
"id": 27857264,
"node_id": "MDQ6VXNlcjI3ODU3MjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/27857264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mr-Ples",
"html_url": "https://github.com/Mr-Ples",
"followers_url": "https://api.github.com/users/Mr-Ples/followers",
"following_url": "https://api.github.com/users/Mr-Ples/following{/other_user}",
"gists_url": "https://api.github.com/users/Mr-Ples/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mr-Ples/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mr-Ples/subscriptions",
"organizations_url": "https://api.github.com/users/Mr-Ples/orgs",
"repos_url": "https://api.github.com/users/Mr-Ples/repos",
"events_url": "https://api.github.com/users/Mr-Ples/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mr-Ples/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-04-10T09:49:02
| 2024-04-23T16:44:52
| 2024-04-16T00:14:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
Trying to use my AMD GPU to accelerate Ollama output.
### How should we solve this?
Add support for the AMD Radeon RX 570 series.
### What is the impact of not solving this?
Currently I'm not using Ollama that much because of it.
### Anything else?
Ollama is the best application ever, hands down.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3572/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3572/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5644
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5644/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5644/comments
|
https://api.github.com/repos/ollama/ollama/issues/5644/events
|
https://github.com/ollama/ollama/issues/5644
| 2,404,781,036
|
I_kwDOJ0Z1Ps6PVgvs
| 5,644
|
HELP: I am trying to configure a remote Ollama API in Brave Browser Nightly, but Brave's settings only allow HTTPS access, while the Ollama API on the remote server is at http://localhost:11434/v1/chat/completions. How can I configure HTTPS for the Ollama API?
|
{
"login": "buts101",
"id": 2071440,
"node_id": "MDQ6VXNlcjIwNzE0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2071440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buts101",
"html_url": "https://github.com/buts101",
"followers_url": "https://api.github.com/users/buts101/followers",
"following_url": "https://api.github.com/users/buts101/following{/other_user}",
"gists_url": "https://api.github.com/users/buts101/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buts101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buts101/subscriptions",
"organizations_url": "https://api.github.com/users/buts101/orgs",
"repos_url": "https://api.github.com/users/buts101/repos",
"events_url": "https://api.github.com/users/buts101/events{/privacy}",
"received_events_url": "https://api.github.com/users/buts101/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-12T05:25:23
| 2024-11-06T01:07:42
| 2024-11-06T01:07:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Should I use an nginx reverse proxy for the redirection? I'm looking for an easy method built into Ollama.
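For reference, a minimal nginx reverse-proxy sketch that terminates TLS in front of Ollama. The hostname and certificate paths are placeholders, not values from this issue, and this assumes certificates have already been obtained (e.g. via certbot):

```nginx
server {
    listen 443 ssl;
    server_name ollama.example.com;  # placeholder hostname

    # placeholder certificate paths
    ssl_certificate     /etc/letsencrypt/live/ollama.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ollama.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:11434;
        proxy_set_header Host $host;
        # streamed responses: disable buffering so tokens reach the client as generated
        proxy_buffering off;
        proxy_read_timeout 300s;
    }
}
```

With something like this in place, the browser can talk to https://ollama.example.com/v1/chat/completions while nginx forwards requests to the plain-HTTP Ollama endpoint.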
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5644/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5644/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3431
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3431/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3431/comments
|
https://api.github.com/repos/ollama/ollama/issues/3431/events
|
https://github.com/ollama/ollama/issues/3431
| 2,217,360,558
|
I_kwDOJ0Z1Ps6EKjyu
| 3,431
|
CUDA Error on remote connection. out of memory
|
{
"login": "systerchristian",
"id": 37046148,
"node_id": "MDQ6VXNlcjM3MDQ2MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/37046148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/systerchristian",
"html_url": "https://github.com/systerchristian",
"followers_url": "https://api.github.com/users/systerchristian/followers",
"following_url": "https://api.github.com/users/systerchristian/following{/other_user}",
"gists_url": "https://api.github.com/users/systerchristian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/systerchristian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/systerchristian/subscriptions",
"organizations_url": "https://api.github.com/users/systerchristian/orgs",
"repos_url": "https://api.github.com/users/systerchristian/repos",
"events_url": "https://api.github.com/users/systerchristian/events{/privacy}",
"received_events_url": "https://api.github.com/users/systerchristian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 9
| 2024-04-01T02:14:30
| 2024-06-22T00:06:45
| 2024-06-22T00:06:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Getting a "CUDA Error: out of memory error" with command-r after message is returned. I am seeing this with Open Web-UI. Error is after it responds to a message. It happens every time.
Here is the tail end of the log file.
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = command-r
llama_model_loader: - kv 1: general.name str = c4ai-command-r-v01
llama_model_loader: - kv 2: command-r.block_count u32 = 40
llama_model_loader: - kv 3: command-r.context_length u32 = 131072
llama_model_loader: - kv 4: command-r.embedding_length u32 = 8192
llama_model_loader: - kv 5: command-r.feed_forward_length u32 = 22528
llama_model_loader: - kv 6: command-r.attention.head_count u32 = 64
llama_model_loader: - kv 7: command-r.attention.head_count_kv u32 = 64
llama_model_loader: - kv 8: command-r.rope.freq_base f32 = 8000000.000000
llama_model_loader: - kv 9: command-r.attention.layer_norm_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 12
llama_model_loader: - kv 11: command-r.logit_scale f32 = 0.062500
llama_model_loader: - kv 12: command-r.rope.scaling.type str = none
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,256000] = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,253333] = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 5
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 255001
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 41 tensors
llama_model_loader: - type q3_K: 160 tensors
llama_model_loader: - type q4_K: 116 tensors
llama_model_loader: - type q5_K: 4 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 1008/256000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = command-r
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 256000
llm_load_print_meta: n_merges = 253333
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 64
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 8192
llm_load_print_meta: n_embd_v_gqa = 8192
llm_load_print_meta: f_norm_eps = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 6.2e-02
llm_load_print_meta: n_ff = 22528
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = none
llm_load_print_meta: freq_base_train = 8000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 35B
llm_load_print_meta: model ftype = Q3_K - Medium
llm_load_print_meta: model params = 34.98 B
llm_load_print_meta: model size = 16.40 GiB (4.03 BPW)
llm_load_print_meta: general.name = c4ai-command-r-v01
llm_load_print_meta: BOS token = 5 '<BOS_TOKEN>'
llm_load_print_meta: EOS token = 255001 '<|END_OF_TURN_TOKEN|>'
llm_load_print_meta: PAD token = 0 '<PAD>'
llm_load_print_meta: LF token = 136 'ร'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.25 MiB
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 41/41 layers to GPU
llm_load_tensors: CPU buffer size = 1640.62 MiB
llm_load_tensors: CUDA0 buffer size = 16791.91 MiB
.....................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 8000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 2560.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 516.00 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 516.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 20.00 MiB
llama_new_context_with_model: graph nodes = 1245
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"240","timestamp":1711936563}
{"function":"initialize","level":"INFO","line":456,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"240","timestamp":1711936563}
time=2024-03-31T18:56:03.805-07:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1572,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"35172","timestamp":1711936563}
[GIN] 2024/03/31 - 18:56:03 | 200 | 9.3664512s | 192.168.50.60 | POST "/api/chat"
{"function":"launch_slot_with_data","level":"INFO","line":829,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"35172","timestamp":1711936570}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1810,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":8,"slot_id":0,"task_id":0,"tid":"35172","timestamp":1711936570}
{"function":"update_slots","level":"INFO","line":1834,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"35172","timestamp":1711936570}
{"function":"print_timings","level":"INFO","line":272,"msg":"prompt eval time = 353.31 ms / 8 tokens ( 44.16 ms per token, 22.64 tokens per second)","n_prompt_tokens_processed":8,"n_tokens_second":22.64332517230155,"slot_id":0,"t_prompt_processing":353.305,"t_token":44.163125,"task_id":0,"tid":"35172","timestamp":1711936571}
{"function":"print_timings","level":"INFO","line":286,"msg":"generation eval time = 739.95 ms / 29 runs ( 25.52 ms per token, 39.19 tokens per second)","n_decoded":29,"n_tokens_second":39.191678390384254,"slot_id":0,"t_token":25.515620689655172,"t_token_generation":739.953,"task_id":0,"tid":"35172","timestamp":1711936571}
{"function":"print_timings","level":"INFO","line":295,"msg":" total time = 1093.26 ms","slot_id":0,"t_prompt_processing":353.305,"t_token_generation":739.953,"t_total":1093.258,"task_id":0,"tid":"35172","timestamp":1711936571}
{"function":"update_slots","level":"INFO","line":1642,"msg":"slot released","n_cache_tokens":37,"n_ctx":2048,"n_past":36,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"35172","timestamp":1711936571,"truncated":false}
[GIN] 2024/03/31 - 18:56:11 | 200 | 1.0944379s | 192.168.50.60 | POST "/api/chat"
{"function":"launch_slot_with_data","level":"INFO","line":829,"msg":"slot is processing task","slot_id":0,"task_id":32,"tid":"35172","timestamp":1711936587}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1810,"msg":"slot progression","n_past":36,"n_past_se":0,"n_prompt_tokens_processed":10,"slot_id":0,"task_id":32,"tid":"35172","timestamp":1711936587}
{"function":"update_slots","level":"INFO","line":1834,"msg":"kv cache rm [p0, end)","p0":36,"slot_id":0,"task_id":32,"tid":"35172","timestamp":1711936587}
{"function":"print_timings","level":"INFO","line":272,"msg":"prompt eval time = 289.25 ms / 10 tokens ( 28.92 ms per token, 34.57 tokens per second)","n_prompt_tokens_processed":10,"n_tokens_second":34.57228892753302,"slot_id":0,"t_prompt_processing":289.249,"t_token":28.9249,"task_id":32,"tid":"35172","timestamp":1711936589}
{"function":"print_timings","level":"INFO","line":286,"msg":"generation eval time = 1058.89 ms / 41 runs ( 25.83 ms per token, 38.72 tokens per second)","n_decoded":41,"n_tokens_second":38.719828046188034,"slot_id":0,"t_token":25.826560975609752,"t_token_generation":1058.889,"task_id":32,"tid":"35172","timestamp":1711936589}
{"function":"print_timings","level":"INFO","line":295,"msg":" total time = 1348.14 ms","slot_id":0,"t_prompt_processing":289.249,"t_token_generation":1058.889,"t_total":1348.138,"task_id":32,"tid":"35172","timestamp":1711936589}
{"function":"update_slots","level":"INFO","line":1642,"msg":"slot released","n_cache_tokens":86,"n_ctx":2048,"n_past":86,"n_system_tokens":0,"slot_id":0,"task_id":32,"tid":"35172","timestamp":1711936589,"truncated":false}
[GIN] 2024/03/31 - 18:56:29 | 200 | 1.3504819s | 192.168.50.60 | POST "/api/chat"
{"function":"launch_slot_with_data","level":"INFO","line":829,"msg":"slot is processing task","slot_id":0,"task_id":76,"tid":"35172","timestamp":1711936611}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1810,"msg":"slot progression","n_past":84,"n_past_se":0,"n_prompt_tokens_processed":9,"slot_id":0,"task_id":76,"tid":"35172","timestamp":1711936611}
{"function":"update_slots","level":"INFO","line":1834,"msg":"kv cache rm [p0, end)","p0":84,"slot_id":0,"task_id":76,"tid":"35172","timestamp":1711936611}
{"function":"print_timings","level":"INFO","line":272,"msg":"prompt eval time = 322.37 ms / 9 tokens ( 35.82 ms per token, 27.92 tokens per second)","n_prompt_tokens_processed":9,"n_tokens_second":27.918230604584796,"slot_id":0,"t_prompt_processing":322.37,"t_token":35.81888888888889,"task_id":76,"tid":"35172","timestamp":1711936612}
{"function":"print_timings","level":"INFO","line":286,"msg":"generation eval time = 1279.00 ms / 49 runs ( 26.10 ms per token, 38.31 tokens per second)","n_decoded":49,"n_tokens_second":38.31106079418048,"slot_id":0,"t_token":26.10212244897959,"t_token_generation":1279.004,"task_id":76,"tid":"35172","timestamp":1711936612}
{"function":"print_timings","level":"INFO","line":295,"msg":" total time = 1601.37 ms","slot_id":0,"t_prompt_processing":322.37,"t_token_generation":1279.004,"t_total":1601.3739999999998,"task_id":76,"tid":"35172","timestamp":1711936612}
{"function":"update_slots","level":"INFO","line":1642,"msg":"slot released","n_cache_tokens":138,"n_ctx":2048,"n_past":141,"n_system_tokens":0,"slot_id":0,"task_id":76,"tid":"35172","timestamp":1711936612,"truncated":false}
[GIN] 2024/03/31 - 18:56:52 | 200 | 1.6048077s | 192.168.50.60 | POST "/api/chat"
{"function":"launch_slot_with_data","level":"INFO","line":829,"msg":"slot is processing task","slot_id":0,"task_id":128,"tid":"35172","timestamp":1711936639}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1810,"msg":"slot progression","n_past":3,"n_past_se":0,"n_prompt_tokens_processed":48,"slot_id":0,"task_id":128,"tid":"35172","timestamp":1711936639}
{"function":"update_slots","level":"INFO","line":1834,"msg":"kv cache rm [p0, end)","p0":3,"slot_id":0,"task_id":128,"tid":"35172","timestamp":1711936639}
CUDA error: out of memory
current device: 0, in function alloc at C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:532
cuMemSetAccess(pool_addr + pool_size, reserve_size, &access, 1)
GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:193: !"CUDA error"
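The log above shows the allocation failing while growing the CUDA memory pool. A hedged workaround sketch (not from this report, and the layer count of 30 is an illustrative guess, not a tuned value) is to cap how many layers are offloaded via the documented `num_gpu` option of `/api/generate`, leaving more VRAM headroom for the KV cache:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str, num_gpu: int) -> bytes:
    """Build a non-streaming /api/generate payload that caps GPU-offloaded layers."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_gpu": num_gpu},  # offload fewer layers to free VRAM
    }).encode("utf-8")

def generate(model: str, prompt: str, num_gpu: int = 30) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt, num_gpu),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("command-r:35b-v0.1-q3_K_M", "Why is the sky blue?"))
```

Lowering `num_gpu` trades generation speed for stability; the right value depends on how much VRAM other processes are holding.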
### What did you expect to see?
Ollama to not crash. :)
### Steps to reproduce
Any message sent while a command-r model is loaded. I have been able to replicate this with both latest and command-r:35b-v0.1-q3_K_M. Both models appear to work fine from the console.
### Are there any recent changes that introduced the issue?
No, though I haven't been working with Ollama long.
### OS
Windows
### Architecture
x86
### Platform
_No response_
### Ollama version
0.1.30
### GPU
Nvidia
### GPU info
Sun Mar 31 19:13:31 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 551.86 Driver Version: 551.86 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 WDDM | 00000000:01:00.0 Off | Off |
| 0% 48C P8 18W / 450W | 2107MiB / 24564MiB | 8% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 6840 C+G ...ft Office\root\Office16\ONENOTE.EXE N/A |
| 0 N/A N/A 7160 C+G ...ekyb3d8bbwe\PhoneExperienceHost.exe N/A |
| 0 N/A N/A 8716 C+G ...2txyewy\StartMenuExperienceHost.exe N/A |
| 0 N/A N/A 10984 C+G ...sair iCUE5 Software\QmlRenderer.exe N/A |
| 0 N/A N/A 14044 C+G ...wekyb3d8bbwe\XboxGameBarWidgets.exe N/A |
| 0 N/A N/A 15560 C+G ...\cef\cef.win7x64\steamwebhelper.exe N/A |
| 0 N/A N/A 16252 C+G ...on\123.0.2420.65\msedgewebview2.exe N/A |
| 0 N/A N/A 17256 C+G ...\LM-Studio\app-0.2.18\LM Studio.exe N/A |
| 0 N/A N/A 20184 C+G ...les\Microsoft OneDrive\OneDrive.exe N/A |
| 0 N/A N/A 21284 C+G ...siveControlPanel\SystemSettings.exe N/A |
| 0 N/A N/A 21684 C+G ...crosoft\Edge\Application\msedge.exe N/A |
| 0 N/A N/A 22008 C+G ...oogle\Chrome\Application\chrome.exe N/A |
| 0 N/A N/A 23756 C+G ...air\Corsair iCUE5 Software\iCUE.exe N/A |
| 0 N/A N/A 24504 C+G ...5n1h2txyewy\ShellExperienceHost.exe N/A |
| 0 N/A N/A 25468 C+G ...e Stream\88.0.0.0\GoogleDriveFS.exe N/A |
| 0 N/A N/A 25996 C+G ...__8wekyb3d8bbwe\WindowsTerminal.exe N/A |
| 0 N/A N/A 28544 C ...\LM-Studio\app-0.2.18\LM Studio.exe N/A |
| 0 N/A N/A 29564 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 30092 C+G ...CBS_cw5n1h2txyewy\TextInputHost.exe N/A |
| 0 N/A N/A 31484 C+G ...nt.CBS_cw5n1h2txyewy\SearchHost.exe N/A |
| 0 N/A N/A 35296 C+G ...__8wekyb3d8bbwe\Notepad\Notepad.exe N/A |
| 0 N/A N/A 36216 C+G ...on\123.0.2420.65\msedgewebview2.exe N/A |
+-----------------------------------------------------------------------------------------+
### CPU
Intel
### Other software
Open Web-ui
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3431/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/728
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/728/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/728/comments
|
https://api.github.com/repos/ollama/ollama/issues/728/events
|
https://github.com/ollama/ollama/issues/728
| 1,931,282,680
|
I_kwDOJ0Z1Ps5zHQj4
| 728
|
Dummy model for API testing
|
{
"login": "S1M0N38",
"id": 22257750,
"node_id": "MDQ6VXNlcjIyMjU3NzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/22257750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1M0N38",
"html_url": "https://github.com/S1M0N38",
"followers_url": "https://api.github.com/users/S1M0N38/followers",
"following_url": "https://api.github.com/users/S1M0N38/following{/other_user}",
"gists_url": "https://api.github.com/users/S1M0N38/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1M0N38/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1M0N38/subscriptions",
"organizations_url": "https://api.github.com/users/S1M0N38/orgs",
"repos_url": "https://api.github.com/users/S1M0N38/repos",
"events_url": "https://api.github.com/users/S1M0N38/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1M0N38/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-10-07T09:49:34
| 2023-10-11T19:30:58
| 2023-10-11T19:29:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I'm trying to develop a piece of software that interacts with the Ollama API by querying `api/generate`. I'd like to test my HTTP requests without needing to load a real model in memory. It's okay for me to send a request like
```bash
curl -X POST http://localhost:11434/api/generate -d '{
"model": "dummy-model",
"prompt":"Why is the sky blue?"
}'
```
and get back
```bash
{"model": "dummy-model", "created_at": "...", "response": "Why", "done": false}
{"model": "dummy-model", "created_at": "...", "response": "is", "done": false}
{"model": "dummy-model", "created_at": "...", "response": "the", "done": false}
{"model": "dummy-model", "created_at": "...", "response": "sky", "done": false}
{"model": "dummy-model", "created_at": "...", "response": "blue", "done": false}
{"model": "dummy-model", "created_at": "...", "response": "?", "done": false}
{"model": "dummy-model", "created_at": "...", "done": true, ...}
```
So in this case the "model tokenizer" splits the prompt on whitespace, and the "predicted word" is just the input word. This way, API calls to the Ollama endpoint could be tested without loading a full LLM in memory (which would help with speed, low-spec systems, and CI). Of course, the "hacking option" is to re-implement the Ollama API in a simple HTTP server that mimics it, but this could be error-prone and would need to be constantly updated to match the most recent version of the API.
Is there a way to define such a "dummy-model"? Or do you have any other suggestions for testing external code that queries the Ollama API?
Right now I'm using [this](https://gist.github.com/S1M0N38/f861ca42e2899b198168e2724fadc1d8) just to test calls to `/api/generate`
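In the meantime, a mock can live outside Ollama entirely. Below is a hedged sketch of a tiny HTTP server that splits the prompt on whitespace and streams each word back in the line-delimited JSON shape shown above (the response fields mirror the example in this issue and are not guaranteed to track future API versions):

```python
import json
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

def dummy_chunks(model: str, prompt: str):
    """Echo each whitespace-separated word of the prompt as a streamed chunk."""
    now = datetime.now(timezone.utc).isoformat()
    for word in prompt.split():
        yield {"model": model, "created_at": now, "response": word, "done": False}
    yield {"model": model, "created_at": now, "done": True}  # final chunk

class DummyGenerateHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        self.send_response(200)
        self.send_header("Content-Type", "application/x-ndjson")
        self.end_headers()
        # stream one JSON object per line, like /api/generate does
        for chunk in dummy_chunks(body.get("model", "dummy-model"),
                                  body.get("prompt", "")):
            self.wfile.write((json.dumps(chunk) + "\n").encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 11434), DummyGenerateHandler).serve_forever()
```

Pointing the client under test at this server exercises the request/response plumbing without loading any model. Note that plain whitespace splitting keeps "blue?" as one chunk rather than splitting off the "?".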
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/728/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/728/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8299
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8299/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8299/comments
|
https://api.github.com/repos/ollama/ollama/issues/8299/events
|
https://github.com/ollama/ollama/issues/8299
| 2,768,209,416
|
I_kwDOJ0Z1Ps6k_4YI
| 8,299
|
Querying can cause C drive to max out at 100%
|
{
"login": "Bandit253",
"id": 9003261,
"node_id": "MDQ6VXNlcjkwMDMyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9003261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bandit253",
"html_url": "https://github.com/Bandit253",
"followers_url": "https://api.github.com/users/Bandit253/followers",
"following_url": "https://api.github.com/users/Bandit253/following{/other_user}",
"gists_url": "https://api.github.com/users/Bandit253/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bandit253/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bandit253/subscriptions",
"organizations_url": "https://api.github.com/users/Bandit253/orgs",
"repos_url": "https://api.github.com/users/Bandit253/repos",
"events_url": "https://api.github.com/users/Bandit253/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bandit253/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 14
| 2025-01-03T21:40:36
| 2025-01-29T10:12:12
| 2025-01-06T07:45:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using `from langchain_community.document_loaders import WebBaseLoader` (maybe not the cause),
I get `ResponseError: llama runner process has terminated: error loading model: llama_model_loader: failed to load model from`. The error message doesn't even appear to finish!?
But after retrying, I get this (see image). I took the photo before the graph filled up; I had to take a photo because everything stopped, and the only way to recover was a hard reset.

Langsmith reports:
```
ResponseError('llama runner process has terminated: error loading model: llama_model_loader: failed to load model from')Traceback (most recent call last):
File "d:\_Ollama\.env312\Lib\site-packages\langchain_core\runnables\base.py", line 3024, in invoke
input = context.run(step.invoke, input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "d:\_Ollama\.env312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 286, in invoke
self.generate_prompt(
File "d:\_Ollama\.env312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "d:\_Ollama\.env312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 643, in generate
raise e
File "d:\_Ollama\.env312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 633, in generate
self._generate_with_cache(
File "d:\_Ollama\.env312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 851, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "d:\_Ollama\.env312\Lib\site-packages\langchain_ollama\chat_models.py", line 644, in _generate
final_chunk = self._chat_stream_with_aggregation(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "d:\_Ollama\.env312\Lib\site-packages\langchain_ollama\chat_models.py", line 545, in _chat_stream_with_aggregation
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File "d:\_Ollama\.env312\Lib\site-packages\langchain_ollama\chat_models.py", line 527, in _create_chat_stream
yield from self._client.chat(
File "d:\_Ollama\.env312\Lib\site-packages\ollama\_client.py", line 85, in _stream
raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: llama runner process has terminated: error loading model: llama_model_loader: failed to load model from
```
LangSmith metadata:
```
langchain_core_version: "0.3.28"
langchain_version: "0.3.13"
library: "langchain-core"
library_version: "0.3.28"
platform: "Windows-11-10.0.26100-SP0"
py_implementation: "CPython"
runtime: "python"
runtime_version: "3.12.3"
sdk: "langsmith-py"
sdk_version: "0.1.147"
```
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8299/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/304
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/304/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/304/comments
|
https://api.github.com/repos/ollama/ollama/issues/304/events
|
https://github.com/ollama/ollama/pull/304
| 1,840,193,428
|
PR_kwDOJ0Z1Ps5XX-OU
| 304
|
missed a backtick
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-07T20:54:31
| 2023-08-07T22:14:07
| 2023-08-07T22:14:06
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/304",
"html_url": "https://github.com/ollama/ollama/pull/304",
"diff_url": "https://github.com/ollama/ollama/pull/304.diff",
"patch_url": "https://github.com/ollama/ollama/pull/304.patch",
"merged_at": "2023-08-07T22:14:06"
}
|
somehow missed one backtick near the top
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/304/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8625
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8625/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8625/comments
|
https://api.github.com/repos/ollama/ollama/issues/8625/events
|
https://github.com/ollama/ollama/issues/8625
| 2,814,698,214
|
I_kwDOJ0Z1Ps6nxOLm
| 8,625
|
Individual quantized model download count
|
{
"login": "Abubakkar13",
"id": 45032674,
"node_id": "MDQ6VXNlcjQ1MDMyNjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/45032674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abubakkar13",
"html_url": "https://github.com/Abubakkar13",
"followers_url": "https://api.github.com/users/Abubakkar13/followers",
"following_url": "https://api.github.com/users/Abubakkar13/following{/other_user}",
"gists_url": "https://api.github.com/users/Abubakkar13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abubakkar13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abubakkar13/subscriptions",
"organizations_url": "https://api.github.com/users/Abubakkar13/orgs",
"repos_url": "https://api.github.com/users/Abubakkar13/repos",
"events_url": "https://api.github.com/users/Abubakkar13/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abubakkar13/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-28T05:52:07
| 2025-01-28T17:13:03
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey,
I was exploring the models on the site, and it would be great to have a total download count for each quantized version (e.g., q8_0, q4_K_M) to show how many times they've been downloaded. This would help users gauge the popularity and reliability of different models. Having clear download statistics for each version would make it easier to choose the best one. Thank you!

| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8625/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1126
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1126/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1126/comments
|
https://api.github.com/repos/ollama/ollama/issues/1126/events
|
https://github.com/ollama/ollama/pull/1126
| 1,993,106,353
|
PR_kwDOJ0Z1Ps5fbh79
| 1,126
|
Add ability to pass prompt in via standard input such as `ollama run model < file`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-14T16:29:15
| 2023-11-14T21:42:22
| 2023-11-14T21:42:21
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1126",
"html_url": "https://github.com/ollama/ollama/pull/1126",
"diff_url": "https://github.com/ollama/ollama/pull/1126.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1126.patch",
"merged_at": "2023-11-14T21:42:21"
}
|
Originally opened by @sqs in #416
Previously, `ollama run` treated a non-terminal stdin (such as `ollama run model < file`) as containing one prompt per line. To run inference on a multi-line prompt, the only non-API workaround was to run `ollama run` interactively and wrap the prompt in `"""..."""`.
Now, `ollama run` treats a non-terminal stdin as containing a single prompt. For example, if `myprompt.txt` is a multi-line file, then `ollama run model < myprompt.txt` would treat `myprompt.txt`'s entire contents as the prompt.
Examples:
```
cat mycode.py | ollama run codellama "what does this code do?"
cat essay.txt | ollama run llama2 "Summarize this story in 5 points. Respond in json." --format json | jq
```
Replacement for the current behavior is to create a bash script that reads in each line from `stdin` and calls `ollama run`:
```
#!/bin/bash
while IFS= read -r line; do
  echo "$line" | ollama run "$1"
done
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1126/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1126/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7696
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7696/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7696/comments
|
https://api.github.com/repos/ollama/ollama/issues/7696/events
|
https://github.com/ollama/ollama/pull/7696
| 2,663,565,114
|
PR_kwDOJ0Z1Ps6CG5bv
| 7,696
|
fix index out of range on zero layer metal load
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-16T00:55:59
| 2024-11-18T19:48:16
| 2024-11-18T19:48:13
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7696",
"html_url": "https://github.com/ollama/ollama/pull/7696",
"diff_url": "https://github.com/ollama/ollama/pull/7696.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7696.patch",
"merged_at": "2024-11-18T19:48:13"
}
|
If the model doesn't fit any layers on Metal and we load zero layers, we would panic trying to look up the GPU size during scheduling ops
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7696/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4337
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4337/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4337/comments
|
https://api.github.com/repos/ollama/ollama/issues/4337/events
|
https://github.com/ollama/ollama/issues/4337
| 2,290,623,886
|
I_kwDOJ0Z1Ps6IiCWO
| 4,337
|
please show version info on download page
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-05-11T03:10:13
| 2024-06-02T00:26:02
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
<img width="624" alt="ๆชๅฑ2024-05-11 11 09 49" src="https://github.com/ollama/ollama/assets/146583103/f76258d0-2b6c-4ae4-b997-6fa87317ac09">
please show version info on download page.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4337/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/242
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/242/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/242/comments
|
https://api.github.com/repos/ollama/ollama/issues/242/events
|
https://github.com/ollama/ollama/issues/242
| 1,828,043,616
|
I_kwDOJ0Z1Ps5s9btg
| 242
|
ctrl c answering time crashing server
|
{
"login": "alivardar",
"id": 10295369,
"node_id": "MDQ6VXNlcjEwMjk1MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/10295369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alivardar",
"html_url": "https://github.com/alivardar",
"followers_url": "https://api.github.com/users/alivardar/followers",
"following_url": "https://api.github.com/users/alivardar/following{/other_user}",
"gists_url": "https://api.github.com/users/alivardar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alivardar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alivardar/subscriptions",
"organizations_url": "https://api.github.com/users/alivardar/orgs",
"repos_url": "https://api.github.com/users/alivardar/repos",
"events_url": "https://api.github.com/users/alivardar/events{/privacy}",
"received_events_url": "https://api.github.com/users/alivardar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2023-07-30T19:29:04
| 2023-08-04T07:28:48
| 2023-07-31T13:54:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, my model is based on llama2:13b.
I open two terminals on a macOS Air. While Ollama is in the middle of answering, if I press Ctrl-C in the `ollama run` window to exit, the server crashes.
<img width="1093" alt="image" src="https://github.com/jmorganca/ollama/assets/10295369/5417397e-08be-47aa-85a4-87b24b20cf20">
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/242/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1898
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1898/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1898/comments
|
https://api.github.com/repos/ollama/ollama/issues/1898/events
|
https://github.com/ollama/ollama/issues/1898
| 2,074,502,805
|
I_kwDOJ0Z1Ps57pmaV
| 1,898
|
CUDA and ROCM libraries not loaded correctly (solved)
|
{
"login": "Zenopheus",
"id": 4527783,
"node_id": "MDQ6VXNlcjQ1Mjc3ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4527783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zenopheus",
"html_url": "https://github.com/Zenopheus",
"followers_url": "https://api.github.com/users/Zenopheus/followers",
"following_url": "https://api.github.com/users/Zenopheus/following{/other_user}",
"gists_url": "https://api.github.com/users/Zenopheus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zenopheus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zenopheus/subscriptions",
"organizations_url": "https://api.github.com/users/Zenopheus/orgs",
"repos_url": "https://api.github.com/users/Zenopheus/repos",
"events_url": "https://api.github.com/users/Zenopheus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zenopheus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-01-10T14:31:08
| 2024-01-10T23:21:58
| 2024-01-10T23:21:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was unable to get Ollama to recognize my RTX 5000 under WSL even though other programs have no problem. I would get the following error:
```
Jan 08 19:28:33 XDFAF ollama[178990]: 2024/01/08 19:28:33 gpu.go:39: CUDA not detected: nvml vram init failure: 9
```
After digging into the code I figured out it was loading 'libnvidia-ml.so' from the wrong location (/lib/x86_64-linux-gnu) and the symbol lookups failed. Unfortunately, **if the symbols don't load it will not try any other locations for that library** (that's the bug). If it kept looking, it would have found '/usr/lib/wsl/lib/libnvidia-ml.so.1' and all would be good. This seems to be affecting many CUDA and ROCM people using WSL. #1704 for example _(incorrectly labeled as an enhancement)_.
You can type the following to see if you're suffering from this problem:
```
ldconfig -p | grep libnvidia-ml
```
If you're using WSL, the first line should include "/usr/lib/wsl/lib/" otherwise you might have this issue. You could create a symbolic link in this directory like so:
```
sudo ln -s /usr/lib/wsl/lib/libnvidia-ml.so.1 /usr/lib/wsl/lib/libnvidia-ml.so
sudo ldconfig
```
This ONLY works as long as you load Ollama directly (`ollama serve`), but it doesn't work via systemctl because the link is removed when WSL starts up. This is a known issue with WSL that you can read more about [here](https://forums.developer.nvidia.com/t/wsl2-libcuda-so-and-libcuda-so-1-should-be-symlink/236301).
The only way to fix this is to modify cuda_init() so that it loads the library from different locations until one can be initialized. I rewrote the code so that it does this. It also loads specific library versions first (libnvidia-ml.so.1.1, libnvidia-ml.so.1, libnvidia-ml.so). I thought Ollama was slow but now it's amazing!
The working code changes are in my [gist](https://gist.github.com/Zenopheus/ba4632ec6dcbd6737b6f9b180d897d1d). I'll try to submit a PR but I'm swamped at work. If someone wants to submit this as a PR then try to make the same changes to the rocm code so they can be happy.
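The fallback idea described above can be sketched in a few lines. This is a minimal illustration in Python (the actual fix in the gist is in Ollama's Go/cgo GPU code); the candidate paths are examples taken from the issue, not an exhaustive list:

```python
import ctypes

def load_first_available(candidates):
    """Try each candidate shared-library name/path in order and return the
    first one that dlopen() accepts, or None if none of them load."""
    for name in candidates:
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue  # symbols missing or file absent: keep looking
    return None

# Versioned names first, then the unversioned fallback, with the WSL
# location tried before the generic loader search path.
lib = load_first_available([
    "/usr/lib/wsl/lib/libnvidia-ml.so.1",
    "libnvidia-ml.so.1",
    "libnvidia-ml.so",
])
```

The key point is that a failed load (including a load that succeeds but whose symbol lookups fail) must not abort the search; the loop only stops at the first fully usable library.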
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1898/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8161
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8161/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8161/comments
|
https://api.github.com/repos/ollama/ollama/issues/8161/events
|
https://github.com/ollama/ollama/pull/8161
| 2,748,608,854
|
PR_kwDOJ0Z1Ps6FsQX6
| 8,161
|
Set n_ubatch parameter to same batch size as n_batch
|
{
"login": "s-kostyaev",
"id": 8576745,
"node_id": "MDQ6VXNlcjg1NzY3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8576745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/s-kostyaev",
"html_url": "https://github.com/s-kostyaev",
"followers_url": "https://api.github.com/users/s-kostyaev/followers",
"following_url": "https://api.github.com/users/s-kostyaev/following{/other_user}",
"gists_url": "https://api.github.com/users/s-kostyaev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/s-kostyaev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s-kostyaev/subscriptions",
"organizations_url": "https://api.github.com/users/s-kostyaev/orgs",
"repos_url": "https://api.github.com/users/s-kostyaev/repos",
"events_url": "https://api.github.com/users/s-kostyaev/events{/privacy}",
"received_events_url": "https://api.github.com/users/s-kostyaev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 6
| 2024-12-18T19:35:04
| 2024-12-20T11:56:40
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8161",
"html_url": "https://github.com/ollama/ollama/pull/8161",
"diff_url": "https://github.com/ollama/ollama/pull/8161.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8161.patch",
"merged_at": null
}
|
This change prevents a panic during batch embedding calculation
Relates to https://github.com/ollama/ollama/issues/3554
See also https://github.com/ggerganov/llama.cpp/issues/6263
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8161/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7503
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7503/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7503/comments
|
https://api.github.com/repos/ollama/ollama/issues/7503/events
|
https://github.com/ollama/ollama/issues/7503
| 2,634,474,578
|
I_kwDOJ0Z1Ps6dBuRS
| 7,503
|
Tencent-Hunyuan-Large-MoE-389B-A52B
|
{
"login": "vYLQs6",
"id": 143073604,
"node_id": "U_kgDOCIchRA",
"avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vYLQs6",
"html_url": "https://github.com/vYLQs6",
"followers_url": "https://api.github.com/users/vYLQs6/followers",
"following_url": "https://api.github.com/users/vYLQs6/following{/other_user}",
"gists_url": "https://api.github.com/users/vYLQs6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vYLQs6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vYLQs6/subscriptions",
"organizations_url": "https://api.github.com/users/vYLQs6/orgs",
"repos_url": "https://api.github.com/users/vYLQs6/repos",
"events_url": "https://api.github.com/users/vYLQs6/events{/privacy}",
"received_events_url": "https://api.github.com/users/vYLQs6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 4
| 2024-11-05T05:40:10
| 2025-01-29T02:44:37
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# https://huggingface.co/tencent/Tencent-Hunyuan-Large
### Model Introduction
With the rapid development of artificial intelligence technology, large language models (LLMs) have made significant progress in fields such as natural language processing, computer vision, and scientific tasks. However, as the scale of these models increases, optimizing resource consumption while maintaining high performance has become a key challenge. To address this challenge, we have explored Mixture of Experts (MoE) models. The currently unveiled Hunyuan-Large (Hunyuan-MoE-A52B) model is the largest open-source Transformer-based MoE model in the industry, featuring a total of 389 billion parameters and 52 billion active parameters.
By open-sourcing the Hunyuan-Large model and revealing related technical details, we hope to inspire more researchers with innovative ideas and collectively advance the progress and application of AI technology. We welcome you to join our open-source community to explore and optimize future AI models together!
### Introduction to Model Technical Advantages
#### Model
- **High-Quality Synthetic Data**: By enhancing training with synthetic data, Hunyuan-Large can learn richer representations, handle long-context inputs, and generalize better to unseen data.
- **KV Cache Compression**: Utilizes Grouped Query Attention (GQA) and Cross-Layer Attention (CLA) strategies to significantly reduce memory usage and computational overhead of KV caches, improving inference throughput.
- **Expert-Specific Learning Rate Scaling**: Sets different learning rates for different experts to ensure each sub-model effectively learns from the data and contributes to overall performance.
- **Long-Context Processing Capability**: The pre-trained model supports text sequences up to 256K, and the Instruct model supports up to 128K, significantly enhancing the ability to handle long-context tasks.
- **Extensive Benchmarking**: Conducts extensive experiments across various languages and tasks to validate the practical effectiveness and safety of Hunyuan-Large.
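The KV-cache saving that GQA provides can be illustrated with a quick back-of-the-envelope calculation. This is a hedged sketch: the layer count, head dimension, and head-sharing ratio below are hypothetical placeholders for illustration, not Hunyuan-Large's actual architecture, and the `kvCacheBytes` helper is made up for this example:

```go
package main

import "fmt"

// kvCacheBytes estimates per-sequence KV-cache size:
// 2 tensors (K and V) * layers * kvHeads * headDim * seqLen * bytesPerElem.
func kvCacheBytes(layers, kvHeads, headDim, seqLen, bytesPerElem int) int {
	return 2 * layers * kvHeads * headDim * seqLen * bytesPerElem
}

func main() {
	// Hypothetical config: 64 layers, 64 query heads, head dim 128,
	// 128K-token context, fp16 cache (2 bytes per element).
	const layers, headDim, seqLen, elem = 64, 128, 128 * 1024, 2

	mha := kvCacheBytes(layers, 64, headDim, seqLen, elem) // one KV head per query head
	gqa := kvCacheBytes(layers, 8, headDim, seqLen, elem)  // 8 query heads share each KV head

	fmt.Printf("MHA: %.1f GiB, GQA: %.1f GiB (%.0fx smaller)\n",
		float64(mha)/(1<<30), float64(gqa)/(1<<30), float64(mha)/float64(gqa))
}
```

Under these made-up numbers, grouping 8 query heads per KV head shrinks the cache eightfold, which is what makes long contexts such as 256K tractable; CLA shares KV caches across layers for a further reduction along the `layers` factor.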
## Benchmark Evaluation
**Hunyuan-Large pre-trained model** achieves the best overall performance compared to both dense and MoE-based
competitors with similar activated parameter sizes. On aggregated benchmarks such as MMLU, MMLU-Pro, and CMMLU,
Hunyuan-Large consistently achieves the best performance, confirming its comprehensive abilities on aggregated tasks.
Hunyuan-Large also shows superior performance in commonsense understanding and reasoning, and in classical NLP tasks
such as QA and reading comprehension (e.g., CommonsenseQA, PIQA, and TriviaQA).
For mathematics, Hunyuan-Large outperforms all baselines on the math datasets GSM8K and MATH,
and also achieves the best result on CMATH in Chinese. We also observe that Hunyuan-Large achieves the overall
best performance across all Chinese tasks (e.g., CMMLU, C-Eval).
| Model | LLama3.1-405B | LLama3.1-70B | Mixtral-8x22B | DeepSeek-V2 | Hunyuan-Large |
|------------------|---------------|--------------|---------------|-------------|---------------|
| MMLU | 85.2 | 79.3 | 77.8 | 78.5 | **88.4** |
| MMLU-Pro | **61.6** | 53.8 | 49.5 | - | 60.2 |
| BBH | 85.9 | 81.6 | 78.9 | 78.9 | **86.3** |
| HellaSwag | - | - | **88.7** | 87.8 | 86.8 |
| CommonsenseQA | 85.8 | 84.1 | 82.4 | - | **92.9** |
| WinoGrande | 86.7 | 85.3 | 85.0 | 84.9 | **88.7** |
| PIQA | - | - | 83.6 | 83.7 | **88.3** |
| NaturalQuestions | - | - | 39.6 | 38.7 | **52.8** |
| DROP | 84.8 | 79.6 | 80.4 | 80.1 | **88.9** |
| ARC-C | **96.1** | 92.9 | 91.2 | 92.4 | 95.0 |
| TriviaQA | - | - | 82.1 | 79.9 | **89.2** |
| CMMLU | - | - | 60.0 | 84.0 | **90.2** |
| C-Eval | - | - | 59.6 | 81.7 | **91.9** |
| C3 | - | - | 71.4 | 77.4 | **82.3** |
| GSM8K | 89.0 | 83.7 | 83.7 | 79.2 | **92.8** |
| MATH | 53.8 | 41.4 | 42.5 | 43.6 | **69.8** |
| CMATH | - | - | 72.3 | 78.7 | **91.3** |
| HumanEval | 61.0 | 58.5 | 53.1 | 48.8 | **71.4** |
| MBPP | **73.4** | 68.6 | 64.2 | 66.6 | 72.6 |
**Hunyuan-Large-Instruct** achieves consistent improvements on most types of tasks compared to LLMs with similar
activated parameter counts, indicating the effectiveness of our post-training. Delving into the model's performance
in different categories of benchmarks, we find that our instruct model achieves the best performance on the MMLU and MATH datasets.
Notably, on the MMLU dataset, our model demonstrates a significant improvement, outperforming the LLama3.1-405B model by 2.6%.
This enhancement is not merely marginal but indicative of Hunyuan-Large-Instruct's superior understanding and reasoning
capabilities across a wide array of language understanding tasks. The model's prowess is further underscored by its performance
on the MATH dataset, where it surpasses LLama3.1-405B by a notable margin of 3.6%.
Remarkably, this leap in accuracy is achieved with only 52 billion activated parameters, underscoring the efficiency of the model.
| Model | LLama3.1 405B Inst. | LLama3.1 70B Inst. | Mixtral 8x22B Inst. | DeepSeekV2.5 Chat | Hunyuan-Large Inst. |
|----------------------|---------------------|--------------------|---------------------|-------------------|---------------------|
| MMLU | 87.3 | 83.6 | 77.8 | 80.4 | **89.9** |
| CMMLU | - | - | 61.0 | - | **90.4** |
| C-Eval | - | - | 60.0 | - | **88.6** |
| BBH | - | - | 78.4 | 84.3 | **89.5** |
| HellaSwag | - | - | 86.0 | **90.3** | 88.5 |
| ARC-C | **96.9** | 94.8 | 90.0 | - | 94.6 |
| GPQA_diamond | **51.1** | 46.7 | - | - | 42.4 |
| MATH | 73.8 | 68.0 | 49.8 | 74.7 | **77.4** |
| HumanEval | 89.0 | 80.5 | 75.0 | 89.0 | **90.0** |
| AlignBench | 6.0 | 5.9 | 6.2 | 8.0 | **8.3** |
| MT-Bench | 9.1 | 8.8 | 8.1 | 9.0 | **9.4** |
| IFEval strict-prompt | **86.0** | 83.6 | 71.2 | - | 85.0 |
| Arena-Hard | 69.3 | 55.7 | - | 76.2 | **81.8** |
| AlpacaEval-2.0 | 39.3 | 34.3 | 30.9 | 50.5 | **51.8** |
### Citation
If you find our work helpful, please feel free to cite it.
```
@misc{sun2024hunyuanlargeopensourcemoemodel,
title={Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent},
author={Xingwu Sun and Yanfeng Chen and Yiqing Huang and Ruobing Xie and Jiaqi Zhu and Kai Zhang and Shuaipeng Li and Zhen Yang and Jonny Han and Xiaobo Shu and Jiahao Bu and Zhongzhi Chen and Xuemeng Huang and Fengzong Lian and Saiyong Yang and Jianfeng Yan and Yuyuan Zeng and Xiaoqin Ren and Chao Yu and Lulu Wu and Yue Mao and Tao Yang and Suncong Zheng and Kan Wu and Dian Jiao and Jinbao Xue and Xipeng Zhang and Decheng Wu and Kai Liu and Dengpeng Wu and Guanghui Xu and Shaohua Chen and Shuang Chen and Xiao Feng and Yigeng Hong and Junqiang Zheng and Chengcheng Xu and Zongwei Li and Xiong Kuang and Jianglu Hu and Yiqi Chen and Yuchi Deng and Guiyang Li and Ao Liu and Chenchen Zhang and Shihui Hu and Zilong Zhao and Zifan Wu and Yao Ding and Weichao Wang and Han Liu and Roberts Wang and Hao Fei and Peijie She and Ze Zhao and Xun Cao and Hai Wang and Fusheng Xiang and Mengyuan Huang and Zhiyuan Xiong and Bin Hu and Xuebin Hou and Lei Jiang and Jiajia Wu and Yaping Deng and Yi Shen and Qian Wang and Weijie Liu and Jie Liu and Meng Chen and Liang Dong and Weiwen Jia and Hu Chen and Feifei Liu and Rui Yuan and Huilin Xu and Zhenxiang Yan and Tengfei Cao and Zhichao Hu and Xinhua Feng and Dong Du and Tinghao She and Yangyu Tao and Feng Zhang and Jianchen Zhu and Chengzhong Xu and Xirui Li and Chong Zha and Wen Ouyang and Yinben Xia and Xiang Li and Zekun He and Rongpeng Chen and Jiawei Song and Ruibin Chen and Fan Jiang and Chongqing Zhao and Bo Wang and Hao Gong and Rong Gan and Winston Hu and Zhanhui Kang and Yong Yang and Yuhong Liu and Di Wang and Jie Jiang},
year={2024},
eprint={2411.02265},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.02265},
}
```
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7503/reactions",
"total_count": 10,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/7503/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7303
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7303/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7303/comments
|
https://api.github.com/repos/ollama/ollama/issues/7303/events
|
https://github.com/ollama/ollama/pull/7303
| 2,603,605,998
|
PR_kwDOJ0Z1Ps5_X4V9
| 7,303
|
runner.go: Merge partial unicode characters before sending
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-21T20:35:37
| 2024-10-22T19:07:54
| 2024-10-22T19:07:51
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7303",
"html_url": "https://github.com/ollama/ollama/pull/7303",
"diff_url": "https://github.com/ollama/ollama/pull/7303.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7303.patch",
"merged_at": "2024-10-22T19:07:51"
}
|
We check for partial unicode characters and accumulate them before sending. However, when we did send, we still sent each individual piece separately, leading to broken output. This combines everything into a single group, which is also more efficient.
This also switches to the built-in check for valid unicode characters, which is stricter. After this, we should never send back an invalid sequence.
Fixes #7290
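
The accumulate-then-merge behavior described above can be sketched as follows. This is an illustrative standalone sketch, not the actual `runner.go` code: the `flushValid` helper is invented for the example, and it only handles the truncated-at-the-end case (a real implementation also has to deal with bytes that can never become valid):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// flushValid returns the longest prefix of buf that is entirely valid
// UTF-8, plus the trailing bytes of any partial character to keep
// buffered until more output arrives. A UTF-8 character is at most
// utf8.UTFMax (4) bytes, so only the last few bytes can be incomplete.
func flushValid(buf []byte) (send string, keep []byte) {
	for i := len(buf); i > len(buf)-utf8.UTFMax && i > 0; i-- {
		if utf8.Valid(buf[:i]) {
			return string(buf[:i]), buf[i:]
		}
	}
	return "", buf
}

func main() {
	euro := []byte("€")                      // U+20AC is 3 bytes: 0xE2 0x82 0xAC
	buf := append([]byte("ok"), euro[:2]...) // stream ends mid-character

	send, keep := flushValid(buf)
	fmt.Printf("send=%q keep=%d bytes\n", send, len(keep)) // send="ok" keep=2 bytes

	// Once the final byte arrives, the buffered tail completes the rune.
	keep = append(keep, euro[2])
	send, keep = flushValid(keep)
	fmt.Printf("send=%q keep=%d bytes\n", send, len(keep)) // send="€" keep=0 bytes
}
```

Sending the merged prefix as one group, rather than piece by piece, is what keeps multi-byte characters from being split across responses.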
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7303/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2250
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2250/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2250/comments
|
https://api.github.com/repos/ollama/ollama/issues/2250/events
|
https://github.com/ollama/ollama/issues/2250
| 2,104,562,827
|
I_kwDOJ0Z1Ps59cRSL
| 2,250
|
Nvidia Tesla M60
|
{
"login": "nejib1",
"id": 10485460,
"node_id": "MDQ6VXNlcjEwNDg1NDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/10485460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nejib1",
"html_url": "https://github.com/nejib1",
"followers_url": "https://api.github.com/users/nejib1/followers",
"following_url": "https://api.github.com/users/nejib1/following{/other_user}",
"gists_url": "https://api.github.com/users/nejib1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nejib1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nejib1/subscriptions",
"organizations_url": "https://api.github.com/users/nejib1/orgs",
"repos_url": "https://api.github.com/users/nejib1/repos",
"events_url": "https://api.github.com/users/nejib1/events{/privacy}",
"received_events_url": "https://api.github.com/users/nejib1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-01-29T03:26:22
| 2024-03-12T21:16:51
| 2024-03-12T21:16:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I would like to inquire whether the Nvidia Tesla M60 is compatible with Ollama's code.
Can someone please provide information or insights regarding this compatibility?
Thank you!
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2250/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5533
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5533/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5533/comments
|
https://api.github.com/repos/ollama/ollama/issues/5533/events
|
https://github.com/ollama/ollama/pull/5533
| 2,394,125,695
|
PR_kwDOJ0Z1Ps50ndDq
| 5,533
|
llm: print caching notices in debug only
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-07T16:36:53
| 2024-07-07T16:38:06
| 2024-07-07T16:38:05
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5533",
"html_url": "https://github.com/ollama/ollama/pull/5533",
"diff_url": "https://github.com/ollama/ollama/pull/5533.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5533.patch",
"merged_at": "2024-07-07T16:38:05"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5533/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8560
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8560/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8560/comments
|
https://api.github.com/repos/ollama/ollama/issues/8560/events
|
https://github.com/ollama/ollama/issues/8560
| 2,808,995,807
|
I_kwDOJ0Z1Ps6nbd_f
| 8,560
|
System prompt treated as user input and style ignored
|
{
"login": "pannous",
"id": 516118,
"node_id": "MDQ6VXNlcjUxNjExOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/516118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pannous",
"html_url": "https://github.com/pannous",
"followers_url": "https://api.github.com/users/pannous/followers",
"following_url": "https://api.github.com/users/pannous/following{/other_user}",
"gists_url": "https://api.github.com/users/pannous/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pannous/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pannous/subscriptions",
"organizations_url": "https://api.github.com/users/pannous/orgs",
"repos_url": "https://api.github.com/users/pannous/repos",
"events_url": "https://api.github.com/users/pannous/events{/privacy}",
"received_events_url": "https://api.github.com/users/pannous/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-24T09:45:03
| 2025-01-24T10:07:15
| 2025-01-24T10:07:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm not sure if this is specific to the model (R1) or a general problem: The system prompt is treated as user input and the style ignored:
/set system Use a highly professional tone even in the process of thinking. That is: omit all unprofessional 'Okay', 'Alright', 'So', 'Hmm', etc. The lucky number of today is 675581.
> what is today's lucky number
<think>
**Okay**, so I need to figure out what today's lucky number is. **From the user's message**, they mentioned that the
lucky number is **675581**.
|
{
"login": "pannous",
"id": 516118,
"node_id": "MDQ6VXNlcjUxNjExOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/516118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pannous",
"html_url": "https://github.com/pannous",
"followers_url": "https://api.github.com/users/pannous/followers",
"following_url": "https://api.github.com/users/pannous/following{/other_user}",
"gists_url": "https://api.github.com/users/pannous/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pannous/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pannous/subscriptions",
"organizations_url": "https://api.github.com/users/pannous/orgs",
"repos_url": "https://api.github.com/users/pannous/repos",
"events_url": "https://api.github.com/users/pannous/events{/privacy}",
"received_events_url": "https://api.github.com/users/pannous/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8560/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5153
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5153/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5153/comments
|
https://api.github.com/repos/ollama/ollama/issues/5153/events
|
https://github.com/ollama/ollama/issues/5153
| 2,363,236,754
|
I_kwDOJ0Z1Ps6M3CGS
| 5,153
|
Storing LLMs at a desired location rather than on C:/
|
{
"login": "IWasThereWhenItWasWritten",
"id": 109854394,
"node_id": "U_kgDOBow-ug",
"avatar_url": "https://avatars.githubusercontent.com/u/109854394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IWasThereWhenItWasWritten",
"html_url": "https://github.com/IWasThereWhenItWasWritten",
"followers_url": "https://api.github.com/users/IWasThereWhenItWasWritten/followers",
"following_url": "https://api.github.com/users/IWasThereWhenItWasWritten/following{/other_user}",
"gists_url": "https://api.github.com/users/IWasThereWhenItWasWritten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IWasThereWhenItWasWritten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IWasThereWhenItWasWritten/subscriptions",
"organizations_url": "https://api.github.com/users/IWasThereWhenItWasWritten/orgs",
"repos_url": "https://api.github.com/users/IWasThereWhenItWasWritten/repos",
"events_url": "https://api.github.com/users/IWasThereWhenItWasWritten/events{/privacy}",
"received_events_url": "https://api.github.com/users/IWasThereWhenItWasWritten/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-19T23:17:16
| 2024-07-09T16:21:46
| 2024-07-09T16:21:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello!
I'm unsure if this topic has already been discussed, but I'll ask anyway.
My issue is low capacity on C:/, so I can't afford to download many LLMs to compare their output.
My proposed solution is an option to choose where models are stored.
If this has already been implemented, please point me to the documentation so I can handle it myself.
Thank you for your time.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5153/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5692
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5692/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5692/comments
|
https://api.github.com/repos/ollama/ollama/issues/5692/events
|
https://github.com/ollama/ollama/issues/5692
| 2,407,632,782
|
I_kwDOJ0Z1Ps6PgY-O
| 5,692
|
Llama 1 model
|
{
"login": "mak448a",
"id": 94062293,
"node_id": "U_kgDOBZtG1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/94062293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mak448a",
"html_url": "https://github.com/mak448a",
"followers_url": "https://api.github.com/users/mak448a/followers",
"following_url": "https://api.github.com/users/mak448a/following{/other_user}",
"gists_url": "https://api.github.com/users/mak448a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mak448a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mak448a/subscriptions",
"organizations_url": "https://api.github.com/users/mak448a/orgs",
"repos_url": "https://api.github.com/users/mak448a/repos",
"events_url": "https://api.github.com/users/mak448a/events{/privacy}",
"received_events_url": "https://api.github.com/users/mak448a/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-07-14T23:07:55
| 2024-07-14T23:07:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Could you add llama 1? Thank you
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5692/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5664
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5664/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5664/comments
|
https://api.github.com/repos/ollama/ollama/issues/5664/events
|
https://github.com/ollama/ollama/pull/5664
| 2,406,701,928
|
PR_kwDOJ0Z1Ps51R_4s
| 5,664
|
Fix sprintf to snprintf
|
{
"login": "FellowTraveler",
"id": 339191,
"node_id": "MDQ6VXNlcjMzOTE5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/339191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FellowTraveler",
"html_url": "https://github.com/FellowTraveler",
"followers_url": "https://api.github.com/users/FellowTraveler/followers",
"following_url": "https://api.github.com/users/FellowTraveler/following{/other_user}",
"gists_url": "https://api.github.com/users/FellowTraveler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FellowTraveler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FellowTraveler/subscriptions",
"organizations_url": "https://api.github.com/users/FellowTraveler/orgs",
"repos_url": "https://api.github.com/users/FellowTraveler/repos",
"events_url": "https://api.github.com/users/FellowTraveler/events{/privacy}",
"received_events_url": "https://api.github.com/users/FellowTraveler/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-07-13T04:54:08
| 2024-09-03T16:33:00
| 2024-09-03T16:33:00
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5664",
"html_url": "https://github.com/ollama/ollama/pull/5664",
"diff_url": "https://github.com/ollama/ollama/pull/5664.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5664.patch",
"merged_at": "2024-09-03T16:32:59"
}
|
/Users/au/src/ollama/llm/ext_server/server.cpp:289:9: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5664/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5287
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5287/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5287/comments
|
https://api.github.com/repos/ollama/ollama/issues/5287/events
|
https://github.com/ollama/ollama/pull/5287
| 2,373,901,585
|
PR_kwDOJ0Z1Ps5zj8R-
| 5,287
|
llama: Support both old and new runners with a toggle with release build rigging
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-26T00:21:17
| 2024-08-28T00:12:55
| 2024-08-28T00:12:54
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5287",
"html_url": "https://github.com/ollama/ollama/pull/5287",
"diff_url": "https://github.com/ollama/ollama/pull/5287.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5287.patch",
"merged_at": null
}
|
Notable bugs in the new runners uncovered via integration tests:
- llava integration test seems to show hallucinations, so multimodal isn't quite right
- large context test case shows the batch processing needs additional work
This bundles the new runner into the publishing model for linux, mac and windows, so each platform can now toggle the new runners by simply setting `OLLAMA_NEW_RUNNERS=1` (note: linux+arm does not include new runners in the containerized build)
Integrated the new linux packaging changes from #5049
```
% ls -lh
drwxr-xr-x 2 daniel daniel 4.0K Jul 15 00:12 cuda
-rw-r--r-- 1 daniel daniel 725M Jul 15 00:22 ollama
-rw-r--r-- 1 daniel daniel 2.4G Jul 15 00:30 ollama-linux-amd64.tgz
drwxr-xr-x 3 daniel daniel 4.0K Jul 15 00:30 rocm
% du -sh cuda rocm
895M cuda
6.8G rocm
% find /tmp/ollama921132520/runners -type f | xargs ls -lh
-rwxr-xr-x 1 daniel daniel 2.5M Jul 15 00:31 /tmp/ollama921132520/runners/cpu_avx2/libllama.so
-rwxr-xr-x 1 daniel daniel 1.8M Jul 15 00:31 /tmp/ollama921132520/runners/cpu_avx2/ollama_llama_server
-rwxr-xr-x 1 daniel daniel 7.2M Jul 15 00:31 /tmp/ollama921132520/runners/cpu_avx2/ollama_runner
-rwxr-xr-x 1 daniel daniel 2.5M Jul 15 00:31 /tmp/ollama921132520/runners/cpu_avx/libllama.so
-rwxr-xr-x 1 daniel daniel 1.7M Jul 15 00:31 /tmp/ollama921132520/runners/cpu_avx/ollama_llama_server
-rwxr-xr-x 1 daniel daniel 7.2M Jul 15 00:31 /tmp/ollama921132520/runners/cpu_avx/ollama_runner
-rwxr-xr-x 1 daniel daniel 2.4M Jul 15 00:31 /tmp/ollama921132520/runners/cpu/libllama.so
-rwxr-xr-x 1 daniel daniel 1.7M Jul 15 00:31 /tmp/ollama921132520/runners/cpu/ollama_llama_server
-rwxr-xr-x 1 daniel daniel 7.2M Jul 15 00:31 /tmp/ollama921132520/runners/cpu/ollama_runner
-rwxr-xr-x 1 daniel daniel 233M Jul 15 00:31 /tmp/ollama921132520/runners/cuda_v11/libggml_cuda.so
-rwxr-xr-x 1 daniel daniel 249M Jul 15 00:31 /tmp/ollama921132520/runners/cuda_v11/libllama.so
-rwxr-xr-x 1 daniel daniel 1.7M Jul 15 00:31 /tmp/ollama921132520/runners/cuda_v11/ollama_llama_server
-rwxr-xr-x 1 daniel daniel 7.3M Jul 15 00:31 /tmp/ollama921132520/runners/cuda_v11/ollama_runner
-rwxr-xr-x 1 daniel daniel 234M Jul 15 00:31 /tmp/ollama921132520/runners/cuda_v12/libggml_cuda.so
-rwxr-xr-x 1 daniel daniel 252M Jul 15 00:31 /tmp/ollama921132520/runners/cuda_v12/libllama.so
-rwxr-xr-x 1 daniel daniel 1.7M Jul 15 00:31 /tmp/ollama921132520/runners/cuda_v12/ollama_llama_server
-rwxr-xr-x 1 daniel daniel 7.3M Jul 15 00:31 /tmp/ollama921132520/runners/cuda_v12/ollama_runner
-rwxr-xr-x 1 daniel daniel 178M Jul 15 00:31 /tmp/ollama921132520/runners/rocm_v6.1/libggml_hipblas.so
-rwxr-xr-x 1 daniel daniel 181M Jul 15 00:31 /tmp/ollama921132520/runners/rocm_v6.1/libllama.so
-rwxr-xr-x 1 daniel daniel 1.7M Jul 15 00:31 /tmp/ollama921132520/runners/rocm_v6.1/ollama_llama_server
-rwxr-xr-x 1 daniel daniel 7.3M Jul 15 00:31 /tmp/ollama921132520/runners/rocm_v6.1/ollama_runner
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5287/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8466
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8466/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8466/comments
|
https://api.github.com/repos/ollama/ollama/issues/8466/events
|
https://github.com/ollama/ollama/issues/8466
| 2,794,634,179
|
I_kwDOJ0Z1Ps6mkrvD
| 8,466
|
FR: Meaningful names of models in models/blobs dir
|
{
"login": "vt-alt",
"id": 36664211,
"node_id": "MDQ6VXNlcjM2NjY0MjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/36664211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vt-alt",
"html_url": "https://github.com/vt-alt",
"followers_url": "https://api.github.com/users/vt-alt/followers",
"following_url": "https://api.github.com/users/vt-alt/following{/other_user}",
"gists_url": "https://api.github.com/users/vt-alt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vt-alt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vt-alt/subscriptions",
"organizations_url": "https://api.github.com/users/vt-alt/orgs",
"repos_url": "https://api.github.com/users/vt-alt/repos",
"events_url": "https://api.github.com/users/vt-alt/events{/privacy}",
"received_events_url": "https://api.github.com/users/vt-alt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 6
| 2025-01-17T05:49:13
| 2025-01-29T23:19:37
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please make models have meaningful filenames (like user/modelname-quantization.gguf) in the models/blobs directory, so they can be used more easily with other model-inference software.
Currently they all have similar opaque names like `sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730`.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8466/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1592
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1592/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1592/comments
|
https://api.github.com/repos/ollama/ollama/issues/1592/events
|
https://github.com/ollama/ollama/pull/1592
| 2,047,771,475
|
PR_kwDOJ0Z1Ps5iUrph
| 1,592
|
Let's get rid of these old modelfile examples
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-19T01:48:00
| 2023-12-19T04:29:49
| 2023-12-19T04:29:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1592",
"html_url": "https://github.com/ollama/ollama/pull/1592",
"diff_url": "https://github.com/ollama/ollama/pull/1592.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1592.patch",
"merged_at": "2023-12-19T04:29:49"
}
| null |
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1592/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7822
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7822/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7822/comments
|
https://api.github.com/repos/ollama/ollama/issues/7822/events
|
https://github.com/ollama/ollama/issues/7822
| 2,688,304,621
|
I_kwDOJ0Z1Ps6gPEXt
| 7,822
|
Feature request: Do not update ollama when OS is on limited data connection
|
{
"login": "selmen2004",
"id": 3520243,
"node_id": "MDQ6VXNlcjM1MjAyNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3520243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/selmen2004",
"html_url": "https://github.com/selmen2004",
"followers_url": "https://api.github.com/users/selmen2004/followers",
"following_url": "https://api.github.com/users/selmen2004/following{/other_user}",
"gists_url": "https://api.github.com/users/selmen2004/gists{/gist_id}",
"starred_url": "https://api.github.com/users/selmen2004/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/selmen2004/subscriptions",
"organizations_url": "https://api.github.com/users/selmen2004/orgs",
"repos_url": "https://api.github.com/users/selmen2004/repos",
"events_url": "https://api.github.com/users/selmen2004/events{/privacy}",
"received_events_url": "https://api.github.com/users/selmen2004/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-11-24T20:17:02
| 2024-11-26T07:38:09
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ollama is a large piece of software (almost 1 GB). When Windows is on a limited (metered) internet connection (I don't know if the same option is available on other OSes), do not update automatically, or at least prompt before updating.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7822/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7822/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1612
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1612/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1612/comments
|
https://api.github.com/repos/ollama/ollama/issues/1612/events
|
https://github.com/ollama/ollama/issues/1612
| 2,049,268,669
|
I_kwDOJ0Z1Ps56JVu9
| 1,612
|
Add support for RWKV
|
{
"login": "kristianpaul",
"id": 326463,
"node_id": "MDQ6VXNlcjMyNjQ2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kristianpaul",
"html_url": "https://github.com/kristianpaul",
"followers_url": "https://api.github.com/users/kristianpaul/followers",
"following_url": "https://api.github.com/users/kristianpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/kristianpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kristianpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kristianpaul/subscriptions",
"organizations_url": "https://api.github.com/users/kristianpaul/orgs",
"repos_url": "https://api.github.com/users/kristianpaul/repos",
"events_url": "https://api.github.com/users/kristianpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/kristianpaul/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 9
| 2023-12-19T19:00:44
| 2024-10-23T04:24:42
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Not sure if this RNN counts as an LLM, but if so it would be nice to have it; let me know what needs to be done with packaging.
https://www.rwkv.com/
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1612/reactions",
"total_count": 19,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1612/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1329
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1329/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1329/comments
|
https://api.github.com/repos/ollama/ollama/issues/1329/events
|
https://github.com/ollama/ollama/issues/1329
| 2,018,576,708
|
I_kwDOJ0Z1Ps54UQlE
| 1,329
|
Out of nowhere when I run my script I randomly get this error: raise ValueError("No data received from Ollama stream.")
|
{
"login": "alelagamba",
"id": 74550960,
"node_id": "MDQ6VXNlcjc0NTUwOTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/74550960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alelagamba",
"html_url": "https://github.com/alelagamba",
"followers_url": "https://api.github.com/users/alelagamba/followers",
"following_url": "https://api.github.com/users/alelagamba/following{/other_user}",
"gists_url": "https://api.github.com/users/alelagamba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alelagamba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alelagamba/subscriptions",
"organizations_url": "https://api.github.com/users/alelagamba/orgs",
"repos_url": "https://api.github.com/users/alelagamba/repos",
"events_url": "https://api.github.com/users/alelagamba/events{/privacy}",
"received_events_url": "https://api.github.com/users/alelagamba/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 12
| 2023-11-30T12:33:57
| 2023-12-12T15:44:09
| 2023-12-11T22:40:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Traceback (most recent call last):
File "/Users//Desktop/python-test/display_attribute.py", line 34, in <module>
answer = llm("Given this text:" + str(first_column_value) + "Does it talk about a display or screen of the product? Answer only 'Yes' or 'No'.")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 879, in __call__
self.generate(
File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 656, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 543, in _generate_helper
raise e
File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 530, in _generate_helper
self._generate(
File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain/llms/ollama.py", line 241, in _generate
final_chunk = super()._stream_with_aggregation(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain/llms/ollama.py", line 190, in _stream_with_aggregation
raise ValueError("No data received from Ollama stream.")
ValueError: No data received from Ollama stream.
(.venv) sh-3.2$
I really have no clue, because previously it all worked fine. It's just a simple script that takes a string from a CSV and puts it inside the question for the LLM like so:
answer = llm("Given this text:" + str(first_column_value) + "Does it talk about a display or screen of the product? Answer only 'Yes' or 'No'.")
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1329/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2870
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2870/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2870/comments
|
https://api.github.com/repos/ollama/ollama/issues/2870/events
|
https://github.com/ollama/ollama/issues/2870
| 2,164,418,854
|
I_kwDOJ0Z1Ps6BAmkm
| 2,870
|
Ollama ROCm Docker container crashing on RX 6650
|
{
"login": "Zambito1",
"id": 7004857,
"node_id": "MDQ6VXNlcjcwMDQ4NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7004857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zambito1",
"html_url": "https://github.com/Zambito1",
"followers_url": "https://api.github.com/users/Zambito1/followers",
"following_url": "https://api.github.com/users/Zambito1/following{/other_user}",
"gists_url": "https://api.github.com/users/Zambito1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zambito1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zambito1/subscriptions",
"organizations_url": "https://api.github.com/users/Zambito1/orgs",
"repos_url": "https://api.github.com/users/Zambito1/repos",
"events_url": "https://api.github.com/users/Zambito1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zambito1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-02T00:37:26
| 2024-06-14T17:29:19
| 2024-03-04T01:08:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am running GNU Guix with the following ROCm packages installed:
```
$ guix package -I rocm
rocm-device-libs 5.6.0 out /gnu/store/jrkc3924g178yfvqlwqzq9d3pmxc9jlg-rocm-device-libs-5.6.0
rocm-opencl-runtime 5.6.0 out /gnu/store/wnq6v9cqfsbrm7w0y3c610vifbdbdb8x-rocm-opencl-runtime-5.6.0
rocm-comgr 5.6.0 out /gnu/store/p5cnvrb78d6pmxh1fzh5wz1pw18rcism-rocm-comgr-5.6.0
rocminfo 5.6.0 out /gnu/store/bbgzkrszjqvwz5d8b3qimkzjzk9h1r7r-rocminfo-5.6.0
```
My upstream Guix commit is `8c0282c`. Here is the output of `rocminfo`:
```
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD Ryzen 7 3700X 8-Core Processor
Uuid: CPU-XX
Marketing Name: AMD Ryzen 7 3700X 8-Core Processor
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 4050
BDFID: 0
Internal Node ID: 0
Compute Unit: 16
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 32790324(0x1f45734) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 32790324(0x1f45734) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 32790324(0x1f45734) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 2
*******
Name: gfx1032
Uuid: GPU-XX
Marketing Name: AMD Radeon RX 6650 XT
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 16(0x10) KB
L2: 2048(0x800) KB
L3: 32768(0x8000) KB
Chip ID: 29679(0x73ef)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2765
BDFID: 2816
Internal Node ID: 1
Compute Unit: 32
SIMDs per CU: 2
Shader Engines: 2
Shader Arrs. per Eng.: 2
WatchPts on Addr. Ranges:4
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 32(0x20)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 32(0x20)
Max Work-item Per CU: 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 8372224(0x7fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx1032
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***
```
Here I am running `rocm-smi` inside a Docker container:
```
$ docker run --rm -it --device /dev/kfd --device /dev/dri rocm/rocm-terminal
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
rocm-user@fcd0a80ee8e7:~$ rocm-smi
===================================== ROCm System Management Interface =====================================
=============================================== Concise Info ===============================================
Device [Model : Revision] Temp Power Partitions SCLK MCLK Fan Perf PwrCap VRAM% GPU%
Name (20 chars) (Edge) (Avg) (Mem, Compute)
============================================================================================================
0 [0x5026 : 0xc1] 40.0°C 5.0W N/A, N/A 700Mhz 96Mhz 0% auto 164.0W 10% 5%
0x73ef
============================================================================================================
=========================================== End of ROCm SMI Log ============================================
```
So I am fairly sure I have ROCm setup correctly with Docker. When I run the following, I get a crash:
```
$ docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:0.1.27-rocm
c466d9c7494992fd813b1b38337715247a323a5d15a7cd53a1cf987e173df7ba
$ docker exec -it ollama ollama run gemma:2b
```
I have tried with several different models, `gemma:2b` being the smallest, to see if size was the issue. This is the log from the container when it crashes:
```
time=2024-03-02T00:34:13.785Z level=INFO source=images.go:710 msg="total blobs: 16"
time=2024-03-02T00:34:13.785Z level=INFO source=images.go:717 msg="total unused blobs removed: 0"
time=2024-03-02T00:34:13.786Z level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.27)"
time=2024-03-02T00:34:13.786Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-03-02T00:34:16.164Z level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [rocm_v6 cuda_v11 cpu cpu_avx rocm_v5 cpu_avx2]"
time=2024-03-02T00:34:16.164Z level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-03-02T00:34:16.164Z level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-02T00:34:16.166Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-03-02T00:34:16.166Z level=INFO source=gpu.go:265 msg="Searching for GPU management library librocm_smi64.so"
time=2024-03-02T00:34:16.166Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.5.0.50701 /opt/rocm-5.7.1/lib/librocm_smi64.so.5.0.50701]"
time=2024-03-02T00:34:16.168Z level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-03-02T00:34:16.168Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[GIN] 2024/03/02 - 00:34:23 | 200 | 35.141ยตs | 127.0.0.1 | HEAD "/"
[GIN] 2024/03/02 - 00:34:23 | 200 | 414.706ยตs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/03/02 - 00:34:23 | 200 | 286.014ยตs | 127.0.0.1 | POST "/api/show"
time=2024-03-02T00:34:24.590Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-02T00:34:24.590Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-02T00:34:24.590Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-02T00:34:24.628Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2381532766/rocm_v5/libext_server.so"
time=2024-03-02T00:34:24.628Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
rocBLAS error: Cannot read /opt/rocm/lib/rocblas/library/TensileLibrary.dat: Illegal seek for GPU arch : gfx1032
List of available TensileLibrary Files :
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1101.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1030.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx90a.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx942.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx941.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1102.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx940.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1100.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx906.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx803.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx900.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx908.dat"
loading library /tmp/ollama2381532766/rocm_v5/libext_server.so
SIGSEGV: segmentation violation
PC=0x7f7e54233bc7 m=12 sigcode=128
signal arrived during cgo execution
goroutine 21 [syscall]:
runtime.cgocall(0x9bcdd0, 0xc0000d66c8)
/usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc0000d66a0 sp=0xc0000d6668 pc=0x409b0b
github.com/jmorganca/ollama/llm._Cfunc_dyn_llama_server_init({0x7f7de0000e20, 0x7f7dea064bf0, 0x7f7dea065410, 0x7f7dea0654a0, 0x7f7dea065710, 0x7f7dea065950, 0x7f7dea0662b0, 0x7f7dea066290, 0x7f7dea0663c0, 0x7f7dea0669d0, ...}, ...)
_cgo_gotypes.go:282 +0x45 fp=0xc0000d66c8 sp=0xc0000d66a0 pc=0x7c5485
github.com/jmorganca/ollama/llm.newDynExtServer.func7(0xaf20c4?, 0xc?)
/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:153 +0xef fp=0xc0000d67b8 sp=0xc0000d66c8 pc=0x7c69cf
github.com/jmorganca/ollama/llm.newDynExtServer({0xc000616000, 0x2e}, {0xc00007a690, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:153 +0xa65 fp=0xc0000d6a58 sp=0xc0000d67b8 pc=0x7c6665
github.com/jmorganca/ollama/llm.newLlmServer({{_, _, _}, {_, _}, {_, _}}, {_, _}, {0xc00007a690, ...}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:158 +0x425 fp=0xc0000d6c18 sp=0xc0000d6a58 pc=0x7c2dc5
github.com/jmorganca/ollama/llm.New({0xc00002a108, 0x15}, {0xc00007a690, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:123 +0x713 fp=0xc0000d6e98 sp=0xc0000d6c18 pc=0x7c2733
github.com/jmorganca/ollama/server.load(0xc000176d80?, 0xc000176d80, {{0x0, 0x800, 0x200, 0x1, 0xffffffffffffffff, 0x0, 0x0, 0x1, ...}, ...}, ...)
/go/src/github.com/jmorganca/ollama/server/routes.go:85 +0x3a5 fp=0xc0000d7018 sp=0xc0000d6e98 pc=0x996945
github.com/jmorganca/ollama/server.ChatHandler(0xc00022e100)
/go/src/github.com/jmorganca/ollama/server/routes.go:1173 +0xa37 fp=0xc0000d7748 sp=0xc0000d7018 pc=0x9a1f77
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func1(0xc00022e100)
/go/src/github.com/jmorganca/ollama/server/routes.go:943 +0x68 fp=0xc0000d7780 sp=0xc0000d7748 pc=0x9a07a8
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc00022e100)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 +0x7a fp=0xc0000d77d0 sp=0xc0000d7780 pc=0x97803a
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc00022e100)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 +0xde fp=0xc0000d7980 sp=0xc0000d77d0 pc=0x9771de
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc0005924e0, 0xc00022e100)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 +0x65b fp=0xc0000d7b08 sp=0xc0000d7980 pc=0x97629b
github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc0005924e0, {0x11403a20?, 0xc00007c0e0}, 0xc00022e300)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 +0x1dd fp=0xc0000d7b48 sp=0xc0000d7b08 pc=0x975a5d
net/http.serverHandler.ServeHTTP({0x11401d40?}, {0x11403a20?, 0xc00007c0e0?}, 0x6?)
/usr/local/go/src/net/http/server.go:2938 +0x8e fp=0xc0000d7b78 sp=0xc0000d7b48 pc=0x6ced4e
net/http.(*conn).serve(0xc000174240, {0x11405088, 0xc0003eb380})
/usr/local/go/src/net/http/server.go:2009 +0x5f4 fp=0xc0000d7fb8 sp=0xc0000d7b78 pc=0x6cac34
net/http.(*Server).Serve.func3()
/usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc0000d7fe0 sp=0xc0000d7fb8 pc=0x6cf568
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000d7fe8 sp=0xc0000d7fe0 pc=0x46e2c1
created by net/http.(*Server).Serve in goroutine 1
/usr/local/go/src/net/http/server.go:3086 +0x5cb
goroutine 1 [IO wait]:
runtime.gopark(0x480f10?, 0xc0005b5850?, 0xa0?, 0x58?, 0x4f711d?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0005b5830 sp=0xc0005b5810 pc=0x43e7ee
runtime.netpollblock(0x46c332?, 0x4092a6?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0005b5868 sp=0xc0005b5830 pc=0x437277
internal/poll.runtime_pollWait(0x7f7e53e99e28, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0005b5888 sp=0xc0005b5868 pc=0x468a05
internal/poll.(*pollDesc).wait(0xc000482000?, 0x4?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0005b58b0 sp=0xc0005b5888 pc=0x4efd67
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000482000)
/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac fp=0xc0005b5958 sp=0xc0005b58b0 pc=0x4f524c
net.(*netFD).accept(0xc000482000)
/usr/local/go/src/net/fd_unix.go:172 +0x29 fp=0xc0005b5a10 sp=0xc0005b5958 pc=0x56be29
net.(*TCPListener).accept(0xc000453560)
/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e fp=0xc0005b5a38 sp=0xc0005b5a10 pc=0x580c3e
net.(*TCPListener).Accept(0xc000453560)
/usr/local/go/src/net/tcpsock.go:315 +0x30 fp=0xc0005b5a68 sp=0xc0005b5a38 pc=0x57fdf0
net/http.(*onceCloseListener).Accept(0xc000174240?)
<autogenerated>:1 +0x24 fp=0xc0005b5a80 sp=0xc0005b5a68 pc=0x6f1ae4
net/http.(*Server).Serve(0xc00007e000, {0x11403810, 0xc000453560})
/usr/local/go/src/net/http/server.go:3056 +0x364 fp=0xc0005b5bb0 sp=0xc0005b5a80 pc=0x6cf1a4
github.com/jmorganca/ollama/server.Serve({0x11403810, 0xc000453560})
/go/src/github.com/jmorganca/ollama/server/routes.go:1046 +0x454 fp=0xc0005b5c98 sp=0xc0005b5bb0 pc=0x9a0c54
github.com/jmorganca/ollama/cmd.RunServer(0xc000480300?, {0x1184c8c0?, 0x4?, 0xad9d6a?})
/go/src/github.com/jmorganca/ollama/cmd/cmd.go:706 +0x1b9 fp=0xc0005b5d30 sp=0xc0005b5c98 pc=0x9b3d99
github.com/spf13/cobra.(*Command).execute(0xc00043f500, {0x1184c8c0, 0x0, 0x0})
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x87c fp=0xc0005b5e68 sp=0xc0005b5d30 pc=0x764d9c
github.com/spf13/cobra.(*Command).ExecuteC(0xc00043e900)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc0005b5f20 sp=0xc0005b5e68 pc=0x7655c5
github.com/spf13/cobra.(*Command).Execute(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
/go/src/github.com/jmorganca/ollama/main.go:11 +0x4d fp=0xc0005b5f40 sp=0xc0005b5f20 pc=0x9bbeed
runtime.main()
/usr/local/go/src/runtime/proc.go:267 +0x2bb fp=0xc0005b5fe0 sp=0xc0005b5f40 pc=0x43e39b
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005b5fe8 sp=0xc0005b5fe0 pc=0x46e2c1
goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00006afa8 sp=0xc00006af88 pc=0x43e7ee
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.forcegchelper()
/usr/local/go/src/runtime/proc.go:322 +0xb3 fp=0xc00006afe0 sp=0xc00006afa8 pc=0x43e673
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006afe8 sp=0xc00006afe0 pc=0x46e2c1
created by runtime.init.6 in goroutine 1
/usr/local/go/src/runtime/proc.go:310 +0x1a
goroutine 18 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000066778 sp=0xc000066758 pc=0x43e7ee
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.bgsweep(0x0?)
/usr/local/go/src/runtime/mgcsweep.go:321 +0xdf fp=0xc0000667c8 sp=0xc000066778 pc=0x42a73f
runtime.gcenable.func1()
/usr/local/go/src/runtime/mgc.go:200 +0x25 fp=0xc0000667e0 sp=0xc0000667c8 pc=0x41f865
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000667e8 sp=0xc0000667e0 pc=0x46e2c1
created by runtime.gcenable in goroutine 1
/usr/local/go/src/runtime/mgc.go:200 +0x66
goroutine 19 [GC scavenge wait]:
runtime.gopark(0x8b6c3c?, 0x878546?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000066f70 sp=0xc000066f50 pc=0x43e7ee
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.(*scavengerState).park(0x1181cc40)
/usr/local/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc000066fa0 sp=0xc000066f70 pc=0x427f69
runtime.bgscavenge(0x0?)
/usr/local/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc000066fc8 sp=0xc000066fa0 pc=0x428519
runtime.gcenable.func2()
/usr/local/go/src/runtime/mgc.go:201 +0x25 fp=0xc000066fe0 sp=0xc000066fc8 pc=0x41f805
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000066fe8 sp=0xc000066fe0 pc=0x46e2c1
created by runtime.gcenable in goroutine 1
/usr/local/go/src/runtime/mgc.go:201 +0xa5
goroutine 34 [finalizer wait]:
runtime.gopark(0xad2d20?, 0x10043f901?, 0x0?, 0x0?, 0x4469a5?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00006a628 sp=0xc00006a608 pc=0x43e7ee
runtime.runfinq()
/usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc00006a7e0 sp=0xc00006a628 pc=0x41e8e7
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006a7e8 sp=0xc00006a7e0 pc=0x46e2c1
created by runtime.createfing in goroutine 1
/usr/local/go/src/runtime/mfinal.go:163 +0x3d
goroutine 35 [GC worker (idle)]:
runtime.gopark(0x33a6ea91a4e?, 0x1?, 0x46?, 0x50?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00049c750 sp=0xc00049c730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00049c7e0 sp=0xc00049c750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00049c7e8 sp=0xc00049c7e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 3 [GC worker (idle)]:
runtime.gopark(0x1184e5e0?, 0x1?, 0xc?, 0xfd?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00006b750 sp=0xc00006b730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00006b7e0 sp=0xc00006b750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006b7e8 sp=0xc00006b7e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 4 [GC worker (idle)]:
runtime.gopark(0x33a6ea917ff?, 0x3?, 0x59?, 0x1e?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00006bf50 sp=0xc00006bf30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00006bfe0 sp=0xc00006bf50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006bfe8 sp=0xc00006bfe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 36 [GC worker (idle)]:
runtime.gopark(0x33a6ea9197b?, 0x3?, 0x73?, 0xf?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00049cf50 sp=0xc00049cf30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00049cfe0 sp=0xc00049cf50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00049cfe8 sp=0xc00049cfe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 37 [GC worker (idle)]:
runtime.gopark(0x33a6ea9174b?, 0x1?, 0xd3?, 0x50?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00049d750 sp=0xc00049d730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00049d7e0 sp=0xc00049d750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00049d7e8 sp=0xc00049d7e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 38 [GC worker (idle)]:
runtime.gopark(0x33a6ea9179b?, 0x3?, 0xc9?, 0xd1?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00049df50 sp=0xc00049df30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00049dfe0 sp=0xc00049df50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00049dfe8 sp=0xc00049dfe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 39 [GC worker (idle)]:
runtime.gopark(0x1184e5e0?, 0x1?, 0xf2?, 0x7a?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00049e750 sp=0xc00049e730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00049e7e0 sp=0xc00049e750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00049e7e8 sp=0xc00049e7e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 40 [GC worker (idle)]:
runtime.gopark(0x33a6ea91827?, 0x3?, 0x1a?, 0xa2?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00049ef50 sp=0xc00049ef30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00049efe0 sp=0xc00049ef50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00049efe8 sp=0xc00049efe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 41 [GC worker (idle)]:
runtime.gopark(0x33a6ea917d7?, 0x3?, 0xdc?, 0x5?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00049f750 sp=0xc00049f730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00049f7e0 sp=0xc00049f750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00049f7e8 sp=0xc00049f7e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 42 [GC worker (idle)]:
runtime.gopark(0x33a6ea91daa?, 0x3?, 0xe?, 0x4c?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00049ff50 sp=0xc00049ff30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00049ffe0 sp=0xc00049ff50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00049ffe8 sp=0xc00049ffe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 43 [GC worker (idle)]:
runtime.gopark(0x1184e5e0?, 0x3?, 0x94?, 0x2?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000498750 sp=0xc000498730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0004987e0 sp=0xc000498750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004987e8 sp=0xc0004987e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 44 [GC worker (idle)]:
runtime.gopark(0x33a6ea91093?, 0x3?, 0x78?, 0xf6?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000498f50 sp=0xc000498f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000498fe0 sp=0xc000498f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000498fe8 sp=0xc000498fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 45 [GC worker (idle)]:
runtime.gopark(0x1184e5e0?, 0x1?, 0x0?, 0x6d?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000499750 sp=0xc000499730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0004997e0 sp=0xc000499750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004997e8 sp=0xc0004997e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 46 [GC worker (idle)]:
runtime.gopark(0x1184e5e0?, 0x3?, 0x88?, 0x3b?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000499f50 sp=0xc000499f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000499fe0 sp=0xc000499f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000499fe8 sp=0xc000499fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 5 [GC worker (idle)]:
runtime.gopark(0x33a6ea9192b?, 0x3?, 0xea?, 0x1?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00006c750 sp=0xc00006c730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00006c7e0 sp=0xc00006c750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006c7e8 sp=0xc00006c7e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 20 [GC worker (idle)]:
runtime.gopark(0x33a6ea918be?, 0x3?, 0xb2?, 0x2a?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000067750 sp=0xc000067730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0000677e0 sp=0xc000067750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000677e8 sp=0xc0000677e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 6 [select, locked to thread]:
runtime.gopark(0xc000069fa8?, 0x2?, 0x89?, 0xea?, 0xc000069fa4?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000069e38 sp=0xc000069e18 pc=0x43e7ee
runtime.selectgo(0xc000069fa8, 0xc000069fa0, 0x0?, 0x0, 0x0?, 0x1)
/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc000069f58 sp=0xc000069e38 pc=0x44e325
runtime.ensureSigM.func1()
/usr/local/go/src/runtime/signal_unix.go:1014 +0x19f fp=0xc000069fe0 sp=0xc000069f58 pc=0x46535f
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000069fe8 sp=0xc000069fe0 pc=0x46e2c1
created by runtime.ensureSigM in goroutine 1
/usr/local/go/src/runtime/signal_unix.go:997 +0xc8
goroutine 50 [syscall]:
runtime.notetsleepg(0x0?, 0x0?)
/usr/local/go/src/runtime/lock_futex.go:236 +0x29 fp=0xc0005a87a0 sp=0xc0005a8768 pc=0x411349
os/signal.signal_recv()
/usr/local/go/src/runtime/sigqueue.go:152 +0x29 fp=0xc0005a87c0 sp=0xc0005a87a0 pc=0x46ac89
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x13 fp=0xc0005a87e0 sp=0xc0005a87c0 pc=0x6f4513
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005a87e8 sp=0xc0005a87e0 pc=0x46e2c1
created by os/signal.Notify.func1.1 in goroutine 1
/usr/local/go/src/os/signal/signal.go:151 +0x1f
goroutine 51 [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0005a8f18 sp=0xc0005a8ef8 pc=0x43e7ee
runtime.chanrecv(0xc0000b4b40, 0x0, 0x1)
/usr/local/go/src/runtime/chan.go:583 +0x3cd fp=0xc0005a8f90 sp=0xc0005a8f18 pc=0x40beed
runtime.chanrecv1(0x0?, 0x0?)
/usr/local/go/src/runtime/chan.go:442 +0x12 fp=0xc0005a8fb8 sp=0xc0005a8f90 pc=0x40baf2
github.com/jmorganca/ollama/server.Serve.func2()
/go/src/github.com/jmorganca/ollama/server/routes.go:1028 +0x25 fp=0xc0005a8fe0 sp=0xc0005a8fb8 pc=0x9a0ce5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005a8fe8 sp=0xc0005a8fe0 pc=0x46e2c1
created by github.com/jmorganca/ollama/server.Serve in goroutine 1
/go/src/github.com/jmorganca/ollama/server/routes.go:1027 +0x3c7
goroutine 48 [IO wait]:
runtime.gopark(0x0?, 0xb?, 0x0?, 0x0?, 0xc?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0005a75a0 sp=0xc0005a7580 pc=0x43e7ee
runtime.netpollblock(0x47f078?, 0x4092a6?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0005a75d8 sp=0xc0005a75a0 pc=0x437277
internal/poll.runtime_pollWait(0x7f7e53e99d30, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0005a75f8 sp=0xc0005a75d8 pc=0x468a05
internal/poll.(*pollDesc).wait(0xc000038a00?, 0xc0003eb661?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0005a7620 sp=0xc0005a75f8 pc=0x4efd67
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000038a00, {0xc0003eb661, 0x1, 0x1})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc0005a76b8 sp=0xc0005a7620 pc=0x4f105a
net.(*netFD).Read(0xc000038a00, {0xc0003eb661?, 0xc0005a7740?, 0x46a990?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc0005a7700 sp=0xc0005a76b8 pc=0x569e05
net.(*conn).Read(0xc00059e178, {0xc0003eb661?, 0x1?, 0xc000160780?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc0005a7748 sp=0xc0005a7700 pc=0x5780a5
net.(*TCPConn).Read(0xc0003eb650?, {0xc0003eb661?, 0xc000160780?, 0x0?})
<autogenerated>:1 +0x25 fp=0xc0005a7778 sp=0xc0005a7748 pc=0x589fa5
net/http.(*connReader).backgroundRead(0xc0003eb650)
/usr/local/go/src/net/http/server.go:683 +0x37 fp=0xc0005a77c8 sp=0xc0005a7778 pc=0x6c4ab7
net/http.(*connReader).startBackgroundRead.func2()
/usr/local/go/src/net/http/server.go:679 +0x25 fp=0xc0005a77e0 sp=0xc0005a77c8 pc=0x6c49e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005a77e8 sp=0xc0005a77e0 pc=0x46e2c1
created by net/http.(*connReader).startBackgroundRead in goroutine 21
/usr/local/go/src/net/http/server.go:679 +0xba
rax 0x6
rbx 0x7f7e029fee60
rcx 0x7f7e54232387
rdx 0x6
rdi 0x1
rsi 0x11
rbp 0x0
rsp 0x7f7e029fed30
r8 0x0
r9 0x7f7e029fec80
r10 0x8
r11 0x202
r12 0x7f7de08d4460
r13 0x7f7de08d2f60
r14 0x7f7e029fef98
r15 0x7f7e029ff2d8
rip 0x7f7e54233bc7
rflags 0x10246
cs 0x33
fs 0x0
gs 0x0
```
Edit: and FWIW this error occurred both when I tried to run the model from the `ollama` cli as I demonstrated, and when I tried to connect via `open-webui`, so it's not an issue specific to using the cli.
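A note on a possible workaround: the `rocBLAS error` above shows that no `TensileLibrary_*_gfx1032.dat` is shipped, i.e. rocBLAS has no kernels for this gfx1032 GPU. Since gfx1032 (RDNA2) is ISA-compatible with gfx1030, many users report success by overriding the reported GFX version via the `HSA_OVERRIDE_GFX_VERSION` environment variable. This is a hedged sketch, not verified on this exact card or image tag:

```shell
# Assumption: gfx1032 can run the gfx1030 kernels; 10.3.0 maps to gfx1030.
# Same docker run as above, plus the HSA override passed into the container.
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
  --name ollama ollama/ollama:0.1.27-rocm
```

If the override works, the `rocBLAS error: Cannot read ... TensileLibrary.dat` line should no longer appear in the container log.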
|
{
"login": "Zambito1",
"id": 7004857,
"node_id": "MDQ6VXNlcjcwMDQ4NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7004857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zambito1",
"html_url": "https://github.com/Zambito1",
"followers_url": "https://api.github.com/users/Zambito1/followers",
"following_url": "https://api.github.com/users/Zambito1/following{/other_user}",
"gists_url": "https://api.github.com/users/Zambito1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zambito1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zambito1/subscriptions",
"organizations_url": "https://api.github.com/users/Zambito1/orgs",
"repos_url": "https://api.github.com/users/Zambito1/repos",
"events_url": "https://api.github.com/users/Zambito1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zambito1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2870/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5112
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5112/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5112/comments
|
https://api.github.com/repos/ollama/ollama/issues/5112/events
|
https://github.com/ollama/ollama/issues/5112
| 2,359,493,258
|
I_kwDOJ0Z1Ps6MowKK
| 5,112
|
Model pull error - I/O timeout
|
{
"login": "VIGHNESH1521",
"id": 90493668,
"node_id": "MDQ6VXNlcjkwNDkzNjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/90493668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VIGHNESH1521",
"html_url": "https://github.com/VIGHNESH1521",
"followers_url": "https://api.github.com/users/VIGHNESH1521/followers",
"following_url": "https://api.github.com/users/VIGHNESH1521/following{/other_user}",
"gists_url": "https://api.github.com/users/VIGHNESH1521/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VIGHNESH1521/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VIGHNESH1521/subscriptions",
"organizations_url": "https://api.github.com/users/VIGHNESH1521/orgs",
"repos_url": "https://api.github.com/users/VIGHNESH1521/repos",
"events_url": "https://api.github.com/users/VIGHNESH1521/events{/privacy}",
"received_events_url": "https://api.github.com/users/VIGHNESH1521/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-18T10:16:56
| 2024-06-18T11:20:12
| 2024-06-18T11:20:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I tried to download the Llama 2 7B chat 8-bit quantized model, but I get the following error:

Any suggestion how to resolve this?
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.44
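For i/o timeouts like this, a common mitigation is simply retrying: `ollama pull` resumes partial downloads, so a transient network timeout usually does not lose progress. A minimal sketch (the model tag `llama2:7b-chat-q8_0` is an assumption based on the description above):

```shell
# Hypothetical retry loop: keep re-running `ollama pull` until it succeeds.
# Partial layers are resumed, so each retry continues from where it stopped.
until ollama pull llama2:7b-chat-q8_0; do
  echo "pull failed, retrying in 10s..." >&2
  sleep 10
done
```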
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5112/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2153
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2153/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2153/comments
|
https://api.github.com/repos/ollama/ollama/issues/2153/events
|
https://github.com/ollama/ollama/issues/2153
| 2,095,167,342
|
I_kwDOJ0Z1Ps584bdu
| 2,153
|
Error running ollama run llama2
|
{
"login": "haomes",
"id": 82690723,
"node_id": "MDQ6VXNlcjgyNjkwNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/82690723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haomes",
"html_url": "https://github.com/haomes",
"followers_url": "https://api.github.com/users/haomes/followers",
"following_url": "https://api.github.com/users/haomes/following{/other_user}",
"gists_url": "https://api.github.com/users/haomes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haomes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haomes/subscriptions",
"organizations_url": "https://api.github.com/users/haomes/orgs",
"repos_url": "https://api.github.com/users/haomes/repos",
"events_url": "https://api.github.com/users/haomes/events{/privacy}",
"received_events_url": "https://api.github.com/users/haomes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-01-23T03:25:11
| 2024-03-12T18:41:48
| 2024-03-12T18:41:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Error: Head "https://registry.ollama.ai/v2/library/llama2/blobs/sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246": http: server gave HTTP response to HTTPS client
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2153/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3583
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3583/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3583/comments
|
https://api.github.com/repos/ollama/ollama/issues/3583/events
|
https://github.com/ollama/ollama/issues/3583
| 2,236,347,588
|
I_kwDOJ0Z1Ps6FS_TE
| 3,583
|
Small Context Size (n_ctx) leads to crashes and log-file explosion
|
{
"login": "TheMasterFX",
"id": 12451336,
"node_id": "MDQ6VXNlcjEyNDUxMzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/12451336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheMasterFX",
"html_url": "https://github.com/TheMasterFX",
"followers_url": "https://api.github.com/users/TheMasterFX/followers",
"following_url": "https://api.github.com/users/TheMasterFX/following{/other_user}",
"gists_url": "https://api.github.com/users/TheMasterFX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheMasterFX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheMasterFX/subscriptions",
"organizations_url": "https://api.github.com/users/TheMasterFX/orgs",
"repos_url": "https://api.github.com/users/TheMasterFX/repos",
"events_url": "https://api.github.com/users/TheMasterFX/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheMasterFX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-10T20:08:55
| 2024-04-15T19:28:27
| 2024-04-15T19:28:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I came across this error by mistake. I wanted to limit generation to 64 tokens for code generation, but wrongly used "n_ctx" instead of "num_predict". The result: after a couple of tries the ollama server stops responding, after a couple of minutes the VRAM is freed, and sometimes the ollama log file (server.log) grows beyond 4 GB.
I could reproduce it with almost every model (1.8B-7B).
### What did you expect to see?
Ollama should not crash, and the log file should not grow to that extent.
### Steps to reproduce
Create a Python Script with the following content:
```
from ollama import Client

client = Client('http://localhost:11434')
response = client.generate(model='mistral:latest',
                           prompt='Write a poem about why is the sky blue?',
                           options={"n_ctx": 64})  # wrong key; "num_predict" was intended
tokens_per_second = response['eval_count'] / (response['eval_duration'] / 1e9)
print(f'{tokens_per_second} - {response["response"]}')
```
After a couple of runs you should see it hang.
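A quick way to catch this class of mistake is to validate option names before sending the request. The set below is a partial, assumed list of valid Ollama option keys (the helper itself is hypothetical, not part of the ollama client):

```python
# Hypothetical sanity check (not part of the ollama client library):
# Ollama's option keys are "num_ctx" / "num_predict"; an unknown key like
# "n_ctx" can slip through silently, so flagging it up front avoids this trap.
KNOWN_OPTIONS = {
    "num_ctx", "num_predict", "num_keep", "temperature",
    "top_k", "top_p", "repeat_penalty", "seed", "stop",
}

def unknown_options(options):
    """Return any option keys not in the (partial) known set."""
    return sorted(set(options) - KNOWN_OPTIONS)

print(unknown_options({"n_ctx": 64}))        # flags the typo
print(unknown_options({"num_predict": 64}))  # empty: key is recognized
```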
The Log-File then looks like:
```
.............................................................................................
llama_new_context_with_model: n_ctx = 128
llama_new_context_with_model: n_batch = 128
llama_new_context_with_model: n_ubatch = 128
llama_new_context_with_model: freq_base = 999999.4
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 3.75 MiB
llama_new_context_with_model: KV self size = 3.75 MiB, K (f16): 1.88 MiB, V (f16): 1.88 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 25.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 25.50 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 1.56 MiB
llama_new_context_with_model: graph nodes = 1175
llama_new_context_with_model: graph splits = 2
[1712606577] warming up the model with an empty run
{"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"5396","timestamp":1712606577}
{"function":"initialize","level":"INFO","line":456,"msg":"new slot","n_ctx_slot":128,"slot_id":0,"tid":"5396","timestamp":1712606577}
time=2024-04-08T22:02:57.185+02:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
time=2024-04-08T22:02:57.185+02:00 level=DEBUG source=routes.go:249 msg="generate handler" prompt="Why is the sky blue?"
time=2024-04-08T22:02:57.186+02:00 level=DEBUG source=routes.go:250 msg="generate handler" template="{{ .Prompt }}"
time=2024-04-08T22:02:57.186+02:00 level=DEBUG source=routes.go:251 msg="generate handler" system=""
time=2024-04-08T22:02:57.186+02:00 level=DEBUG source=routes.go:282 msg="generate handler" prompt="Why is the sky blue?"
[1712606577] llama server main loop starting
{"function":"update_slots","level":"INFO","line":1572,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"17792","timestamp":1712606577}
{"function":"launch_slot_with_data","level":"INFO","line":829,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606577}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1810,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":6,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606577}
{"function":"update_slots","level":"INFO","line":1834,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606577}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606578}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606578}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606579}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606579}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606580}
....MANY OF THEM....
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606596}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606596}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606597}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606597}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606598}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606598}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606599}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":63,"n_keep":0,"n_left":127,"n_past":127,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606599}
{"function":"update_slots","level":"INFO","line":1605,"msg":"slot context shift","n_cache_tokens":128,"n_ctx":128,"n_discard":64,"n_keep":0,"n_left":128,"n_past":128,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"17792","timestamp":1712606600}
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 256
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 128
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 64
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 32
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 16
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 8
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 4
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 2
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 1
[1712606600] update_slots : failed to decode the batch, n_batch = 1, ret = 1
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 256
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 128
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 64
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 32
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 16
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 8
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 4
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 2
[1712606600] update_slots : failed to find free space in the KV cache, retrying with smaller n_batch = 1
[1712606600] update_slots : failed to decode the batch, n_batch = 1, ret = 1
```
After that the log file is filled with "update_slots : failed to find free space in the KV cache" until you kill the ollama process. (The Env. OLLAMA_DEBUG=1)
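The endless loop in the log is consistent with llama.cpp's context-shift arithmetic: with `n_keep=0` each shift discards half of the movable tokens, and with no `num_predict` limit generation never terminates, so the shift repeats forever. A simplified sketch (assumption: this mirrors the logged behavior of `update_slots`, not the actual implementation):

```python
# Simplified sketch of the context-shift arithmetic seen in the log
# (assumption: mirrors llama.cpp's update_slots values, not the real code).
def shift(n_past, n_keep=0):
    n_left = n_past - n_keep
    n_discard = n_left // 2      # half the movable tokens are dropped
    return n_past - n_discard    # generation resumes from here

# Matches the log: n_past=127, n_keep=0 -> n_discard=63, resuming at 64.
print(shift(127))
```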
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.31
### GPU
Nvidia
### GPU info
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 546.01 Driver Version: 546.01 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3080 WDDM | 00000000:02:00.0 On | N/A |
| 0% 48C P8 27W / 370W | 1720MiB / 10240MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
```
### CPU
Intel
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3583/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6765
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6765/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6765/comments
|
https://api.github.com/repos/ollama/ollama/issues/6765/events
|
https://github.com/ollama/ollama/pull/6765
| 2,520,944,035
|
PR_kwDOJ0Z1Ps57Og8m
| 6,765
|
Flush pending responses before returning (#6707)
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-11T23:03:04
| 2024-09-11T23:38:27
| 2024-09-11T23:38:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6765",
"html_url": "https://github.com/ollama/ollama/pull/6765",
"diff_url": "https://github.com/ollama/ollama/pull/6765.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6765.patch",
"merged_at": "2024-09-11T23:38:25"
}
|
I'll cross-port the server.cpp portion to main after this goes in.
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6765/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3715
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3715/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3715/comments
|
https://api.github.com/repos/ollama/ollama/issues/3715/events
|
https://github.com/ollama/ollama/pull/3715
| 2,249,438,598
|
PR_kwDOJ0Z1Ps5s-_g5
| 3,715
|
update list handler to use model.Name
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-17T23:23:09
| 2024-05-07T22:21:40
| 2024-05-07T22:21:39
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3715",
"html_url": "https://github.com/ollama/ollama/pull/3715",
"diff_url": "https://github.com/ollama/ollama/pull/3715.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3715.patch",
"merged_at": "2024-05-07T22:21:39"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3715/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5127
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5127/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5127/comments
|
https://api.github.com/repos/ollama/ollama/issues/5127/events
|
https://github.com/ollama/ollama/pull/5127
| 2,360,969,648
|
PR_kwDOJ0Z1Ps5y4zyn
| 5,127
|
Introduce `/api/embed` endpoint supporting batch embedding
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-06-19T00:35:06
| 2024-07-15T19:14:27
| 2024-07-15T19:14:24
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5127",
"html_url": "https://github.com/ollama/ollama/pull/5127",
"diff_url": "https://github.com/ollama/ollama/pull/5127.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5127.patch",
"merged_at": "2024-07-15T19:14:24"
}
|
Resolves #4224
Closes #3642
Mentioned in #962
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5127/reactions",
"total_count": 9,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 9,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5127/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4492
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4492/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4492/comments
|
https://api.github.com/repos/ollama/ollama/issues/4492/events
|
https://github.com/ollama/ollama/issues/4492
| 2,301,995,480
|
I_kwDOJ0Z1Ps6JNanY
| 4,492
|
Ollama crashes after idle and can't process new requests
|
{
"login": "artem-zinnatullin",
"id": 967132,
"node_id": "MDQ6VXNlcjk2NzEzMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/967132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/artem-zinnatullin",
"html_url": "https://github.com/artem-zinnatullin",
"followers_url": "https://api.github.com/users/artem-zinnatullin/followers",
"following_url": "https://api.github.com/users/artem-zinnatullin/following{/other_user}",
"gists_url": "https://api.github.com/users/artem-zinnatullin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/artem-zinnatullin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/artem-zinnatullin/subscriptions",
"organizations_url": "https://api.github.com/users/artem-zinnatullin/orgs",
"repos_url": "https://api.github.com/users/artem-zinnatullin/repos",
"events_url": "https://api.github.com/users/artem-zinnatullin/events{/privacy}",
"received_events_url": "https://api.github.com/users/artem-zinnatullin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-05-17T07:14:19
| 2024-08-01T23:49:16
| 2024-06-21T23:26:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I leave `ollama` running idle for some time, it crashes and stops responding to requests:
```js
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
rocBLAS error: Could not initialize Tensile host: No devices found
[GIN] 2024/05/17 - 01:05:34 | 500 | 1.472888689s | 192.168.1.112 | POST "/api/chat"
time=2024-05-17T01:05:34.549-06:00 level=ERROR source=sched.go:339 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:Could not initialize Tensile host: No devices found"
time=2024-05-17T01:05:34.553-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
```
Ollama `ollama/ollama:0.1.37-rocm` is running in Docker (actually K8S) on Ubuntu Server with AMD 7900XTX GPU
<details>
<summary>Full log from start to crash:</summary>
```js
2024/05/16 22:16:57 routes.go:1006: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:8 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-05-16T22:16:57.384-06:00 level=INFO source=images.go:704 msg="total blobs: 75"
time=2024-05-16T22:16:57.387-06:00 level=INFO source=images.go:711 msg="total unused blobs removed: 0"
time=2024-05-16T22:16:57.388-06:00 level=INFO source=routes.go:1052 msg="Listening on [::]:11434 (version 0.1.37)"
time=2024-05-16T22:16:57.389-06:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1104229860/runners
time=2024-05-16T22:16:58.936-06:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
time=2024-05-16T22:16:58.942-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-16T22:16:58.942-06:00 level=INFO source=types.go:71 msg="inference compute" id=0 library=rocm compute=gfx1100 driver=6.3 name=1002:744c total="24.0 GiB" available="24.0 GiB"
[GIN] 2024/05/16 - 22:17:08 | 200 | 2.540133ms | 10.244.0.173 | GET "/api/tags"
[GIN] 2024/05/16 - 22:17:08 | 200 | 1.311845ms | 10.244.0.173 | GET "/api/tags"
[GIN] 2024/05/16 - 22:17:09 | 200 | 1.439971ms | 10.244.0.173 | GET "/api/tags"
[GIN] 2024/05/16 - 22:17:09 | 200 | 1.218168ms | 10.244.0.173 | GET "/api/tags"
[GIN] 2024/05/16 - 22:17:19 | 200 | 41.469ยตs | 10.244.0.173 | GET "/api/version"
time=2024-05-16T22:17:21.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-16T22:17:22.017-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-16T22:17:22.017-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-16T22:17:22.018-06:00 level=INFO source=server.go:318 msg="starting llama server" cmd="/tmp/ollama1104229860/runners/rocm_v60002/ollama_llama_server --model /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 8 --port 34471"
time=2024-05-16T22:17:22.018-06:00 level=INFO source=sched.go:333 msg="loaded runners" count=1
time=2024-05-16T22:17:22.018-06:00 level=INFO source=server.go:488 msg="waiting for llama runner to start responding"
time=2024-05-16T22:17:22.018-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="952d03d" tid="133496113871936" timestamp=1715919442
INFO [main] system info | n_threads=12 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="133496113871936" timestamp=1715919442 total_threads=24
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="23" port="34471" tid="133496113871936" timestamp=1715919442
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "ĠĠ ĠĠ", "ĠĠĠĠ ĠĠĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-05-16T22:17:22.269-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
llm_load_vocab:
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 ROCm devices:
Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: ROCm0 buffer size = 4155.99 MiB
llm_load_tensors: CPU buffer size = 281.81 MiB
......................................................................................
llama_new_context_with_model: n_ctx = 16384
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: ROCm0 KV buffer size = 2048.00 MiB
llama_new_context_with_model: KV self size = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_new_context_with_model: ROCm_Host output buffer size = 4.04 MiB
llama_new_context_with_model: ROCm0 compute buffer size = 1088.00 MiB
llama_new_context_with_model: ROCm_Host compute buffer size = 40.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="133496113871936" timestamp=1715919446
time=2024-05-16T22:17:26.783-06:00 level=INFO source=server.go:529 msg="llama runner started in 4.77 seconds"
[GIN] 2024/05/16 - 22:17:28 | 200 | 7.031066272s | 10.244.0.173 | POST "/api/chat"
[GIN] 2024/05/16 - 22:17:30 | 200 | 253.996638ms | 10.244.0.173 | POST "/v1/chat/completions"
[GIN] 2024/05/16 - 22:17:39 | 200 | 1.163486ms | 192.168.1.112 | GET "/api/tags"
[GIN] 2024/05/16 - 22:19:07 | 200 | 3.71407638s | 192.168.1.112 | POST "/api/chat"
[GIN] 2024/05/16 - 22:20:57 | 200 | 4.096279991s | 192.168.1.112 | POST "/api/chat"
time=2024-05-16T22:25:57.760-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-16T22:25:58.014-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:04:51.795-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:04:52.504-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-17T01:04:52.504-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-17T01:04:52.504-06:00 level=INFO source=server.go:318 msg="starting llama server" cmd="/tmp/ollama1104229860/runners/rocm_v60002/ollama_llama_server --model /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 8 --port 37839"
time=2024-05-17T01:04:52.505-06:00 level=INFO source=sched.go:333 msg="loaded runners" count=1
time=2024-05-17T01:04:52.505-06:00 level=INFO source=server.go:488 msg="waiting for llama runner to start responding"
time=2024-05-17T01:04:52.505-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="952d03d" tid="127493904571456" timestamp=1715929492
INFO [main] system info | n_threads=12 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="127493904571456" timestamp=1715929492 total_threads=24
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="23" port="37839" tid="127493904571456" timestamp=1715929492
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "ĠĠ ĠĠ", "ĠĠĠĠ ĠĠĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-05-17T01:04:52.756-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
llm_load_vocab:
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
rocBLAS error: Could not initialize Tensile host: No devices found
time=2024-05-17T01:05:11.060-06:00 level=ERROR source=sched.go:339 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:Could not initialize Tensile host: No devices found"
[GIN] 2024/05/17 - 01:05:11 | 500 | 19.269265911s | 192.168.1.112 | POST "/api/chat"
time=2024-05-17T01:05:11.064-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:11.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:11.568-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:11.818-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:12.068-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:12.318-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:12.568-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:12.819-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:13.069-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:13.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:13.569-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:13.819-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:14.068-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:14.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:14.569-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:14.819-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:15.069-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:15.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:15.569-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:15.819-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:16.065-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.00548985
time=2024-05-17T01:05:16.069-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:16.315-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.255538881
time=2024-05-17T01:05:16.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:16.565-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.505473591
time=2024-05-17T01:05:33.080-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:33.795-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-17T01:05:33.796-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-17T01:05:33.796-06:00 level=INFO source=server.go:318 msg="starting llama server" cmd="/tmp/ollama1104229860/runners/rocm_v60002/ollama_llama_server --model /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 8 --port 46195"
time=2024-05-17T01:05:33.796-06:00 level=INFO source=sched.go:333 msg="loaded runners" count=1
time=2024-05-17T01:05:33.796-06:00 level=INFO source=server.go:488 msg="waiting for llama runner to start responding"
time=2024-05-17T01:05:33.796-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="952d03d" tid="133936011103296" timestamp=1715929533
INFO [main] system info | n_threads=12 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="133936011103296" timestamp=1715929533 total_threads=24
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="23" port="46195" tid="133936011103296" timestamp=1715929533
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "ĠĠ ĠĠ", "ĠĠĠĠ ĠĠĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-05-17T01:05:34.048-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
llm_load_vocab:
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
rocBLAS error: Could not initialize Tensile host: No devices found
[GIN] 2024/05/17 - 01:05:34 | 500 | 1.472888689s | 192.168.1.112 | POST "/api/chat"
time=2024-05-17T01:05:34.549-06:00 level=ERROR source=sched.go:339 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:Could not initialize Tensile host: No devices found"
time=2024-05-17T01:05:34.553-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:34.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:35.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:35.307-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:35.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:35.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:36.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:36.307-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:36.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:36.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:37.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:37.307-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:37.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:37.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:38.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:38.308-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:38.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:38.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:39.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:39.307-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:39.553-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.003790879
time=2024-05-17T01:05:39.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:39.803-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.253690762
time=2024-05-17T01:05:39.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:40.053-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.50376608
```
</details>
### OS
Linux, Docker
### GPU
AMD
### CPU
AMD
### Ollama version
ollama/ollama:0.1.37-rocm
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4492/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2309
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2309/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2309/comments
|
https://api.github.com/repos/ollama/ollama/issues/2309/events
|
https://github.com/ollama/ollama/issues/2309
| 2,112,098,981
|
I_kwDOJ0Z1Ps595BKl
| 2,309
|
Where are the models located in the filesystem?
|
{
"login": "LightningRhino",
"id": 49691885,
"node_id": "MDQ6VXNlcjQ5NjkxODg1",
"avatar_url": "https://avatars.githubusercontent.com/u/49691885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LightningRhino",
"html_url": "https://github.com/LightningRhino",
"followers_url": "https://api.github.com/users/LightningRhino/followers",
"following_url": "https://api.github.com/users/LightningRhino/following{/other_user}",
"gists_url": "https://api.github.com/users/LightningRhino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LightningRhino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LightningRhino/subscriptions",
"organizations_url": "https://api.github.com/users/LightningRhino/orgs",
"repos_url": "https://api.github.com/users/LightningRhino/repos",
"events_url": "https://api.github.com/users/LightningRhino/events{/privacy}",
"received_events_url": "https://api.github.com/users/LightningRhino/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-01T10:14:34
| 2024-02-01T11:58:55
| 2024-02-01T11:58:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
On my Mac I want to exclude the models from my Time Machine backup.
So where are the models located?
It looks like Ollama uses some kind of Docker-like layered-blob technique for this.
Can't believe this is undocumented.
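For reference, the default locations can be sketched in shell (assumption: a stock install with no `OLLAMA_MODELS` override; the paths in the comments are the commonly documented defaults, not verified against this particular machine):

```shell
# Default Ollama model store, assuming a stock install. The OLLAMA_MODELS
# environment variable, when set, overrides this location.
#   macOS app:       ~/.ollama/models
#   Linux (systemd): /usr/share/ollama/.ollama/models
unset OLLAMA_MODELS                                   # illustrate the default case
MODELS_DIR="${OLLAMA_MODELS:-$HOME/.ollama/models}"   # fall back to the home-dir default
echo "$MODELS_DIR"
```

On macOS, `tmutil addexclusion "$MODELS_DIR"` would then exclude that directory from Time Machine backups (`tmutil` is the standard macOS backup-management tool).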
|
{
"login": "LightningRhino",
"id": 49691885,
"node_id": "MDQ6VXNlcjQ5NjkxODg1",
"avatar_url": "https://avatars.githubusercontent.com/u/49691885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LightningRhino",
"html_url": "https://github.com/LightningRhino",
"followers_url": "https://api.github.com/users/LightningRhino/followers",
"following_url": "https://api.github.com/users/LightningRhino/following{/other_user}",
"gists_url": "https://api.github.com/users/LightningRhino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LightningRhino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LightningRhino/subscriptions",
"organizations_url": "https://api.github.com/users/LightningRhino/orgs",
"repos_url": "https://api.github.com/users/LightningRhino/repos",
"events_url": "https://api.github.com/users/LightningRhino/events{/privacy}",
"received_events_url": "https://api.github.com/users/LightningRhino/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2309/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3472
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3472/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3472/comments
|
https://api.github.com/repos/ollama/ollama/issues/3472/events
|
https://github.com/ollama/ollama/pull/3472
| 2,222,023,303
|
PR_kwDOJ0Z1Ps5rhLiB
| 3,472
|
Allow more parts of model path
|
{
"login": "wzshiming",
"id": 6565744,
"node_id": "MDQ6VXNlcjY1NjU3NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6565744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wzshiming",
"html_url": "https://github.com/wzshiming",
"followers_url": "https://api.github.com/users/wzshiming/followers",
"following_url": "https://api.github.com/users/wzshiming/following{/other_user}",
"gists_url": "https://api.github.com/users/wzshiming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wzshiming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wzshiming/subscriptions",
"organizations_url": "https://api.github.com/users/wzshiming/orgs",
"repos_url": "https://api.github.com/users/wzshiming/repos",
"events_url": "https://api.github.com/users/wzshiming/events{/privacy}",
"received_events_url": "https://api.github.com/users/wzshiming/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-04-03T06:31:19
| 2024-11-21T09:03:34
| 2024-11-21T09:03:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3472",
"html_url": "https://github.com/ollama/ollama/pull/3472",
"diff_url": "https://github.com/ollama/ollama/pull/3472.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3472.patch",
"merged_at": null
}
|
I expect that with different registry storage layouts, the model path doesn't have to be exactly 3 parts; ghcr.io, for example, uses 4 parts.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3472/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1297
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1297/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1297/comments
|
https://api.github.com/repos/ollama/ollama/issues/1297/events
|
https://github.com/ollama/ollama/pull/1297
| 2,014,069,086
|
PR_kwDOJ0Z1Ps5giNim
| 1,297
|
Updated instructions for Jetson setup and minimized requirements
|
{
"login": "bnodnarb",
"id": 97063458,
"node_id": "U_kgDOBckSIg",
"avatar_url": "https://avatars.githubusercontent.com/u/97063458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bnodnarb",
"html_url": "https://github.com/bnodnarb",
"followers_url": "https://api.github.com/users/bnodnarb/followers",
"following_url": "https://api.github.com/users/bnodnarb/following{/other_user}",
"gists_url": "https://api.github.com/users/bnodnarb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bnodnarb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bnodnarb/subscriptions",
"organizations_url": "https://api.github.com/users/bnodnarb/orgs",
"repos_url": "https://api.github.com/users/bnodnarb/repos",
"events_url": "https://api.github.com/users/bnodnarb/events{/privacy}",
"received_events_url": "https://api.github.com/users/bnodnarb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-11-28T10:01:10
| 2023-12-13T01:35:54
| 2023-12-13T01:35:54
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1297",
"html_url": "https://github.com/ollama/ollama/pull/1297",
"diff_url": "https://github.com/ollama/ollama/pull/1297.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1297.patch",
"merged_at": null
}
|
Revised NVIDIA Jetson tutorial to be simpler and also added a quickstart guide.
|
{
"login": "bnodnarb",
"id": 97063458,
"node_id": "U_kgDOBckSIg",
"avatar_url": "https://avatars.githubusercontent.com/u/97063458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bnodnarb",
"html_url": "https://github.com/bnodnarb",
"followers_url": "https://api.github.com/users/bnodnarb/followers",
"following_url": "https://api.github.com/users/bnodnarb/following{/other_user}",
"gists_url": "https://api.github.com/users/bnodnarb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bnodnarb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bnodnarb/subscriptions",
"organizations_url": "https://api.github.com/users/bnodnarb/orgs",
"repos_url": "https://api.github.com/users/bnodnarb/repos",
"events_url": "https://api.github.com/users/bnodnarb/events{/privacy}",
"received_events_url": "https://api.github.com/users/bnodnarb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1297/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1922
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1922/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1922/comments
|
https://api.github.com/repos/ollama/ollama/issues/1922/events
|
https://github.com/ollama/ollama/issues/1922
| 2,076,113,270
|
I_kwDOJ0Z1Ps57vvl2
| 1,922
|
Support for calling third party APIs as part of formulating response?
|
{
"login": "boxabirds",
"id": 147305,
"node_id": "MDQ6VXNlcjE0NzMwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/147305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boxabirds",
"html_url": "https://github.com/boxabirds",
"followers_url": "https://api.github.com/users/boxabirds/followers",
"following_url": "https://api.github.com/users/boxabirds/following{/other_user}",
"gists_url": "https://api.github.com/users/boxabirds/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boxabirds/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boxabirds/subscriptions",
"organizations_url": "https://api.github.com/users/boxabirds/orgs",
"repos_url": "https://api.github.com/users/boxabirds/repos",
"events_url": "https://api.github.com/users/boxabirds/events{/privacy}",
"received_events_url": "https://api.github.com/users/boxabirds/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-11T09:13:32
| 2024-05-10T01:04:29
| 2024-05-10T01:04:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, one of the very useful things about GPT-4 is its ability to call external APIs, like web search. Apologies if there's already an issue related to this.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1922/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1922/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6920
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6920/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6920/comments
|
https://api.github.com/repos/ollama/ollama/issues/6920/events
|
https://github.com/ollama/ollama/issues/6920
| 2,542,815,167
|
I_kwDOJ0Z1Ps6XkEe_
| 6,920
|
Error: registry.ollama.ai/library/command-r:latest: EOF
|
{
"login": "remco-pc",
"id": 8077908,
"node_id": "MDQ6VXNlcjgwNzc5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8077908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remco-pc",
"html_url": "https://github.com/remco-pc",
"followers_url": "https://api.github.com/users/remco-pc/followers",
"following_url": "https://api.github.com/users/remco-pc/following{/other_user}",
"gists_url": "https://api.github.com/users/remco-pc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remco-pc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remco-pc/subscriptions",
"organizations_url": "https://api.github.com/users/remco-pc/orgs",
"repos_url": "https://api.github.com/users/remco-pc/repos",
"events_url": "https://api.github.com/users/remco-pc/events{/privacy}",
"received_events_url": "https://api.github.com/users/remco-pc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-09-23T14:19:19
| 2024-11-05T22:21:46
| 2024-11-05T22:21:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
trying to run/pull this model: command-r
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
ollama version is 0.3.6
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6920/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1777
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1777/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1777/comments
|
https://api.github.com/repos/ollama/ollama/issues/1777/events
|
https://github.com/ollama/ollama/issues/1777
| 2,064,666,081
|
I_kwDOJ0Z1Ps57EE3h
| 1,777
|
change in CMAKE flags in 0.1.18 causes illegal instruction on Intel mac
|
{
"login": "jhheider",
"id": 13246308,
"node_id": "MDQ6VXNlcjEzMjQ2MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/13246308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jhheider",
"html_url": "https://github.com/jhheider",
"followers_url": "https://api.github.com/users/jhheider/followers",
"following_url": "https://api.github.com/users/jhheider/following{/other_user}",
"gists_url": "https://api.github.com/users/jhheider/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jhheider/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jhheider/subscriptions",
"organizations_url": "https://api.github.com/users/jhheider/orgs",
"repos_url": "https://api.github.com/users/jhheider/repos",
"events_url": "https://api.github.com/users/jhheider/events{/privacy}",
"received_events_url": "https://api.github.com/users/jhheider/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-01-03T21:24:00
| 2024-01-04T20:30:31
| 2024-01-04T20:08:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Reverting to the old flags as in [this](https://github.com/pkgxdev/pantry/pull/4710/files) seems to fix it for me (as far as our automated testing goes).
No fix: https://github.com/pkgxdev/pantry/actions/runs/7400916216/job/20135652033
Fix: https://github.com/pkgxdev/pantry/actions/runs/7402435273/job/20140385965
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1777/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7242
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7242/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7242/comments
|
https://api.github.com/repos/ollama/ollama/issues/7242/events
|
https://github.com/ollama/ollama/issues/7242
| 2,595,036,742
|
I_kwDOJ0Z1Ps6arR5G
| 7,242
|
`api/embed` endpoint is not working exactly like `api/embeddings` endpoint
|
{
"login": "hdnh2006",
"id": 17271049,
"node_id": "MDQ6VXNlcjE3MjcxMDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17271049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hdnh2006",
"html_url": "https://github.com/hdnh2006",
"followers_url": "https://api.github.com/users/hdnh2006/followers",
"following_url": "https://api.github.com/users/hdnh2006/following{/other_user}",
"gists_url": "https://api.github.com/users/hdnh2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hdnh2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hdnh2006/subscriptions",
"organizations_url": "https://api.github.com/users/hdnh2006/orgs",
"repos_url": "https://api.github.com/users/hdnh2006/repos",
"events_url": "https://api.github.com/users/hdnh2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/hdnh2006/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-17T15:12:53
| 2024-10-18T09:05:40
| 2024-10-18T09:05:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi guys!
Thanks for this fantastic tool. This is the first time I've used embeddings with ollama (so far I had only tried LLM inference), and I realized there is a big difference between the `/embed` and `/embeddings` endpoints.
I understand `/embeddings` is deprecated, but it is a common endpoint used by `openai`-compatible applications, and I am getting these results:
```python
import requests
url = 'http://192.168.100.60:11434/api/embeddings'
data = {
"model": "bge-large",
"input": ["Why is the sky blue?", "Why is the grass green?"]
}
response = requests.post(url, json=data)
print(response.json())
{'embedding': []}
```
I should get the same results as the endpoint `/embed`:
```python
import requests
url = 'http://192.168.100.60:11434/api/embed'
data = {
"model": "bge-large",
"input": ["Why is the sky blue?", "Why is the grass green?"]
}
response = requests.post(url, json=data)
print(response.json())
{'model': 'bge-large', 'embeddings': [[0.0013820849, -0.0021153516, 0.026703326, -0.019793263, -0.028357966, -0.0038778095, -0.0074022487, 0.026467843, 0.0390734, 0.064318694, -0.029173901, -0.016984569, 0.02871944, ...], [-0.009015707, -0.007080707, 0.012702042, 0.026064986, 0.0032204844, 0.03592011, 0.025629202, -3.0089186e-05, 0.030164663, 0.06953627, 0.02197145, -0.019186007, 0.031450707, 0.004626421, -0.01037141, 0.0134581905, ...]], 'total_duration': 351441147, 'load_duration': 1014270, 'prompt_eval_count': 12}
```
Am I doing something wrong? I need to call the `embeddings` endpoint, as my `openai`-compatible application requires it.
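For reference, here is my understanding of the two request shapes (field names per my reading of the API docs, so treat them as an assumption): the legacy `/api/embeddings` endpoint takes a single `prompt` string, while `/api/embed` takes `input` (a string or a list of strings). A minimal sketch:

```python
# Sketch of the two payload shapes (assumption based on my reading of the docs):
# legacy /api/embeddings expects "prompt" (one string),
# /api/embed expects "input" (string or list of strings).
legacy_payload = {
    "model": "bge-large",
    "prompt": "Why is the sky blue?",  # single string, not a list
}
new_payload = {
    "model": "bge-large",
    "input": ["Why is the sky blue?", "Why is the grass green?"],
}
# If /api/embeddings silently ignores an "input" field, that would explain
# the empty {'embedding': []} response above.
```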
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
ollama version is 0.3.12
|
{
"login": "hdnh2006",
"id": 17271049,
"node_id": "MDQ6VXNlcjE3MjcxMDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17271049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hdnh2006",
"html_url": "https://github.com/hdnh2006",
"followers_url": "https://api.github.com/users/hdnh2006/followers",
"following_url": "https://api.github.com/users/hdnh2006/following{/other_user}",
"gists_url": "https://api.github.com/users/hdnh2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hdnh2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hdnh2006/subscriptions",
"organizations_url": "https://api.github.com/users/hdnh2006/orgs",
"repos_url": "https://api.github.com/users/hdnh2006/repos",
"events_url": "https://api.github.com/users/hdnh2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/hdnh2006/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7242/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1863
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1863/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1863/comments
|
https://api.github.com/repos/ollama/ollama/issues/1863/events
|
https://github.com/ollama/ollama/issues/1863
| 2,072,001,683
|
I_kwDOJ0Z1Ps57gDyT
| 1,863
|
Ollama stuck after few runs
|
{
"login": "jadhvank",
"id": 11309219,
"node_id": "MDQ6VXNlcjExMzA5MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/11309219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jadhvank",
"html_url": "https://github.com/jadhvank",
"followers_url": "https://api.github.com/users/jadhvank/followers",
"following_url": "https://api.github.com/users/jadhvank/following{/other_user}",
"gists_url": "https://api.github.com/users/jadhvank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jadhvank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jadhvank/subscriptions",
"organizations_url": "https://api.github.com/users/jadhvank/orgs",
"repos_url": "https://api.github.com/users/jadhvank/repos",
"events_url": "https://api.github.com/users/jadhvank/events{/privacy}",
"received_events_url": "https://api.github.com/users/jadhvank/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 115
| 2024-01-09T09:45:01
| 2024-11-21T18:53:44
| 2024-11-21T18:53:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I updated Ollama from 0.1.16 to 0.1.18 and encountered the issue.
I am using python to use LLM models with Ollama and Langchain on Linux server(4 x A100 GPU).
There are 5,000 prompts to ask and get the results from LLM.
With Ollama 0.1.17, the Ollama server stops in 1 or 2 days.
Now it hung in 10 minutes.

This is the Ollama server message when it stops running.
It happens more when Phi 2 runs than when Mixtral runs.
After the freeze, if I exit the server and run it again, the prompt is processed and the LLM answer is successfully received.
The environment
Linux: Ubuntu 22.04.3 LTS
python: 3.10.12
Ollama: 0.1.18
Langchain: 0.0.274
Mixtral: latest
Phi 2: latest
GPU: NVIDIA A100-SXM4-80GB x 4
Prompt size: ~10K
\# of Prompts: 5K

Read these articles, https://github.com/jmorganca/ollama/issues/1853, https://github.com/jmorganca/ollama/issues/1688
But none of them work here.
Also, if there is any way to install a previous version of Ollama (0.1.16), let me know.
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1863/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1863/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7967
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7967/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7967/comments
|
https://api.github.com/repos/ollama/ollama/issues/7967/events
|
https://github.com/ollama/ollama/issues/7967
| 2,722,722,699
|
I_kwDOJ0Z1Ps6iSXOL
| 7,967
|
Add stop word <|endoftext|> to qwq models
|
{
"login": "elsewhat",
"id": 1133607,
"node_id": "MDQ6VXNlcjExMzM2MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1133607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsewhat",
"html_url": "https://github.com/elsewhat",
"followers_url": "https://api.github.com/users/elsewhat/followers",
"following_url": "https://api.github.com/users/elsewhat/following{/other_user}",
"gists_url": "https://api.github.com/users/elsewhat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elsewhat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elsewhat/subscriptions",
"organizations_url": "https://api.github.com/users/elsewhat/orgs",
"repos_url": "https://api.github.com/users/elsewhat/repos",
"events_url": "https://api.github.com/users/elsewhat/events{/privacy}",
"received_events_url": "https://api.github.com/users/elsewhat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-12-06T11:02:02
| 2024-12-06T11:02:02
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The [qwq models](https://ollama.com/library/qwq) currently go into an infinite loop.
The reason appears to be that the model outputs <|endoftext|> at the end of its response, but Ollama does not treat it as a stop word. The model therefore continues with hallucinations and goes into an infinite loop.
I verified locally that the issue is resolved by using a custom Modelfile containing:
```
FROM qwq:latest
# Adding additional stop as otherwise the qwq model goes into an infinite loop
PARAMETER stop <|endoftext|>
```
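A minimal sketch of applying the workaround above (the Modelfile path and the `qwq-fixed` model name are hypothetical, and this assumes `qwq:latest` has already been pulled):

```shell
# Write the workaround Modelfile with the extra stop parameter
cat > Modelfile.qwq <<'EOF'
FROM qwq:latest
# Adding additional stop as otherwise the qwq model goes into an infinite loop
PARAMETER stop <|endoftext|>
EOF

# Sanity-check the file before creating the model
grep -q 'PARAMETER stop' Modelfile.qwq && echo "Modelfile ready"

# Then build and run the patched model:
#   ollama create qwq-fixed -f Modelfile.qwq
#   ollama run qwq-fixed
```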
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7967/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7967/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1315
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1315/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1315/comments
|
https://api.github.com/repos/ollama/ollama/issues/1315/events
|
https://github.com/ollama/ollama/issues/1315
| 2,016,885,646
|
I_kwDOJ0Z1Ps54NzuO
| 1,315
|
ollama on linux with amd rx 6900 XT
|
{
"login": "marekk1717",
"id": 25406173,
"node_id": "MDQ6VXNlcjI1NDA2MTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/25406173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marekk1717",
"html_url": "https://github.com/marekk1717",
"followers_url": "https://api.github.com/users/marekk1717/followers",
"following_url": "https://api.github.com/users/marekk1717/following{/other_user}",
"gists_url": "https://api.github.com/users/marekk1717/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marekk1717/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marekk1717/subscriptions",
"organizations_url": "https://api.github.com/users/marekk1717/orgs",
"repos_url": "https://api.github.com/users/marekk1717/repos",
"events_url": "https://api.github.com/users/marekk1717/events{/privacy}",
"received_events_url": "https://api.github.com/users/marekk1717/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-11-29T15:45:19
| 2024-01-25T21:53:17
| 2024-01-25T21:53:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is it possible to run Ollama on Linux with an AMD GPU?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1315/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6297
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6297/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6297/comments
|
https://api.github.com/repos/ollama/ollama/issues/6297/events
|
https://github.com/ollama/ollama/issues/6297
| 2,458,961,797
|
I_kwDOJ0Z1Ps6SkMeF
| 6,297
|
Models not loading when using ROCm(Radeon VII)
|
{
"login": "WannaBeOCer",
"id": 47701316,
"node_id": "MDQ6VXNlcjQ3NzAxMzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/47701316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WannaBeOCer",
"html_url": "https://github.com/WannaBeOCer",
"followers_url": "https://api.github.com/users/WannaBeOCer/followers",
"following_url": "https://api.github.com/users/WannaBeOCer/following{/other_user}",
"gists_url": "https://api.github.com/users/WannaBeOCer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WannaBeOCer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WannaBeOCer/subscriptions",
"organizations_url": "https://api.github.com/users/WannaBeOCer/orgs",
"repos_url": "https://api.github.com/users/WannaBeOCer/repos",
"events_url": "https://api.github.com/users/WannaBeOCer/events{/privacy}",
"received_events_url": "https://api.github.com/users/WannaBeOCer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-08-10T07:39:03
| 2024-08-18T16:43:47
| 2024-08-18T16:43:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This is deployed via Helm with a Radeon VII. I'm aware gfx906 is no longer supported and is in maintenance mode; let me know if you suggest using a different version of Ollama. I tried both 0.3.3 and 0.3.4, and both have the same issue, even with smaller models like gemma2:2b. When the model gemma2:9b loads, it works perfectly, along with other models.
values.yaml
```
ollama:
image:
repository: ollama/ollama
tag: "rocm"
pullPolicy: "Always"
resources:
limits:
amd.com/gpu: 1
extraEnv:
- name: HSA_OVERRIDE_GFX_VERSION
value: "9.0.6"
```
```
time=2024-08-10T07:22:45.652Z level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 gpu=0 parallel=4 available=17152241664 required="8.8 GiB"
time=2024-08-10T07:22:45.652Z level=INFO source=memory.go:309 msg="offload to rocm" layers.requested=-1 layers.model=43 layers.offload=43 layers.split="" memory.available="[16.0 GiB]" memory.required.full="8.8 GiB" memory.required.partial="8.8 GiB" memory.required.kv="2.6 GiB" memory.required.allocations="[8.8 GiB]" memory.weights.total="7.0 GiB" memory.weights.repeating="6.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
time=2024-08-10T07:22:45.653Z level=INFO source=server.go:392 msg="starting llama server" cmd="/tmp/ollama1813119290/runners/rocm_v60102/ollama_llama_server --model /root/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 43 --parallel 4 --port 46625"
time=2024-08-10T07:22:45.653Z level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-10T07:22:45.653Z level=INFO source=server.go:592 msg="waiting for llama runner to start responding"
time=2024-08-10T07:22:45.654Z level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="1e6f655" tid="128049608143680" timestamp=1723274565
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="128049608143680" timestamp=1723274565 total_threads=8
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="46625" tid="128049608143680" timestamp=1723274565
llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from /root/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma2
llama_model_loader: - kv 1: general.name str = gemma-2-9b-it
llama_model_loader: - kv 2: gemma2.context_length u32 = 8192
llama_model_loader: - kv 3: gemma2.embedding_length u32 = 3584
llama_model_loader: - kv 4: gemma2.block_count u32 = 42
llama_model_loader: - kv 5: gemma2.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: gemma2.attention.head_count u32 = 16
llama_model_loader: - kv 7: gemma2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: gemma2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: gemma2.attention.key_length u32 = 256
llama_model_loader: - kv 10: gemma2.attention.value_length u32 = 256
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: gemma2.attn_logit_softcapping f32 = 50.000000
llama_model_loader: - kv 13: gemma2.final_logit_softcapping f32 = 30.000000
llama_model_loader: - kv 14: gemma2.attention.sliding_window u32 = 4096
llama_model_loader: - kv 15: tokenizer.ggml.model str = llama
llama_model_loader: - kv 16: tokenizer.ggml.pre str = default
llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv 18: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000...
time=2024-08-10T07:22:45.904Z level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 20: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 22: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 23: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 24: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 25: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 26: tokenizer.chat_template str = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv 27: tokenizer.ggml.add_space_prefix bool = false
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - type f32: 169 tensors
llama_model_loader: - type q4_0: 294 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens cache size = 108
[GIN] 2024/08/10 - 07:22:46 | 200 | 15.091ยตs | 192.168.1.27 | GET "/"
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = gemma2
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 256000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 3584
llm_load_print_meta: n_layer = 42
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 256
llm_load_print_meta: n_swa = 4096
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 2
llm_load_print_meta: n_embd_k_gqa = 2048
llm_load_print_meta: n_embd_v_gqa = 2048
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 9B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 9.24 B
llm_load_print_meta: model size = 5.06 GiB (4.71 BPW)
llm_load_print_meta: general.name = gemma-2-9b-it
llm_load_print_meta: BOS token = 2 '<bos>'
llm_load_print_meta: EOS token = 1 '<eos>'
llm_load_print_meta: UNK token = 3 '<unk>'
llm_load_print_meta: PAD token = 0 '<pad>'
llm_load_print_meta: LF token = 227 '<0x0A>'
llm_load_print_meta: EOT token = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon VII, compute capability 9.0, VMM: no
llm_load_tensors: ggml ctx size = 0.41 MiB
llm_load_tensors: offloading 42 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 43/43 layers to GPU
llm_load_tensors: ROCm0 buffer size = 5185.21 MiB
llm_load_tensors: CPU buffer size = 717.77 MiB
[GIN] 2024/08/10 - 07:22:51 | 200 | 14.554ยตs | 192.168.1.27 | GET "/"
[GIN] 2024/08/10 - 07:22:51 | 200 | 8.792ยตs | 192.168.1.27 | GET "/"
time=2024-08-10T07:22:52.377Z level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server not responding"
time=2024-08-10T07:22:52.816Z level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: ROCm0 KV buffer size = 2688.00 MiB
llama_new_context_with_model: KV self size = 2688.00 MiB, K (f16): 1344.00 MiB, V (f16): 1344.00 MiB
llama_new_context_with_model: ROCm_Host output buffer size = 3.96 MiB
llama_new_context_with_model: ROCm0 compute buffer size = 507.00 MiB
llama_new_context_with_model: ROCm_Host compute buffer size = 39.01 MiB
llama_new_context_with_model: graph nodes = 1690
llama_new_context_with_model: graph splits = 2
Memory access fault by GPU node-1 (Agent handle: 0x1c54ea70) on address 0xe000. Reason: Page not present or supervisor privilege.
time=2024-08-10T07:22:53.115Z level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"
time=2024-08-10T07:22:53.366Z level=ERROR source=sched.go:451 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped)"
```
### OS
Linux
### GPU
AMD
### CPU
Intel
### Ollama version
0.3.4
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6297/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5139
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5139/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5139/comments
|
https://api.github.com/repos/ollama/ollama/issues/5139/events
|
https://github.com/ollama/ollama/pull/5139
| 2,362,074,844
|
PR_kwDOJ0Z1Ps5y8ozG
| 5,139
|
Update requirements.txt
|
{
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasota/followers",
"following_url": "https://api.github.com/users/dcasota/following{/other_user}",
"gists_url": "https://api.github.com/users/dcasota/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcasota/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcasota/subscriptions",
"organizations_url": "https://api.github.com/users/dcasota/orgs",
"repos_url": "https://api.github.com/users/dcasota/repos",
"events_url": "https://api.github.com/users/dcasota/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcasota/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-19T11:07:36
| 2024-09-12T01:56:56
| 2024-09-12T01:56:56
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5139",
"html_url": "https://github.com/ollama/ollama/pull/5139",
"diff_url": "https://github.com/ollama/ollama/pull/5139.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5139.patch",
"merged_at": "2024-09-12T01:56:56"
}
|
With chromadb==0.4.7, ingest.py still fails with:
`Cannot submit more than 5,461 embeddings at once. Please submit your embeddings in batches of size 5,461 or less.`
See
- https://github.com/ollama/ollama/issues/4476
- https://github.com/ollama/ollama/issues/2572
- https://github.com/ollama/ollama/issues/533
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5139/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/54
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/54/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/54/comments
|
https://api.github.com/repos/ollama/ollama/issues/54/events
|
https://github.com/ollama/ollama/pull/54
| 1,794,020,070
|
PR_kwDOJ0Z1Ps5U8VtL
| 54
|
no prompt on empty line
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-07-07T18:26:41
| 2023-07-07T18:29:43
| 2023-07-07T18:29:39
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/54",
"html_url": "https://github.com/ollama/ollama/pull/54",
"diff_url": "https://github.com/ollama/ollama/pull/54.diff",
"patch_url": "https://github.com/ollama/ollama/pull/54.patch",
"merged_at": "2023-07-07T18:29:39"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/54/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/54/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8262
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8262/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8262/comments
|
https://api.github.com/repos/ollama/ollama/issues/8262/events
|
https://github.com/ollama/ollama/issues/8262
| 2,761,725,190
|
I_kwDOJ0Z1Ps6knJUG
| 8,262
|
Segmentation Fault in AMD GPGPU Applications on 780M
|
{
"login": "zw963",
"id": 549126,
"node_id": "MDQ6VXNlcjU0OTEyNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/549126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zw963",
"html_url": "https://github.com/zw963",
"followers_url": "https://api.github.com/users/zw963/followers",
"following_url": "https://api.github.com/users/zw963/following{/other_user}",
"gists_url": "https://api.github.com/users/zw963/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zw963/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zw963/subscriptions",
"organizations_url": "https://api.github.com/users/zw963/orgs",
"repos_url": "https://api.github.com/users/zw963/repos",
"events_url": "https://api.github.com/users/zw963/events{/privacy}",
"received_events_url": "https://api.github.com/users/zw963/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 4
| 2024-12-28T13:36:31
| 2025-01-17T00:38:44
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, starting my Ollama model fails again when trying to use the AMD 780M iGPU.
The following is the log for `HSA_OVERRIDE_GFX_VERSION=11.0.0 /usr/bin/ollama serve`:
```sh
โฐโโโค $ HSA_OVERRIDE_GFX_VERSION=11.0.0 /usr/bin/ollama serve
2024/12/28 21:16:53 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION:11.0.0 HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/zw963/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-12-28T21:16:53.340+08:00 level=INFO source=images.go:757 msg="total blobs: 32"
time=2024-12-28T21:16:53.340+08:00 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
time=2024-12-28T21:16:53.341+08:00 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4)"
time=2024-12-28T21:16:53.341+08:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 rocm_avx]"
time=2024-12-28T21:16:53.341+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2024-12-28T21:16:53.365+08:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-12-28T21:16:53.366+08:00 level=INFO source=amd_linux.go:391 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=11.0.0
time=2024-12-28T21:16:53.366+08:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=rocm variant="" compute=gfx1103 driver=0.0 name=1002:15bf total="16.0 GiB" available="14.8 GiB"
^[[O[GIN] 2024/12/28 - 21:17:00 | 200 | 31.846ยตs | 127.0.0.1 | HEAD "/"
[GIN] 2024/12/28 - 21:17:00 | 200 | 19.231074ms | 127.0.0.1 | POST "/api/show"
time=2024-12-28T21:17:01.006+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/zw963/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 gpu=0 parallel=4 available=15936040960 required="8.8 GiB"
time=2024-12-28T21:17:01.006+08:00 level=INFO source=server.go:104 msg="system memory" total="46.8 GiB" free="42.7 GiB" free_swap="63.0 GiB"
time=2024-12-28T21:17:01.006+08:00 level=INFO source=memory.go:356 msg="offload to rocm" layers.requested=-1 layers.model=43 layers.offload=43 layers.split="" memory.available="[14.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.8 GiB" memory.required.partial="8.8 GiB" memory.required.kv="2.6 GiB" memory.required.allocations="[8.8 GiB]" memory.weights.total="7.0 GiB" memory.weights.repeating="6.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
time=2024-12-28T21:17:01.007+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/rocm_avx/ollama_llama_server runner --model /home/zw963/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 8192 --batch-size 512 --n-gpu-layers 43 --threads 8 --parallel 4 --port 12215"
time=2024-12-28T21:17:01.007+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-12-28T21:17:01.007+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2024-12-28T21:17:01.007+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2024-12-28T21:17:01.036+08:00 level=INFO source=runner.go:945 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon 780M, compute capability 11.0, VMM: no
time=2024-12-28T21:17:02.349+08:00 level=INFO source=runner.go:946 msg=system info="ROCm : PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=8
llama_load_model_from_file: using device ROCm0 (AMD Radeon 780M) - 23866 MiB free
time=2024-12-28T21:17:02.349+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:12215"
llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from /home/zw963/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma2
llama_model_loader: - kv 1: general.name str = gemma-2-9b-it
llama_model_loader: - kv 2: gemma2.context_length u32 = 8192
llama_model_loader: - kv 3: gemma2.embedding_length u32 = 3584
llama_model_loader: - kv 4: gemma2.block_count u32 = 42
llama_model_loader: - kv 5: gemma2.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: gemma2.attention.head_count u32 = 16
llama_model_loader: - kv 7: gemma2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: gemma2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: gemma2.attention.key_length u32 = 256
llama_model_loader: - kv 10: gemma2.attention.value_length u32 = 256
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: gemma2.attn_logit_softcapping f32 = 50.000000
llama_model_loader: - kv 13: gemma2.final_logit_softcapping f32 = 30.000000
llama_model_loader: - kv 14: gemma2.attention.sliding_window u32 = 4096
llama_model_loader: - kv 15: tokenizer.ggml.model str = llama
llama_model_loader: - kv 16: tokenizer.ggml.pre str = default
llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv 18: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 20: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 22: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 23: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 24: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 25: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 26: tokenizer.chat_template str = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv 27: tokenizer.ggml.add_space_prefix bool = false
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - type f32: 169 tensors
llama_model_loader: - type q4_0: 294 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 108
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = gemma2
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 256000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 3584
llm_load_print_meta: n_layer = 42
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 256
llm_load_print_meta: n_swa = 4096
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 2
llm_load_print_meta: n_embd_k_gqa = 2048
llm_load_print_meta: n_embd_v_gqa = 2048
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 9B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 9.24 B
llm_load_print_meta: model size = 5.06 GiB (4.71 BPW)
llm_load_print_meta: general.name = gemma-2-9b-it
llm_load_print_meta: BOS token = 2 '<bos>'
llm_load_print_meta: EOS token = 1 '<eos>'
llm_load_print_meta: EOT token = 107 '<end_of_turn>'
llm_load_print_meta: UNK token = 3 '<unk>'
llm_load_print_meta: PAD token = 0 '<pad>'
llm_load_print_meta: LF token = 227 '<0x0A>'
llm_load_print_meta: EOG token = 1 '<eos>'
llm_load_print_meta: EOG token = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
time=2024-12-28T21:17:02.535+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: offloading 42 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 43/43 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 717.77 MiB
llm_load_tensors: ROCm0 model buffer size = 5185.21 MiB
```
----------------
The following are the failure logs from when I tried to run `ollama run gemma2` in another terminal.
```
SIGSEGV: segmentation violation
PC=0x713070f0fe2b m=5 sigcode=1 addr=0x18
signal arrived during cgo execution
goroutine 20 gp=0xc000104a80 m=5 mp=0xc000100008 [syscall]:
runtime.cgocall(0x5693bccf4990, 0xc000204b78)
runtime/cgocall.go:167 +0x4b fp=0xc000204b50 sp=0xc000204b18 pc=0x5693bcaa896b
github.com/ollama/ollama/llama._Cfunc_llama_load_model_from_file(0x712ed4000be0, {0x0, 0x2b, 0x1, 0x0, 0x0, 0x0, 0x5693bccf41e0, 0xc000208000, 0x0, ...})
_cgo_gotypes.go:707 +0x50 fp=0xc000204b78 sp=0xc000204b50 pc=0x5693bcb53250
github.com/ollama/ollama/llama.LoadModelFromFile.func1({0x7ffc8d222d0e?, 0x0?}, {0x0, 0x2b, 0x1, 0x0, 0x0, 0x0, 0x5693bccf41e0, 0xc000208000, ...})
github.com/ollama/ollama/llama/llama.go:311 +0x127 fp=0xc000204c78 sp=0xc000204b78 pc=0x5693bcb55e67
github.com/ollama/ollama/llama.LoadModelFromFile({0x7ffc8d222d0e, 0x68}, {0x2b, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc00011e1b0, ...})
github.com/ollama/ollama/llama/llama.go:311 +0x2d6 fp=0xc000204dc8 sp=0xc000204c78 pc=0x5693bcb55b56
github.com/ollama/ollama/llama/runner.(*Server).loadModel(0xc0001461b0, {0x2b, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc00011e1b0, 0x0}, ...)
github.com/ollama/ollama/llama/runner/runner.go:859 +0xc5 fp=0xc000204f10 sp=0xc000204dc8 pc=0x5693bccf1c25
github.com/ollama/ollama/llama/runner.Execute.gowrap1()
github.com/ollama/ollama/llama/runner/runner.go:979 +0xda fp=0xc000204fe0 sp=0xc000204f10 pc=0x5693bccf357a
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000204fe8 sp=0xc000204fe0 pc=0x5693bcab63a1
created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:979 +0xd0d
goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc0000637b0 sp=0xc000063790 pc=0x5693bcaae76e
runtime.netpollblock(0xc000063800?, 0xbca46fc6?, 0x93?)
runtime/netpoll.go:575 +0xf7 fp=0xc0000637e8 sp=0xc0000637b0 pc=0x5693bca734d7
internal/poll.runtime_pollWait(0x712f89fca730, 0x72)
runtime/netpoll.go:351 +0x85 fp=0xc000063808 sp=0xc0000637e8 pc=0x5693bcaada65
internal/poll.(*pollDesc).wait(0xc000190100?, 0x2c?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000063830 sp=0xc000063808 pc=0x5693bcb038a7
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000190100)
internal/poll/fd_unix.go:620 +0x295 fp=0xc0000638d8 sp=0xc000063830 pc=0x5693bcb04e15
net.(*netFD).accept(0xc000190100)
net/fd_unix.go:172 +0x29 fp=0xc000063990 sp=0xc0000638d8 pc=0x5693bcb7d7a9
net.(*TCPListener).accept(0xc00012e6c0)
net/tcpsock_posix.go:159 +0x1e fp=0xc0000639e0 sp=0xc000063990 pc=0x5693bcb8ddfe
net.(*TCPListener).Accept(0xc00012e6c0)
net/tcpsock.go:372 +0x30 fp=0xc000063a10 sp=0xc0000639e0 pc=0x5693bcb8d130
net/http.(*onceCloseListener).Accept(0xc000146240?)
<autogenerated>:1 +0x24 fp=0xc000063a28 sp=0xc000063a10 pc=0x5693bcccbd04
net/http.(*Server).Serve(0xc00018e4b0, {0x5693bd0cbeb8, 0xc00012e6c0})
net/http/server.go:3330 +0x30c fp=0xc000063b58 sp=0xc000063a28 pc=0x5693bccbda4c
github.com/ollama/ollama/llama/runner.Execute({0xc000132010?, 0x5693bcab5ffc?, 0x0?})
github.com/ollama/ollama/llama/runner/runner.go:1005 +0x11a9 fp=0xc000063ef8 sp=0xc000063b58 pc=0x5693bccf3149
main.main()
github.com/ollama/ollama/cmd/runner/main.go:11 +0x54 fp=0xc000063f50 sp=0xc000063ef8 pc=0x5693bccf40d4
runtime.main()
runtime/proc.go:272 +0x29d fp=0xc000063fe0 sp=0xc000063f50 pc=0x5693bca7aabd
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000063fe8 sp=0xc000063fe0 pc=0x5693bcab63a1
goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc000098fa8 sp=0xc000098f88 pc=0x5693bcaae76e
runtime.goparkunlock(...)
runtime/proc.go:430
runtime.forcegchelper()
runtime/proc.go:337 +0xb8 fp=0xc000098fe0 sp=0xc000098fa8 pc=0x5693bca7adf8
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000098fe8 sp=0xc000098fe0 pc=0x5693bcab63a1
created by runtime.init.7 in goroutine 1
runtime/proc.go:325 +0x1a
goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc000099780 sp=0xc000099760 pc=0x5693bcaae76e
runtime.goparkunlock(...)
runtime/proc.go:430
runtime.bgsweep(0xc000026400)
runtime/mgcsweep.go:277 +0x94 fp=0xc0000997c8 sp=0xc000099780 pc=0x5693bca65634
runtime.gcenable.gowrap1()
runtime/mgc.go:204 +0x25 fp=0xc0000997e0 sp=0xc0000997c8 pc=0x5693bca59ee5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000997e8 sp=0xc0000997e0 pc=0x5693bcab63a1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:204 +0x66
goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
runtime.gopark(0xc000026400?, 0x5693bcfb8fc0?, 0x1?, 0x0?, 0xc000007340?)
runtime/proc.go:424 +0xce fp=0xc000099f78 sp=0xc000099f58 pc=0x5693bcaae76e
runtime.goparkunlock(...)
runtime/proc.go:430
runtime.(*scavengerState).park(0x5693bd2b6380)
runtime/mgcscavenge.go:425 +0x49 fp=0xc000099fa8 sp=0xc000099f78 pc=0x5693bca63069
runtime.bgscavenge(0xc000026400)
runtime/mgcscavenge.go:653 +0x3c fp=0xc000099fc8 sp=0xc000099fa8 pc=0x5693bca635dc
runtime.gcenable.gowrap2()
runtime/mgc.go:205 +0x25 fp=0xc000099fe0 sp=0xc000099fc8 pc=0x5693bca59e85
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000099fe8 sp=0xc000099fe0 pc=0x5693bcab63a1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:205 +0xa5
goroutine 18 gp=0xc000104700 m=nil [finalizer wait]:
runtime.gopark(0xc000098648?, 0x5693bca503e5?, 0xb0?, 0x1?, 0xc0000061c0?)
runtime/proc.go:424 +0xce fp=0xc000098620 sp=0xc000098600 pc=0x5693bcaae76e
runtime.runfinq()
runtime/mfinal.go:193 +0x107 fp=0xc0000987e0 sp=0xc000098620 pc=0x5693bca58f67
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000987e8 sp=0xc0000987e0 pc=0x5693bcab63a1
created by runtime.createfing in goroutine 1
runtime/mfinal.go:163 +0x3d
goroutine 19 gp=0xc0001048c0 m=nil [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc000094718 sp=0xc0000946f8 pc=0x5693bcaae76e
runtime.chanrecv(0xc0001120e0, 0x0, 0x1)
runtime/chan.go:639 +0x41c fp=0xc000094790 sp=0xc000094718 pc=0x5693bca49bbc
runtime.chanrecv1(0x0?, 0x0?)
runtime/chan.go:489 +0x12 fp=0xc0000947b8 sp=0xc000094790 pc=0x5693bca49792
runtime.unique_runtime_registerUniqueMapCleanup.func1(...)
runtime/mgc.go:1781
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
runtime/mgc.go:1784 +0x2f fp=0xc0000947e0 sp=0xc0000947b8 pc=0x5693bca5cd4f
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000947e8 sp=0xc0000947e0 pc=0x5693bcab63a1
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
runtime/mgc.go:1779 +0x96
goroutine 21 gp=0xc000104c40 m=nil [semacquire]:
runtime.gopark(0x0?, 0x0?, 0x20?, 0x81?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc000095618 sp=0xc0000955f8 pc=0x5693bcaae76e
runtime.goparkunlock(...)
runtime/proc.go:430
runtime.semacquire1(0xc0001461b8, 0x0, 0x1, 0x0, 0x12)
runtime/sema.go:178 +0x22c fp=0xc000095680 sp=0xc000095618 pc=0x5693bca8da8c
sync.runtime_Semacquire(0x0?)
runtime/sema.go:71 +0x25 fp=0xc0000956b8 sp=0xc000095680 pc=0x5693bcaaf9a5
sync.(*WaitGroup).Wait(0x0?)
sync/waitgroup.go:118 +0x48 fp=0xc0000956e0 sp=0xc0000956b8 pc=0x5693bcacbc48
github.com/ollama/ollama/llama/runner.(*Server).run(0xc0001461b0, {0x5693bd0cc4a0, 0xc000196050})
github.com/ollama/ollama/llama/runner/runner.go:315 +0x47 fp=0xc0000957b8 sp=0xc0000956e0 pc=0x5693bccee2c7
github.com/ollama/ollama/llama/runner.Execute.gowrap2()
github.com/ollama/ollama/llama/runner/runner.go:984 +0x28 fp=0xc0000957e0 sp=0xc0000957b8 pc=0x5693bccf3468
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000957e8 sp=0xc0000957e0 pc=0x5693bcab63a1
created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:984 +0xde5
goroutine 22 gp=0xc000105340 m=nil [IO wait]:
runtime.gopark(0xc0002a6000?, 0xc000185958?, 0x3e?, 0x1?, 0xb?)
runtime/proc.go:424 +0xce fp=0xc000185918 sp=0xc0001858f8 pc=0x5693bcaae76e
runtime.netpollblock(0x5693bcae9f98?, 0xbca46fc6?, 0x93?)
runtime/netpoll.go:575 +0xf7 fp=0xc000185950 sp=0xc000185918 pc=0x5693bca734d7
internal/poll.runtime_pollWait(0x712f89fca618, 0x72)
runtime/netpoll.go:351 +0x85 fp=0xc000185970 sp=0xc000185950 pc=0x5693bcaada65
internal/poll.(*pollDesc).wait(0xc000190180?, 0xc0001b8000?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000185998 sp=0xc000185970 pc=0x5693bcb038a7
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000190180, {0xc0001b8000, 0x1000, 0x1000})
internal/poll/fd_unix.go:165 +0x27a fp=0xc000185a30 sp=0xc000185998 pc=0x5693bcb043fa
net.(*netFD).Read(0xc000190180, {0xc0001b8000?, 0xc000185aa0?, 0x5693bcb03d65?})
net/fd_posix.go:55 +0x25 fp=0xc000185a78 sp=0xc000185a30 pc=0x5693bcb7c6c5
net.(*conn).Read(0xc000124098, {0xc0001b8000?, 0x0?, 0xc00012d058?})
net/net.go:189 +0x45 fp=0xc000185ac0 sp=0xc000185a78 pc=0x5693bcb860c5
net.(*TCPConn).Read(0xc00012d050?, {0xc0001b8000?, 0xc000190180?, 0xc000185af8?})
<autogenerated>:1 +0x25 fp=0xc000185af0 sp=0xc000185ac0 pc=0x5693bcb93165
net/http.(*connReader).Read(0xc00012d050, {0xc0001b8000, 0x1000, 0x1000})
net/http/server.go:798 +0x14b fp=0xc000185b40 sp=0xc000185af0 pc=0x5693bccb434b
bufio.(*Reader).fill(0xc000130480)
bufio/bufio.go:110 +0x103 fp=0xc000185b78 sp=0xc000185b40 pc=0x5693bcc72f63
bufio.(*Reader).Peek(0xc000130480, 0x4)
bufio/bufio.go:148 +0x53 fp=0xc000185b98 sp=0xc000185b78 pc=0x5693bcc73093
net/http.(*conn).serve(0xc000146240, {0x5693bd0cc468, 0xc00012cf60})
net/http/server.go:2127 +0x738 fp=0xc000185fb8 sp=0xc000185b98 pc=0x5693bccb9698
net/http.(*Server).Serve.gowrap3()
net/http/server.go:3360 +0x28 fp=0xc000185fe0 sp=0xc000185fb8 pc=0x5693bccbde48
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000185fe8 sp=0xc000185fe0 pc=0x5693bcab63a1
created by net/http.(*Server).Serve in goroutine 1
net/http/server.go:3360 +0x485
rax 0x712ed72c8ad0
rbx 0x712ed72ced40
rcx 0x713070da6663
rdx 0x712ed4005130
rdi 0x712ed72ced40
rsi 0x3
rbp 0x712edbff61d0
rsp 0x712edbff61a0
r8 0x0
r9 0x0
r10 0x4
r11 0xa66e143e45c2eb86
r12 0x0
r13 0x18
r14 0xffffffffffffffc0
r15 0x712dc3ef8e80
rip 0x713070f0fe2b
rflags 0x10206
cs 0x33
fs 0x0
gs 0x0
time=2024-12-28T21:17:03.119+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2024-12-28T21:17:03.370+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: exit status 2"
[GIN] 2024/12/28 - 21:17:03 | 500 | 2.421283447s | 127.0.0.1 | POST "/api/generate"
```
----------------
The following is my package info:
```
โฐโโโค $ 1 pacman -Q |grep 'ollama\|rocm'
ollama 0.5.4-1
ollama-rocm 0.5.4-1
python-pytorch-rocm 2.5.1-7
rocm-clang-ocl 6.1.2-1
rocm-cmake 6.2.4-1
rocm-core 6.2.4-2
rocm-device-libs 6.2.4-1
rocm-hip-libraries 6.2.2-1
rocm-hip-runtime 6.2.2-1
rocm-hip-sdk 6.2.2-1
rocm-language-runtime 6.2.2-1
rocm-llvm 6.2.4-1
rocm-opencl-runtime 6.2.4-1
rocm-opencl-sdk 6.2.2-1
rocm-smi-lib 6.2.4-1
rocminfo 6.2.4-1
```
Thanks
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.4-1 tested both on arch linux installed version and github release page downloaded version.
It worked before, and broke after I updated my Arch Linux system shortly before creating this issue.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8262/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8262/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6164
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6164/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6164/comments
|
https://api.github.com/repos/ollama/ollama/issues/6164/events
|
https://github.com/ollama/ollama/pull/6164
| 2,447,252,236
|
PR_kwDOJ0Z1Ps53YJXF
| 6,164
|
Update gpu.go to support older amdgpu
|
{
"login": "vjr",
"id": 612302,
"node_id": "MDQ6VXNlcjYxMjMwMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/612302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vjr",
"html_url": "https://github.com/vjr",
"followers_url": "https://api.github.com/users/vjr/followers",
"following_url": "https://api.github.com/users/vjr/following{/other_user}",
"gists_url": "https://api.github.com/users/vjr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vjr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vjr/subscriptions",
"organizations_url": "https://api.github.com/users/vjr/orgs",
"repos_url": "https://api.github.com/users/vjr/repos",
"events_url": "https://api.github.com/users/vjr/events{/privacy}",
"received_events_url": "https://api.github.com/users/vjr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-04T18:42:45
| 2024-08-05T07:40:21
| 2024-08-05T07:40:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6164",
"html_url": "https://github.com/ollama/ollama/pull/6164",
"diff_url": "https://github.com/ollama/ollama/pull/6164.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6164.patch",
"merged_at": null
}
|
Unblock users trying to run on older GPUs such as the AMD RX 580. Setting the env vars `ROC_ENABLE_PRE_VEGA=1` and `HSA_OVERRIDE_GFX_VERSION=8.0.3` should fix the "GPU too old" errors raised from https://github.com/ollama/ollama/blob/main/gpu/gpu.go#L62 and https://github.com/ollama/ollama/blob/main/gpu/amd_linux.go#L200.
|
{
"login": "vjr",
"id": 612302,
"node_id": "MDQ6VXNlcjYxMjMwMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/612302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vjr",
"html_url": "https://github.com/vjr",
"followers_url": "https://api.github.com/users/vjr/followers",
"following_url": "https://api.github.com/users/vjr/following{/other_user}",
"gists_url": "https://api.github.com/users/vjr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vjr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vjr/subscriptions",
"organizations_url": "https://api.github.com/users/vjr/orgs",
"repos_url": "https://api.github.com/users/vjr/repos",
"events_url": "https://api.github.com/users/vjr/events{/privacy}",
"received_events_url": "https://api.github.com/users/vjr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6164/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/457
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/457/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/457/comments
|
https://api.github.com/repos/ollama/ollama/issues/457/events
|
https://github.com/ollama/ollama/pull/457
| 1,878,105,668
|
PR_kwDOJ0Z1Ps5ZYCUX
| 457
|
do not HTML-escape prompt
|
{
"login": "sqs",
"id": 1976,
"node_id": "MDQ6VXNlcjE5NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sqs",
"html_url": "https://github.com/sqs",
"followers_url": "https://api.github.com/users/sqs/followers",
"following_url": "https://api.github.com/users/sqs/following{/other_user}",
"gists_url": "https://api.github.com/users/sqs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sqs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sqs/subscriptions",
"organizations_url": "https://api.github.com/users/sqs/orgs",
"repos_url": "https://api.github.com/users/sqs/repos",
"events_url": "https://api.github.com/users/sqs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sqs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-01T22:17:50
| 2023-09-03T04:24:31
| 2023-09-02T00:41:54
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/457",
"html_url": "https://github.com/ollama/ollama/pull/457",
"diff_url": "https://github.com/ollama/ollama/pull/457.diff",
"patch_url": "https://github.com/ollama/ollama/pull/457.patch",
"merged_at": "2023-09-02T00:41:53"
}
|
The `html/template` package automatically HTML-escapes interpolated strings in templates. This behavior is undesirable because it causes prompts like `<h1>hello` to be escaped to `<h1>hello` before being passed to the LLM.
The included test case passes, but before the code change, it failed:
```
--- FAIL: TestModelPrompt
images_test.go:21: got "a<h1>b", want "a<h1>b"
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/457/reactions",
"total_count": 4,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/457/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3806
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3806/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3806/comments
|
https://api.github.com/repos/ollama/ollama/issues/3806/events
|
https://github.com/ollama/ollama/issues/3806
| 2,255,276,030
|
I_kwDOJ0Z1Ps6GbMf-
| 3,806
|
sustained beep during inference with any model
|
{
"login": "matt453",
"id": 5316518,
"node_id": "MDQ6VXNlcjUzMTY1MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5316518?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matt453",
"html_url": "https://github.com/matt453",
"followers_url": "https://api.github.com/users/matt453/followers",
"following_url": "https://api.github.com/users/matt453/following{/other_user}",
"gists_url": "https://api.github.com/users/matt453/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matt453/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matt453/subscriptions",
"organizations_url": "https://api.github.com/users/matt453/orgs",
"repos_url": "https://api.github.com/users/matt453/repos",
"events_url": "https://api.github.com/users/matt453/events{/privacy}",
"received_events_url": "https://api.github.com/users/matt453/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-21T21:40:06
| 2024-04-21T22:29:06
| 2024-04-21T22:29:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am getting a sustained beep from my motherboard every time I run inference with an Ollama model while asking questions from a command-line shell. The inference happens near-instantly, but I am not sure what is causing the beeping.
I have tried running a few models
`llama2:latest`
`llama3:8b-instruct-fp16`
In either case I see the model load successfully into VRAM on my graphics card, and I can see activity on the GPU using tools like `nvtop`, `nvitop`, and `nvidia-smi`.
The other thing I notice is that the `ollama serve` process always uses 100% of one CPU core during the inference and the beeping.
I have confirmed all my fans are running (case, CPU, and Nvidia GPU), and the GPU temperature never goes above 40 C to 50 C. The CPU temperature reaches 60 C to 70 C during these windows.
I have tried other ways to stress the CPU and regular RAM (using `stress`), and nothing else triggers this beeping, even at higher CPU temperatures. Any ideas what is unique to running Ollama that could cause this?
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.30
|
{
"login": "matt453",
"id": 5316518,
"node_id": "MDQ6VXNlcjUzMTY1MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5316518?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matt453",
"html_url": "https://github.com/matt453",
"followers_url": "https://api.github.com/users/matt453/followers",
"following_url": "https://api.github.com/users/matt453/following{/other_user}",
"gists_url": "https://api.github.com/users/matt453/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matt453/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matt453/subscriptions",
"organizations_url": "https://api.github.com/users/matt453/orgs",
"repos_url": "https://api.github.com/users/matt453/repos",
"events_url": "https://api.github.com/users/matt453/events{/privacy}",
"received_events_url": "https://api.github.com/users/matt453/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3806/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2474
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2474/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2474/comments
|
https://api.github.com/repos/ollama/ollama/issues/2474/events
|
https://github.com/ollama/ollama/issues/2474
| 2,132,338,986
|
I_kwDOJ0Z1Ps5_GOkq
| 2,474
|
OpenAI compatibility : getting 404s
|
{
"login": "clairefro",
"id": 9841162,
"node_id": "MDQ6VXNlcjk4NDExNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9841162?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clairefro",
"html_url": "https://github.com/clairefro",
"followers_url": "https://api.github.com/users/clairefro/followers",
"following_url": "https://api.github.com/users/clairefro/following{/other_user}",
"gists_url": "https://api.github.com/users/clairefro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clairefro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clairefro/subscriptions",
"organizations_url": "https://api.github.com/users/clairefro/orgs",
"repos_url": "https://api.github.com/users/clairefro/repos",
"events_url": "https://api.github.com/users/clairefro/events{/privacy}",
"received_events_url": "https://api.github.com/users/clairefro/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-13T13:34:38
| 2024-02-14T00:50:53
| 2024-02-14T00:50:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Excited about OpenAI compatibility! I can't quite seem to get the OpenAI-compatible endpoint working and keep getting 404s. Does it require an update of Ollama? (I'm on Mac, so I think there are auto-updates.)
`ollama version 0.1.9`
`baseUrl` = `http://localhost:11434`
OpenAI endpoint
<img width="1224" alt="image" src="https://github.com/ollama/ollama/assets/9841162/9527414e-93e2-4e4e-b502-aa9fe627fc74">
It's working fine with the same model using the traditional completion endpoint
<img width="1249" alt="image" src="https://github.com/ollama/ollama/assets/9841162/6bb331b2-3abf-4f85-9e37-c3c47c4860c1">
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2474/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7065
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7065/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7065/comments
|
https://api.github.com/repos/ollama/ollama/issues/7065/events
|
https://github.com/ollama/ollama/pull/7065
| 2,559,723,683
|
PR_kwDOJ0Z1Ps59SBQC
| 7,065
|
llama: adjust clip patch for mingw utf-16
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-01T16:35:34
| 2024-10-01T22:24:28
| 2024-10-01T22:24:26
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7065",
"html_url": "https://github.com/ollama/ollama/pull/7065",
"diff_url": "https://github.com/ollama/ollama/pull/7065.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7065.patch",
"merged_at": "2024-10-01T22:24:26"
}
|
Fix the patch to compile under mingw, and remove extraneous runtime dependencies
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7065/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6605
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6605/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6605/comments
|
https://api.github.com/repos/ollama/ollama/issues/6605/events
|
https://github.com/ollama/ollama/pull/6605
| 2,502,356,632
|
PR_kwDOJ0Z1Ps56PHkm
| 6,605
|
Added the tool to generate 3D CAD models using Ollama
|
{
"login": "openvmp",
"id": 113321465,
"node_id": "U_kgDOBsEl-Q",
"avatar_url": "https://avatars.githubusercontent.com/u/113321465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/openvmp",
"html_url": "https://github.com/openvmp",
"followers_url": "https://api.github.com/users/openvmp/followers",
"following_url": "https://api.github.com/users/openvmp/following{/other_user}",
"gists_url": "https://api.github.com/users/openvmp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/openvmp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/openvmp/subscriptions",
"organizations_url": "https://api.github.com/users/openvmp/orgs",
"repos_url": "https://api.github.com/users/openvmp/repos",
"events_url": "https://api.github.com/users/openvmp/events{/privacy}",
"received_events_url": "https://api.github.com/users/openvmp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-09-03T09:19:49
| 2024-09-03T16:28:02
| 2024-09-03T16:28:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6605",
"html_url": "https://github.com/ollama/ollama/pull/6605",
"diff_url": "https://github.com/ollama/ollama/pull/6605.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6605.patch",
"merged_at": "2024-09-03T16:28:01"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6605/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/328
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/328/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/328/comments
|
https://api.github.com/repos/ollama/ollama/issues/328/events
|
https://github.com/ollama/ollama/issues/328
| 1,846,178,338
|
I_kwDOJ0Z1Ps5uCnIi
| 328
|
Allow pulling multiple models with `ollama pull <model 1> <model 2> <model 3>`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-08-11T04:28:37
| 2023-08-23T17:48:14
| 2023-08-23T17:48:14
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Originally seen here: https://www.promptfoo.dev/docs/guides/llama2-uncensored-benchmark-ollama/#requirements
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/328/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7905
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7905/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7905/comments
|
https://api.github.com/repos/ollama/ollama/issues/7905/events
|
https://github.com/ollama/ollama/issues/7905
| 2,711,120,638
|
I_kwDOJ0Z1Ps6hmGr-
| 7,905
|
ollama 0.4.7 results in: 127
|
{
"login": "bucovaina",
"id": 42909233,
"node_id": "MDQ6VXNlcjQyOTA5MjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/42909233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bucovaina",
"html_url": "https://github.com/bucovaina",
"followers_url": "https://api.github.com/users/bucovaina/followers",
"following_url": "https://api.github.com/users/bucovaina/following{/other_user}",
"gists_url": "https://api.github.com/users/bucovaina/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bucovaina/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bucovaina/subscriptions",
"organizations_url": "https://api.github.com/users/bucovaina/orgs",
"repos_url": "https://api.github.com/users/bucovaina/repos",
"events_url": "https://api.github.com/users/bucovaina/events{/privacy}",
"received_events_url": "https://api.github.com/users/bucovaina/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-12-02T09:00:07
| 2024-12-23T08:02:31
| 2024-12-23T08:02:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On a freshly installed RHEL8 host, ollama 0.4.7 does not start; it does when I install 0.4.6. The only change I made to /etc/systemd/system/ollama.service is setting OLLAMA_HOST to 0.0.0.0
```
[root@host ~]# curl -X POST http://localhost:11434/api/generate -d '{
> "model": "phi3",
> "prompt":"Why is the sky blue?"
> }'
{"error":"llama runner process has terminated: exit status 127"}[root@host ~]# ollama run phi3
Error: llama runner process has terminated: exit status 127
[root@host ~]# curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.4.6 sh
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> NVIDIA GPU installed.
[root@host ~]# curl -X POST http://localhost:11434/api/generate -d '{
"model": "phi3",
"prompt":"Why is the sky blue?"
}'
{"model":"phi3","created_at":"2024-12-02T08:51:31.215490419Z","response":"The","done":false}
{"model":"phi3","created_at":"2024-12-02T08:51:31.243051448Z","response":" sky","done":false}
{"model":"phi3","created_at":"2024-12-02T08:51:31.270859148Z","response":" appears","done":false}
{"model":"phi3","created_at":"2024-12-02T08:51:31.299653528Z","response":" pre","done":false}
{"model":"phi3","created_at":"2024-12-02T08:51:31.329058248Z","response":"domin","done":false}
{"model":"phi3","created_at":"2024-12-02T08:51:31.359087445Z","response":"antly","done":false}
{"model":"phi3","created_at":"2024-12-02T08:51:31.389054145Z","response":" blue","done":false}
{"model":"phi3","created_at":"2024-12-02T08:51:31.418885377Z","response":" to","done":false}
{"model":"phi3","created_at":"2024-12-02T08:51:31.448781861Z","response":" the","done":false}
...
...
```
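As an aside, the successful 0.4.6 run above streams NDJSON, one JSON object per line, each carrying a token fragment in `response`. The fragments can be reassembled like this (a minimal sketch using lines copied from the transcript; nothing here actually calls the endpoint):

```python
import json

# Three streaming lines copied from the 0.4.6 transcript above.
stream = '''{"model":"phi3","created_at":"2024-12-02T08:51:31.215490419Z","response":"The","done":false}
{"model":"phi3","created_at":"2024-12-02T08:51:31.243051448Z","response":" sky","done":false}
{"model":"phi3","created_at":"2024-12-02T08:51:31.270859148Z","response":" appears","done":false}'''

# Concatenate the "response" fragments to recover the reply text so far.
text = "".join(json.loads(line)["response"] for line in stream.splitlines())
print(text)  # The sky appears
```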
My GPU is officially not supported, so that might be the cause. If you'd rather close this as won't-fix because it's highly likely due to the Tesla M6, feel free to do so.
```
[root@host ~]# lshw -class video
*-display
description: VGA compatible controller
product: bochs-drmdrmfb
physical id: 1
bus info: pci@0000:00:01.0
logical name: /dev/fb0
version: 02
width: 32 bits
clock: 33MHz
capabilities: vga_controller bus_master rom fb
configuration: depth=32 driver=bochs-drm latency=0 resolution=1280,800
resources: irq:0 memory:80000000-80ffffff memory:8304b000-8304bfff memory:c0000-dffff
*-display
description: VGA compatible controller
product: GM204GL [Tesla M6]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
logical name: /dev/fb0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list fb
configuration: depth=32 driver=nvidia latency=0 mode=1280x800 visual=truecolor xres=1280 yres=800
resources: iomemory:38000-37fff iomemory:38000-37fff irq:44 memory:81000000-81ffffff memory:380000000000-38000fffffff memory:380010000000-380011ffffff ioport:7000(size=128)
[root@host ~]#
```
This is also a VM on a Proxmox host with dual Intel E5-2667 v3 CPUs.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.7
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7905/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/1696
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1696/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1696/comments
|
https://api.github.com/repos/ollama/ollama/issues/1696/events
|
https://github.com/ollama/ollama/issues/1696
| 2,055,107,411
|
I_kwDOJ0Z1Ps56fnNT
| 1,696
|
Setting 'num_gpu 0' shouldn't preclude the use of cuBLAS for prompt evaluation
|
{
"login": "jukofyork",
"id": 69222624,
"node_id": "MDQ6VXNlcjY5MjIyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jukofyork",
"html_url": "https://github.com/jukofyork",
"followers_url": "https://api.github.com/users/jukofyork/followers",
"following_url": "https://api.github.com/users/jukofyork/following{/other_user}",
"gists_url": "https://api.github.com/users/jukofyork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jukofyork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jukofyork/subscriptions",
"organizations_url": "https://api.github.com/users/jukofyork/orgs",
"repos_url": "https://api.github.com/users/jukofyork/repos",
"events_url": "https://api.github.com/users/jukofyork/events{/privacy}",
"received_events_url": "https://api.github.com/users/jukofyork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-12-24T13:49:23
| 2024-03-06T14:10:37
| 2023-12-24T14:09:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I've just moved from llama.cpp to ollama and my use case is to feed large prompts to high-parameter/high-quantization models for code evaluation, but I've found there to be quite a serious problem with ollama compared to llama.cpp:
With llama.cpp I am able to run up to 70b 'q6_K' or 'q5_K_M' parameter models on my system with 64gb RAM and 24gb VRAM by compiling with '-DLLAMA_BLAS=ON' and then running with '-ngl 0'. This allows me to use cuBLAS for the prompt evaluation on the GPU but the rest of the evaluation is run on the CPU:
A 70b parameter model will use at most 50-60gb of system RAM (depending on the quantization level), and you can quite clearly see it offload the de-quantization and other work to the GPU during prompt evaluation.
With ollama:
- If I set 'num_gpu' to 0 then nothing gets offloaded to the GPU at all and the prompt evaluation is done at unbearably slow speed on the CPU...
- If I set 'num_gpu' to 1 or more, the work is offloaded to the GPU for prompt evaluation, but because of the way the wrapped llama.cpp server works, it ends up with an extra, unnecessary copy of the model stored in system RAM too!
I'm also getting lots of "out of memory" type crashes for models that get close to the 24gb VRAM limit but otherwise work fine using llama.cpp, but I see from reading the discussion here that this might just be related to the v0.1.14 changes (it doesn't bother me anyway as I'm only interested in speeding up the prompt evaluation).
Well, after 2 days of pulling my hair out trying to work out why none of my changes to the source seemed to make any difference... I finally found out I had a copy of '/usr/share/bin/ollama serve' running from the stock installer all along :facepalm:
The problem lies with this code in 'llama.go':
```go
if runner.Accelerated && numGPU == 0 {
log.Printf("skipping accelerated runner because num_gpu=0")
continue
}
```
If I comment that out and recompile then everything works as expected!
I didn't want to open a pull request as I have no idea of the logic behind this check or how removing it would affect others who are using CPU-only inference.
|
{
"login": "jukofyork",
"id": 69222624,
"node_id": "MDQ6VXNlcjY5MjIyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jukofyork",
"html_url": "https://github.com/jukofyork",
"followers_url": "https://api.github.com/users/jukofyork/followers",
"following_url": "https://api.github.com/users/jukofyork/following{/other_user}",
"gists_url": "https://api.github.com/users/jukofyork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jukofyork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jukofyork/subscriptions",
"organizations_url": "https://api.github.com/users/jukofyork/orgs",
"repos_url": "https://api.github.com/users/jukofyork/repos",
"events_url": "https://api.github.com/users/jukofyork/events{/privacy}",
"received_events_url": "https://api.github.com/users/jukofyork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1696/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5605
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5605/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5605/comments
|
https://api.github.com/repos/ollama/ollama/issues/5605/events
|
https://github.com/ollama/ollama/pull/5605
| 2,401,147,776
|
PR_kwDOJ0Z1Ps50_TTF
| 5,605
|
Remove duplicate merge glitch
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-10T16:02:06
| 2024-07-10T18:47:11
| 2024-07-10T18:47:08
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5605",
"html_url": "https://github.com/ollama/ollama/pull/5605",
"diff_url": "https://github.com/ollama/ollama/pull/5605.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5605.patch",
"merged_at": "2024-07-10T18:47:08"
}
|
Fixes #5594
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5605/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7275
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7275/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7275/comments
|
https://api.github.com/repos/ollama/ollama/issues/7275/events
|
https://github.com/ollama/ollama/issues/7275
| 2,600,218,633
|
I_kwDOJ0Z1Ps6a_DAJ
| 7,275
|
New professional model for analyzing images of human organs
|
{
"login": "DewiarQR",
"id": 64423698,
"node_id": "MDQ6VXNlcjY0NDIzNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/64423698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DewiarQR",
"html_url": "https://github.com/DewiarQR",
"followers_url": "https://api.github.com/users/DewiarQR/followers",
"following_url": "https://api.github.com/users/DewiarQR/following{/other_user}",
"gists_url": "https://api.github.com/users/DewiarQR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DewiarQR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DewiarQR/subscriptions",
"organizations_url": "https://api.github.com/users/DewiarQR/orgs",
"repos_url": "https://api.github.com/users/DewiarQR/repos",
"events_url": "https://api.github.com/users/DewiarQR/events{/privacy}",
"received_events_url": "https://api.github.com/users/DewiarQR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-10-20T09:23:16
| 2024-10-20T09:23:16
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
A vision model that shows excellent results in medical image analysis: https://developer.nvidia.com/blog/ai-medical-imagery-model-offers-fast-cost-efficient-expert-analysis/
Here is the model itself: https://github.com/cozygene/SLIViT
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7275/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7275/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5739
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5739/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5739/comments
|
https://api.github.com/repos/ollama/ollama/issues/5739/events
|
https://github.com/ollama/ollama/pull/5739
| 2,412,498,804
|
PR_kwDOJ0Z1Ps51lXaw
| 5,739
|
Make `tool_call` response a `string`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-17T02:45:03
| 2024-07-17T03:14:24
| 2024-07-17T03:14:17
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5739",
"html_url": "https://github.com/ollama/ollama/pull/5739",
"diff_url": "https://github.com/ollama/ollama/pull/5739.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5739.patch",
"merged_at": "2024-07-17T03:14:17"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5739/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8277
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8277/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8277/comments
|
https://api.github.com/repos/ollama/ollama/issues/8277/events
|
https://github.com/ollama/ollama/issues/8277
| 2,764,653,228
|
I_kwDOJ0Z1Ps6kyUKs
| 8,277
|
mistral-nemo - context window 1024000?
|
{
"login": "mjaniec2013",
"id": 5925782,
"node_id": "MDQ6VXNlcjU5MjU3ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5925782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mjaniec2013",
"html_url": "https://github.com/mjaniec2013",
"followers_url": "https://api.github.com/users/mjaniec2013/followers",
"following_url": "https://api.github.com/users/mjaniec2013/following{/other_user}",
"gists_url": "https://api.github.com/users/mjaniec2013/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mjaniec2013/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mjaniec2013/subscriptions",
"organizations_url": "https://api.github.com/users/mjaniec2013/orgs",
"repos_url": "https://api.github.com/users/mjaniec2013/repos",
"events_url": "https://api.github.com/users/mjaniec2013/events{/privacy}",
"received_events_url": "https://api.github.com/users/mjaniec2013/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-12-31T20:16:33
| 2024-12-31T20:16:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
model_name = 'mistral-nemo'
ollama.show(model_name)['modelinfo']

{'general.architecture': 'llama',
 'general.basename': 'Mistral-Nemo',
 'general.file_type': 2,
 'general.finetune': 'Instruct',
 'general.languages': ['en', 'fr', 'de', 'es', 'it', 'pt', 'ru', 'zh', 'ja'],
 'general.license': 'apache-2.0',
 'general.parameter_count': 12247782400,
 'general.quantization_version': 2,
 'general.size_label': '12B',
 'general.type': 'model',
 'general.version': '2407',
 'llama.attention.head_count': 32,
 'llama.attention.head_count_kv': 8,
 'llama.attention.key_length': 128,
 'llama.attention.layer_norm_rms_epsilon': 1e-05,
 'llama.attention.value_length': 128,
 'llama.block_count': 40,
 'llama.context_length': 1024000,   <<<<<
 'llama.embedding_length': 5120,
 'llama.feed_forward_length': 14336,
 'llama.rope.dimension_count': 128,
 'llama.rope.freq_base': 1000000,
 'llama.vocab_size': 131072,
 'tokenizer.ggml.add_bos_token': True,
 'tokenizer.ggml.add_eos_token': False,
 'tokenizer.ggml.add_space_prefix': False,
 'tokenizer.ggml.bos_token_id': 1,
 'tokenizer.ggml.eos_token_id': 2,
 'tokenizer.ggml.merges': None,
 'tokenizer.ggml.model': 'gpt2',
 'tokenizer.ggml.pre': 'tekken',
 'tokenizer.ggml.token_type': None,
 'tokenizer.ggml.tokens': None,
 'tokenizer.ggml.unknown_token_id': 0}
```
Shouldn't it be 128k instead of 1M+?
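Worth noting: `llama.context_length` is read from the GGUF metadata (the model's declared maximum), and it is separate from the context window Ollama actually allocates, which is controlled by the `num_ctx` option and defaults to a much smaller value. If a 128k working window is the goal, it can be set explicitly, e.g. with a Modelfile (a sketch — the base model name is the one above):

```
FROM mistral-nemo
PARAMETER num_ctx 131072
```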
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8277/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1695
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1695/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1695/comments
|
https://api.github.com/repos/ollama/ollama/issues/1695/events
|
https://github.com/ollama/ollama/issues/1695
| 2,055,088,189
|
I_kwDOJ0Z1Ps56fig9
| 1,695
|
How to make the model stop generating response when using via API?
|
{
"login": "EliasPereirah",
"id": 16616409,
"node_id": "MDQ6VXNlcjE2NjE2NDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/16616409?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EliasPereirah",
"html_url": "https://github.com/EliasPereirah",
"followers_url": "https://api.github.com/users/EliasPereirah/followers",
"following_url": "https://api.github.com/users/EliasPereirah/following{/other_user}",
"gists_url": "https://api.github.com/users/EliasPereirah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EliasPereirah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EliasPereirah/subscriptions",
"organizations_url": "https://api.github.com/users/EliasPereirah/orgs",
"repos_url": "https://api.github.com/users/EliasPereirah/repos",
"events_url": "https://api.github.com/users/EliasPereirah/events{/privacy}",
"received_events_url": "https://api.github.com/users/EliasPereirah/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-24T12:38:37
| 2024-01-03T02:27:54
| 2024-01-03T02:27:54
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When using the CLI I can press Ctrl+C, but how do I do it via the API?
Can anyone help me with this?
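For what it's worth, with a streaming endpoint like `/api/generate` the usual way to stop generation is simply to stop reading and close the HTTP connection; the server aborts once the client disconnects. The consumption loop can be sketched as below — a fake list of JSON lines stands in for the real HTTP response, and the `should_stop` predicate is an assumption for illustration:

```python
import json

def consume_stream(lines, should_stop):
    """Read streaming JSON lines (shaped like /api/generate chunks)
    and stop early when should_stop(text_so_far) returns True.
    Breaking out of the loop (and closing the real connection)
    is what signals the server to abort generation."""
    text = ""
    for line in lines:
        chunk = json.loads(line)
        text += chunk.get("response", "")
        if chunk.get("done") or should_stop(text):
            break  # with a real response object: resp.close()
    return text

# Fake stream standing in for an HTTP response body:
fake = [
    '{"response": "Hello"}',
    '{"response": ", world"}',
    '{"response": "!", "done": true}',
]
print(consume_stream(fake, lambda t: "world" in t))  # stops early
```

The same pattern applies with `requests` (`stream=True`, then `iter_lines()` and `resp.close()` when you want to stop).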
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1695/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6621
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6621/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6621/comments
|
https://api.github.com/repos/ollama/ollama/issues/6621/events
|
https://github.com/ollama/ollama/pull/6621
| 2,504,100,743
|
PR_kwDOJ0Z1Ps56VESa
| 6,621
|
llama: sync llama.cpp to commit 8962422
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-04T01:27:11
| 2024-09-04T19:14:51
| 2024-09-04T19:14:50
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6621",
"html_url": "https://github.com/ollama/ollama/pull/6621",
"diff_url": "https://github.com/ollama/ollama/pull/6621.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6621.patch",
"merged_at": "2024-09-04T19:14:50"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6621/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6742
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6742/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6742/comments
|
https://api.github.com/repos/ollama/ollama/issues/6742/events
|
https://github.com/ollama/ollama/issues/6742
| 2,518,549,020
|
I_kwDOJ0Z1Ps6WHgIc
| 6,742
|
Add OLMoE 1b-7b
|
{
"login": "Meshwa428",
"id": 135232056,
"node_id": "U_kgDOCA96OA",
"avatar_url": "https://avatars.githubusercontent.com/u/135232056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Meshwa428",
"html_url": "https://github.com/Meshwa428",
"followers_url": "https://api.github.com/users/Meshwa428/followers",
"following_url": "https://api.github.com/users/Meshwa428/following{/other_user}",
"gists_url": "https://api.github.com/users/Meshwa428/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Meshwa428/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Meshwa428/subscriptions",
"organizations_url": "https://api.github.com/users/Meshwa428/orgs",
"repos_url": "https://api.github.com/users/Meshwa428/repos",
"events_url": "https://api.github.com/users/Meshwa428/events{/privacy}",
"received_events_url": "https://api.github.com/users/Meshwa428/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-09-11T05:41:16
| 2024-09-25T22:08:19
| 2024-09-25T22:08:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It's great for mobile applications and can run on edge devices with only 1B active params.
HF reference: https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct
Motivation:
It is fast, reduces carbon emissions, and runs on edge devices. What else do we need?
With only 1B active params it is on par with other LLMs.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6742/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6742/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/909
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/909/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/909/comments
|
https://api.github.com/repos/ollama/ollama/issues/909/events
|
https://github.com/ollama/ollama/issues/909
| 1,962,663,798
|
I_kwDOJ0Z1Ps50-992
| 909
|
Error: unexpected end of JSON input
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null | 5
| 2023-10-26T04:06:11
| 2024-03-11T19:09:05
| 2024-03-11T19:09:05
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was testing using fly.io on an A100 40GB in ORD region:
```
curl -X POST https://REDACTED.fly.dev/api/pull -d '{"name": "llama2:13b"}'
{"error":"unexpected end of JSON input"}
```
In the logs:
```
2023-10-26T04:01:54.300 app[REDACTED] ord [info] [GIN] 2023/10/26 - 04:01:54 | 200 | 314.1µs | 174.xxx.xxx.xxx | POST "/api/pull"
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/909/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/909/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2010
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2010/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2010/comments
|
https://api.github.com/repos/ollama/ollama/issues/2010/events
|
https://github.com/ollama/ollama/issues/2010
| 2,083,086,017
|
I_kwDOJ0Z1Ps58KV7B
| 2,010
|
How to use Ollama in Google Colab?
|
{
"login": "MonikaVijayakumar25",
"id": 156766855,
"node_id": "U_kgDOCVgShw",
"avatar_url": "https://avatars.githubusercontent.com/u/156766855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MonikaVijayakumar25",
"html_url": "https://github.com/MonikaVijayakumar25",
"followers_url": "https://api.github.com/users/MonikaVijayakumar25/followers",
"following_url": "https://api.github.com/users/MonikaVijayakumar25/following{/other_user}",
"gists_url": "https://api.github.com/users/MonikaVijayakumar25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MonikaVijayakumar25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MonikaVijayakumar25/subscriptions",
"organizations_url": "https://api.github.com/users/MonikaVijayakumar25/orgs",
"repos_url": "https://api.github.com/users/MonikaVijayakumar25/repos",
"events_url": "https://api.github.com/users/MonikaVijayakumar25/events{/privacy}",
"received_events_url": "https://api.github.com/users/MonikaVijayakumar25/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-01-16T06:10:27
| 2024-02-27T10:25:16
| 2024-01-16T19:16:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have tried it via LangChain, but I am getting a connection error.
ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd433ce48e0>: Failed to establish a new connection: [Errno 111] Connection refused'))
Is there any way to use Ollama in Colab?
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2010/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/297
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/297/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/297/comments
|
https://api.github.com/repos/ollama/ollama/issues/297/events
|
https://github.com/ollama/ollama/issues/297
| 1,838,055,497
|
I_kwDOJ0Z1Ps5tjoBJ
| 297
|
Provide a way to override the entire prompt template at runtime
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2023-08-06T05:18:59
| 2023-08-08T04:56:23
| 2023-08-08T04:56:22
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This would be for (perhaps more advanced) use cases where a prompt is built outside of Ollama
```bash
curl -X POST http://localhost:11434/api/generate -d '{
"model": "llama2",
"template": "<<SYS>>..."
}'
```
On prompt vs prompt template:
**Prompt:** The direct input given to a language model to start generating text. It's the first "nudge" that sets the direction for the model's output. For example, if you're using a chat model, the user's initial question or statement can be considered the prompt. If you're generating a story, the first sentence or paragraph could be the prompt.
**Prompt Template:** This is a more structured form of a prompt that is used to consistently structure prompts for specific use cases. It can contain placeholders or variables that get filled in with specific content depending on the context. This can be especially useful for maintaining consistency when working with large volumes of data or for certain applications that require a uniform input structure.
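The distinction above can be sketched in a few lines; the template string here is purely illustrative, not Ollama's actual template format:

```python
from string import Template

# A prompt template: structure with placeholders.
template = Template("<<SYS>>$system<</SYS>> $prompt")

# A prompt: the concrete input filled in at request time.
filled = template.substitute(
    system="You are a helpful assistant.",
    prompt="Why is the sky blue?",
)
print(filled)
```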
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/297/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/566
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/566/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/566/comments
|
https://api.github.com/repos/ollama/ollama/issues/566/events
|
https://github.com/ollama/ollama/pull/566
| 1,907,388,494
|
PR_kwDOJ0Z1Ps5a6VLE
| 566
|
Use API to check if model exists and pull if necessary
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-21T16:52:05
| 2023-09-21T17:35:15
| 2023-09-21T17:35:14
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/566",
"html_url": "https://github.com/ollama/ollama/pull/566",
"diff_url": "https://github.com/ollama/ollama/pull/566.diff",
"patch_url": "https://github.com/ollama/ollama/pull/566.patch",
"merged_at": "2023-09-21T17:35:14"
}
|
Resolves #484
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/566/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/598
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/598/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/598/comments
|
https://api.github.com/repos/ollama/ollama/issues/598/events
|
https://github.com/ollama/ollama/pull/598
| 1,912,431,987
|
PR_kwDOJ0Z1Ps5bLJ3I
| 598
|
add painter message for exit
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-25T23:31:15
| 2023-09-26T22:17:41
| 2023-09-26T22:17:40
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/598",
"html_url": "https://github.com/ollama/ollama/pull/598",
"diff_url": "https://github.com/ollama/ollama/pull/598.diff",
"patch_url": "https://github.com/ollama/ollama/pull/598.patch",
"merged_at": "2023-09-26T22:17:40"
}
|
Tell the user how to exit. Other options are available (`/exit`, `ctrl+c`), but we don't want to be too verbose.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/598/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8611
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8611/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8611/comments
|
https://api.github.com/repos/ollama/ollama/issues/8611/events
|
https://github.com/ollama/ollama/issues/8611
| 2,813,726,762
|
I_kwDOJ0Z1Ps6nthAq
| 8,611
|
/clear not actually clearing
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2025-01-27T18:15:01
| 2025-01-29T21:05:05
| 2025-01-29T21:05:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Steps I've taken which show the bug. There might be a simpler sequence, but this is mine.
1. ollama run hf.co/mradermacher/DS-R1-Distill-Q2.5-7B-RP-GGUF:latest
2. /set parameter num_ctx 16384
3. /save chrisdeepseek
4. /bye
5. ollama run chrisdeepseek
6. create flappybird.py code.
7. (do some testing extra)
8. /bye
9. ollama run chrisdeepseek
10. flappy bird code comes back!
11. /clear
12. /bye
13. ollama run chrisdeepseek
14. flappy bird code comes back!
Seems like the larger context doesn't actually get cleared.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8611/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3539
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3539/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3539/comments
|
https://api.github.com/repos/ollama/ollama/issues/3539/events
|
https://github.com/ollama/ollama/pull/3539
| 2,231,447,035
|
PR_kwDOJ0Z1Ps5sBhDO
| 3,539
|
Update README.md
|
{
"login": "writinwaters",
"id": 93570324,
"node_id": "U_kgDOBZPFFA",
"avatar_url": "https://avatars.githubusercontent.com/u/93570324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/writinwaters",
"html_url": "https://github.com/writinwaters",
"followers_url": "https://api.github.com/users/writinwaters/followers",
"following_url": "https://api.github.com/users/writinwaters/following{/other_user}",
"gists_url": "https://api.github.com/users/writinwaters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/writinwaters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/writinwaters/subscriptions",
"organizations_url": "https://api.github.com/users/writinwaters/orgs",
"repos_url": "https://api.github.com/users/writinwaters/repos",
"events_url": "https://api.github.com/users/writinwaters/events{/privacy}",
"received_events_url": "https://api.github.com/users/writinwaters/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-08T14:56:36
| 2024-04-08T14:58:14
| 2024-04-08T14:58:14
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3539",
"html_url": "https://github.com/ollama/ollama/pull/3539",
"diff_url": "https://github.com/ollama/ollama/pull/3539.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3539.patch",
"merged_at": "2024-04-08T14:58:14"
}
|
RAGFlow now supports integration with Ollama.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3539/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2221
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2221/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2221/comments
|
https://api.github.com/repos/ollama/ollama/issues/2221/events
|
https://github.com/ollama/ollama/pull/2221
| 2,103,045,835
|
PR_kwDOJ0Z1Ps5lNBrF
| 2,221
|
adjust download and upload concurrency based on available bandwidth
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-26T23:15:44
| 2024-03-07T17:27:34
| 2024-03-07T17:27:33
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2221",
"html_url": "https://github.com/ollama/ollama/pull/2221",
"diff_url": "https://github.com/ollama/ollama/pull/2221.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2221.patch",
"merged_at": "2024-03-07T17:27:33"
}
|
Use basic heuristics to determine concurrency:
1. start with 2 concurrency
2. watch the rate
3. if the rate is increasing, add more concurrency
4. stop adding concurrency if the rate plateaus
this only scales concurrency up, never down
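The scale-up heuristic above can be sketched as a small Go function. This is an illustrative sketch, not the PR's actual implementation: `tuneConcurrency`, the simulated `rates` samples, and the 5% improvement threshold are all assumptions made for the example.

```go
package main

import "fmt"

// tuneConcurrency sketches the heuristic described above: start at a
// concurrency of 2 and add workers while each observed rate sample keeps
// improving; stop as soon as the rate plateaus. It only ever scales up,
// never down. Names and the 5% threshold are illustrative assumptions.
func tuneConcurrency(rates []float64) int {
	concurrency := 2
	if len(rates) == 0 {
		return concurrency
	}
	last := rates[0]
	for _, r := range rates[1:] {
		if r > last*1.05 { // still improving by >5%: add concurrency
			concurrency++
			last = r
		} else {
			break // rate plateaued: stop adding
		}
	}
	return concurrency
}

func main() {
	// Rates keep rising until the last sample, so concurrency grows to 4.
	fmt.Println(tuneConcurrency([]float64{10, 25, 40, 42})) // prints 4
}
```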
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2221/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/137
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/137/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/137/comments
|
https://api.github.com/repos/ollama/ollama/issues/137/events
|
https://github.com/ollama/ollama/issues/137
| 1,814,240,059
|
I_kwDOJ0Z1Ps5sIxs7
| 137
|
Can't clone repo on Windows directly: invalid path library/modelfiles/llama2:13b
|
{
"login": "nathanleclaire",
"id": 1476820,
"node_id": "MDQ6VXNlcjE0NzY4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1476820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathanleclaire",
"html_url": "https://github.com/nathanleclaire",
"followers_url": "https://api.github.com/users/nathanleclaire/followers",
"following_url": "https://api.github.com/users/nathanleclaire/following{/other_user}",
"gists_url": "https://api.github.com/users/nathanleclaire/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nathanleclaire/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nathanleclaire/subscriptions",
"organizations_url": "https://api.github.com/users/nathanleclaire/orgs",
"repos_url": "https://api.github.com/users/nathanleclaire/repos",
"events_url": "https://api.github.com/users/nathanleclaire/events{/privacy}",
"received_events_url": "https://api.github.com/users/nathanleclaire/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-07-20T15:14:57
| 2023-07-21T21:06:45
| 2023-07-21T21:06:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Weird result trying to clone repo on my Windows computer.
```
error: invalid path 'library/modelfiles/llama2:13b'
fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
```
|
{
"login": "nathanleclaire",
"id": 1476820,
"node_id": "MDQ6VXNlcjE0NzY4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1476820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathanleclaire",
"html_url": "https://github.com/nathanleclaire",
"followers_url": "https://api.github.com/users/nathanleclaire/followers",
"following_url": "https://api.github.com/users/nathanleclaire/following{/other_user}",
"gists_url": "https://api.github.com/users/nathanleclaire/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nathanleclaire/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nathanleclaire/subscriptions",
"organizations_url": "https://api.github.com/users/nathanleclaire/orgs",
"repos_url": "https://api.github.com/users/nathanleclaire/repos",
"events_url": "https://api.github.com/users/nathanleclaire/events{/privacy}",
"received_events_url": "https://api.github.com/users/nathanleclaire/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/137/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5328
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5328/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5328/comments
|
https://api.github.com/repos/ollama/ollama/issues/5328/events
|
https://github.com/ollama/ollama/pull/5328
| 2,378,235,770
|
PR_kwDOJ0Z1Ps5zxvgn
| 5,328
|
Add a `stop model` command to CLI.
|
{
"login": "asdf93074",
"id": 19619718,
"node_id": "MDQ6VXNlcjE5NjE5NzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19619718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asdf93074",
"html_url": "https://github.com/asdf93074",
"followers_url": "https://api.github.com/users/asdf93074/followers",
"following_url": "https://api.github.com/users/asdf93074/following{/other_user}",
"gists_url": "https://api.github.com/users/asdf93074/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asdf93074/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asdf93074/subscriptions",
"organizations_url": "https://api.github.com/users/asdf93074/orgs",
"repos_url": "https://api.github.com/users/asdf93074/repos",
"events_url": "https://api.github.com/users/asdf93074/events{/privacy}",
"received_events_url": "https://api.github.com/users/asdf93074/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-06-27T13:57:32
| 2024-09-11T07:34:25
| 2024-09-11T07:34:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5328",
"html_url": "https://github.com/ollama/ollama/pull/5328",
"diff_url": "https://github.com/ollama/ollama/pull/5328.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5328.patch",
"merged_at": null
}
|
Adds a `stop` command to the CLI which takes just the model name and makes a generate request to stop it by setting `keep_alive` to 0, as suggested in the FAQ.
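The FAQ-documented mechanism this relies on is a `/api/generate` request whose `keep_alive` is 0, which makes the server unload the model immediately. A minimal Go sketch of the request payload such a command would send (the helper name `buildStopRequest` is an assumption for illustration, not the PR's code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildStopRequest builds the JSON body a stop command would POST to
// /api/generate: naming the model with keep_alive set to 0 asks the
// server to unload it immediately, per the Ollama FAQ. The function
// name is illustrative, not taken from the PR.
func buildStopRequest(model string) ([]byte, error) {
	return json.Marshal(map[string]any{
		"model":      model,
		"keep_alive": 0,
	})
}

func main() {
	// encoding/json sorts map keys alphabetically in the output.
	b, _ := buildStopRequest("llama3.2")
	fmt.Println(string(b)) // prints {"keep_alive":0,"model":"llama3.2"}
}
```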
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5328/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5328/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7711
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7711/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7711/comments
|
https://api.github.com/repos/ollama/ollama/issues/7711/events
|
https://github.com/ollama/ollama/issues/7711
| 2,666,422,422
|
I_kwDOJ0Z1Ps6e7mCW
| 7,711
|
Large host RAM allocation when using full gpu offloading
|
{
"login": "CkovMk",
"id": 29831136,
"node_id": "MDQ6VXNlcjI5ODMxMTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/29831136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CkovMk",
"html_url": "https://github.com/CkovMk",
"followers_url": "https://api.github.com/users/CkovMk/followers",
"following_url": "https://api.github.com/users/CkovMk/following{/other_user}",
"gists_url": "https://api.github.com/users/CkovMk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CkovMk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CkovMk/subscriptions",
"organizations_url": "https://api.github.com/users/CkovMk/orgs",
"repos_url": "https://api.github.com/users/CkovMk/repos",
"events_url": "https://api.github.com/users/CkovMk/events{/privacy}",
"received_events_url": "https://api.github.com/users/CkovMk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 14
| 2024-11-17T20:05:27
| 2024-12-23T07:55:49
| 2024-12-23T07:55:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama allocates the same amount of host (CPU) RAM as the model requires in VRAM when loading the model. If my understanding is correct, it's not actually using this RAM. (Perhaps it's just memory mapped for writing to VRAM? I'm no expert on this...) Once the model is fully loaded, the buffered RAM usage goes back to normal.
See picture below:

I do have enough VRAM, but If host RAM is not big enough, it exits with this error:
```
Error: llama runner process has terminated: error loading model: unable to allocate backend buffer
```
Is this expected behavior? Is it possible to reduce the host memory requirement when loading a model?
I'm asking because I'm running a passed-through GPU in a VM, and I want to save some host RAM by allocating only 8 GiB of RAM to a VM with 24 GiB of VRAM.
Thanks!
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.2
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7711/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/4103
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4103/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4103/comments
|
https://api.github.com/repos/ollama/ollama/issues/4103/events
|
https://github.com/ollama/ollama/issues/4103
| 2,276,276,785
|
I_kwDOJ0Z1Ps6HrTox
| 4,103
|
Apple's OpenELM: model request!
|
{
"login": "andrewcampi",
"id": 63204545,
"node_id": "MDQ6VXNlcjYzMjA0NTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/63204545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andrewcampi",
"html_url": "https://github.com/andrewcampi",
"followers_url": "https://api.github.com/users/andrewcampi/followers",
"following_url": "https://api.github.com/users/andrewcampi/following{/other_user}",
"gists_url": "https://api.github.com/users/andrewcampi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andrewcampi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andrewcampi/subscriptions",
"organizations_url": "https://api.github.com/users/andrewcampi/orgs",
"repos_url": "https://api.github.com/users/andrewcampi/repos",
"events_url": "https://api.github.com/users/andrewcampi/events{/privacy}",
"received_events_url": "https://api.github.com/users/andrewcampi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-02T19:03:18
| 2024-05-02T20:21:15
| 2024-05-02T20:21:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/apple/OpenELM-3B
Please add this model to the Ollama library. Thanks!
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4103/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7673
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7673/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7673/comments
|
https://api.github.com/repos/ollama/ollama/issues/7673/events
|
https://github.com/ollama/ollama/issues/7673
| 2,660,291,826
|
I_kwDOJ0Z1Ps6ekNTy
| 7,673
|
CUDA error: out of memory - Llama 3.2 3B on laptop with 13 GB RAM
|
{
"login": "kripper",
"id": 1479804,
"node_id": "MDQ6VXNlcjE0Nzk4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1479804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kripper",
"html_url": "https://github.com/kripper",
"followers_url": "https://api.github.com/users/kripper/followers",
"following_url": "https://api.github.com/users/kripper/following{/other_user}",
"gists_url": "https://api.github.com/users/kripper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kripper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kripper/subscriptions",
"organizations_url": "https://api.github.com/users/kripper/orgs",
"repos_url": "https://api.github.com/users/kripper/repos",
"events_url": "https://api.github.com/users/kripper/events{/privacy}",
"received_events_url": "https://api.github.com/users/kripper/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 26
| 2024-11-14T22:59:15
| 2024-12-04T21:05:54
| 2024-12-04T21:05:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hardware has 11.1 GiB (RAM) + 1.9 GiB (GPU) = 13 GiB, but fails to run a 3B model.
Any idea why?
```
Nov 14 17:49:49 fedora ollama[1197]: r14 0x6
Nov 14 17:49:49 fedora ollama[1197]: r15 0x626b00000
Nov 14 17:49:49 fedora ollama[1197]: rip 0x7fd1485c4664
Nov 14 17:49:49 fedora ollama[1197]: rflags 0x246
Nov 14 17:49:49 fedora ollama[1197]: cs 0x33
Nov 14 17:49:49 fedora ollama[1197]: fs 0x0
Nov 14 17:49:49 fedora ollama[1197]: gs 0x0
Nov 14 17:49:49 fedora ollama[1197]: [GIN] 2024/11/14 - 17:49:49 | 200 | 1m56s | 192.168.0.7 | POST "/api/chat"
Nov 14 17:52:06 fedora ollama[1197]: [GIN] 2024/11/14 - 17:52:06 | 200 | 74.935ยตs | 192.168.0.7 | GET "/api/version"
Nov 14 17:52:13 fedora ollama[1197]: [GIN] 2024/11/14 - 17:52:13 | 200 | 41.147ยตs | 192.168.0.7 | GET "/api/version"
Nov 14 17:52:33 fedora ollama[1197]: [GIN] 2024/11/14 - 17:52:33 | 200 | 1.082721ms | 192.168.0.7 | GET "/api/tags"
Nov 14 17:52:34 fedora ollama[1197]: time=2024-11-14T17:52:34.562-05:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-347193f9-2627-a9eb-8c2e-e2158c820e98 library=cuda total="1.9 GiB" available="94.7 MiB"
Nov 14 17:52:39 fedora ollama[1197]: time=2024-11-14T17:52:39.610-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.047152054 model=/usr/share/ollama/.ollama/models/blobs/sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730
Nov 14 17:52:39 fedora ollama[1197]: time=2024-11-14T17:52:39.797-05:00 level=INFO source=server.go:105 msg="system memory" total="11.1 GiB" free="9.9 GiB" free_swap="8.0 GiB"
Nov 14 17:52:39 fedora ollama[1197]: time=2024-11-14T17:52:39.798-05:00 level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=13 layers.split="" memory.available="[1.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.2 GiB" memory.required.partial="1.9 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[1.9 GiB]" memory.weights.total="1.8 GiB" memory.weights.repeating="1.5 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="256.5 MiB" memory.graph.partial="570.7 MiB"
Nov 14 17:52:39 fedora ollama[1197]: time=2024-11-14T17:52:39.799-05:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama1543119167/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 2048 --batch-size 512 --n-gpu-layers 13 --threads 2 --parallel 1 --port 42457"
Nov 14 17:52:39 fedora ollama[1197]: time=2024-11-14T17:52:39.800-05:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Nov 14 17:52:39 fedora ollama[1197]: time=2024-11-14T17:52:39.800-05:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
Nov 14 17:52:39 fedora ollama[1197]: time=2024-11-14T17:52:39.800-05:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
Nov 14 17:52:39 fedora ollama[1197]: time=2024-11-14T17:52:39.811-05:00 level=INFO source=runner.go:863 msg="starting go runner"
Nov 14 17:52:39 fedora ollama[1197]: time=2024-11-14T17:52:39.811-05:00 level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=2
Nov 14 17:52:39 fedora ollama[1197]: time=2024-11-14T17:52:39.811-05:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:42457"
Nov 14 17:52:39 fedora ollama[1197]: time=2024-11-14T17:52:39.861-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.297443815 model=/usr/share/ollama/.ollama/models/blobs/sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 0: general.architecture str = llama
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 1: general.type str = model
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 3: general.finetune str = Instruct
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 4: general.basename str = Llama-3.2
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 5: general.size_label str = 3B
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 8: llama.block_count u32 = 28
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 9: llama.context_length u32 = 131072
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 10: llama.embedding_length u32 = 3072
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 12: llama.attention.head_count u32 = 24
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 16: llama.attention.key_length u32 = 128
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 17: llama.attention.value_length u32 = 128
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 18: general.file_type u32 = 15
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
Nov 14 17:52:39 fedora ollama[1197]: llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Nov 14 17:52:40 fedora ollama[1197]: time=2024-11-14T17:52:40.051-05:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
Nov 14 17:52:40 fedora ollama[1197]: llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["ฤ ฤ ", "ฤ ฤ ฤ ฤ ", "ฤ ฤ ฤ ฤ ", "...
Nov 14 17:52:40 fedora ollama[1197]: llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
Nov 14 17:52:40 fedora ollama[1197]: llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
Nov 14 17:52:40 fedora ollama[1197]: llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
Nov 14 17:52:40 fedora ollama[1197]: llama_model_loader: - kv 29: general.quantization_version u32 = 2
Nov 14 17:52:40 fedora ollama[1197]: llama_model_loader: - type f32: 58 tensors
Nov 14 17:52:40 fedora ollama[1197]: llama_model_loader: - type q4_K: 168 tensors
Nov 14 17:52:40 fedora ollama[1197]: llama_model_loader: - type q6_K: 29 tensors
Nov 14 17:52:40 fedora ollama[1197]: time=2024-11-14T17:52:40.110-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.547122288 model=/usr/share/ollama/.ollama/models/blobs/sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730
Nov 14 17:52:40 fedora ollama[1197]: llm_load_vocab: special tokens cache size = 256
Nov 14 17:52:40 fedora ollama[1197]: llm_load_vocab: token to piece cache size = 0.7999 MB
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: format = GGUF V3 (latest)
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: arch = llama
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: vocab type = BPE
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_vocab = 128256
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_merges = 280147
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: vocab_only = 0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_ctx_train = 131072
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_embd = 3072
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_layer = 28
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_head = 24
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_head_kv = 8
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_rot = 128
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_swa = 0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_embd_head_k = 128
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_embd_head_v = 128
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_gqa = 3
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_embd_k_gqa = 1024
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_embd_v_gqa = 1024
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: f_norm_eps = 0.0e+00
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: f_logit_scale = 0.0e+00
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_ff = 8192
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_expert = 0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_expert_used = 0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: causal attn = 1
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: pooling type = 0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: rope type = 0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: rope scaling = linear
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: freq_base_train = 500000.0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: freq_scale_train = 1
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: n_ctx_orig_yarn = 131072
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: rope_finetuned = unknown
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: ssm_d_conv = 0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: ssm_d_inner = 0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: ssm_d_state = 0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: ssm_dt_rank = 0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: model type = 3B
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: model ftype = Q4_K - Medium
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: model params = 3.21 B
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: model size = 1.87 GiB (5.01 BPW)
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: general.name = Llama 3.2 3B Instruct
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: LF token = 128 'Ä'
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
Nov 14 17:52:40 fedora ollama[1197]: llm_load_print_meta: max token length = 256
Nov 14 17:52:40 fedora ollama[1197]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Nov 14 17:52:40 fedora ollama[1197]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 14 17:52:40 fedora ollama[1197]: ggml_cuda_init: found 1 CUDA devices:
Nov 14 17:52:40 fedora ollama[1197]: Device 0: NVIDIA GeForce 940M, compute capability 5.0, VMM: yes
Nov 14 17:52:40 fedora ollama[1197]: llm_load_tensors: ggml ctx size = 0.24 MiB
Nov 14 17:53:05 fedora ollama[1197]: llm_load_tensors: offloading 13 repeating layers to GPU
Nov 14 17:53:05 fedora ollama[1197]: llm_load_tensors: offloaded 13/29 layers to GPU
Nov 14 17:53:05 fedora ollama[1197]: llm_load_tensors: CPU buffer size = 1918.35 MiB
Nov 14 17:53:05 fedora ollama[1197]: llm_load_tensors: CUDA0 buffer size = 757.22 MiB
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: n_ctx = 2048
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: n_batch = 512
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: n_ubatch = 512
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: flash_attn = 0
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: freq_base = 500000.0
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: freq_scale = 1
Nov 14 17:53:06 fedora ollama[1197]: llama_kv_cache_init: CUDA_Host KV buffer size = 120.00 MiB
Nov 14 17:53:06 fedora ollama[1197]: llama_kv_cache_init: CUDA0 KV buffer size = 104.00 MiB
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: KV self size = 224.00 MiB, K (f16): 112.00 MiB, V (f16): 112.00 MiB
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: CUDA_Host output buffer size = 0.50 MiB
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: CUDA0 compute buffer size = 564.73 MiB
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: CUDA_Host compute buffer size = 14.01 MiB
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: graph nodes = 902
Nov 14 17:53:06 fedora ollama[1197]: llama_new_context_with_model: graph splits = 199
Nov 14 17:53:06 fedora ollama[1197]: time=2024-11-14T17:53:06.409-05:00 level=INFO source=server.go:601 msg="llama runner started in 26.61 seconds"
Nov 14 17:53:16 fedora ollama[1197]: CUDA error: out of memory
Nov 14 17:53:16 fedora ollama[1197]: current device: 0, in function alloc at ggml-cuda.cu:406
Nov 14 17:53:16 fedora ollama[1197]: cuMemCreate(&handle, reserve_size, &prop, 0)
Nov 14 17:53:16 fedora ollama[1197]: ggml-cuda.cu:132: CUDA error
Nov 14 17:53:16 fedora ollama[2556]: [New LWP 2509]
Nov 14 17:53:16 fedora ollama[2556]: [New LWP 2508]
Nov 14 17:53:16 fedora ollama[2556]: [New LWP 2507]
Nov 14 17:53:16 fedora ollama[2556]: [New LWP 2506]
Nov 14 17:53:16 fedora ollama[2556]: [New LWP 2503]
Nov 14 17:53:16 fedora ollama[2556]: [New LWP 2502]
Nov 14 17:53:16 fedora ollama[2556]: [New LWP 2501]
Nov 14 17:53:16 fedora ollama[2556]: [New LWP 2500]
Nov 14 17:53:16 fedora ollama[2556]: [New LWP 2499]
Nov 14 17:53:16 fedora ollama[2556]: [Thread debugging using libthread_db enabled]
Nov 14 17:53:16 fedora ollama[2556]: Using host libthread_db library "/lib64/libthread_db.so.1".
Nov 14 17:53:16 fedora ollama[2556]: 0x00005595604abba3 in ?? ()
Nov 14 17:53:16 fedora ollama[2556]: #0 0x00005595604abba3 in ?? ()
Nov 14 17:53:16 fedora ollama[2556]: #1 0x0000559560470ef0 in _start ()
Nov 14 17:53:16 fedora ollama[2556]: [Inferior 1 (process 2498) detached]
Nov 14 17:53:16 fedora ollama[1197]: SIGABRT: abort
Nov 14 17:53:16 fedora ollama[1197]: PC=0x7fc2f38a8664 m=4 sigcode=18446744073709551610
Nov 14 17:53:16 fedora ollama[1197]: signal arrived during cgo execution
Nov 14 17:53:16 fedora ollama[1197]: goroutine 7 gp=0xc000156000 m=4 mp=0xc000049808 [syscall]:
Nov 14 17:53:16 fedora ollama[1197]: runtime.cgocall(0x5595606bee90, 0xc000052b60)
Nov 14 17:53:16 fedora ollama[1197]: runtime/cgocall.go:157 +0x4b fp=0xc000052b38 sp=0xc000052b00 pc=0x5595604413cb
Nov 14 17:53:16 fedora ollama[1197]: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7fc27c0068f0, {0x200, 0x7fc27c028e80, 0x0, 0x0, 0x7fc27c029690, 0x7fc27c029ea0, 0x7fc27c02a6b0, 0x7fc2559b1600, 0x0, ...})
Nov 14 17:53:16 fedora ollama[1197]: _cgo_gotypes.go:543 +0x52 fp=0xc000052b60 sp=0xc000052b38 pc=0x55956053e952
Nov 14 17:53:16 fedora ollama[1197]: github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5595606bad4b?, 0x7fc27c0068f0?)
Nov 14 17:53:16 fedora ollama[1197]: github.com/ollama/ollama/llama/llama.go:167 +0xd8 fp=0xc000052c80 sp=0xc000052b60 pc=0x559560540e78
Nov 14 17:53:16 fedora ollama[1197]: github.com/ollama/ollama/llama.(*Context).Decode(0x559560cb3060?, 0x0?)
Nov 14 17:53:16 fedora ollama[1197]: github.com/ollama/ollama/llama/llama.go:167 +0x17 fp=0xc000052cc8 sp=0xc000052c80 pc=0x559560540cd7
Nov 14 17:53:16 fedora ollama[1197]: main.(*Server).processBatch(0xc000122120, 0xc0000ce000, 0xc000052f10)
Nov 14 17:53:16 fedora ollama[1197]: github.com/ollama/ollama/llama/runner/runner.go:424 +0x29e fp=0xc000052ed0 sp=0xc000052cc8 pc=0x5595606b9d7e
Nov 14 17:53:16 fedora ollama[1197]: main.(*Server).run(0xc000122120, {0x5595609fca40, 0xc000078050})
Nov 14 17:53:16 fedora ollama[1197]: github.com/ollama/ollama/llama/runner/runner.go:338 +0x1a5 fp=0xc000052fb8 sp=0xc000052ed0 pc=0x5595606b9765
Nov 14 17:53:16 fedora ollama[1197]: main.main.gowrap2()
Nov 14 17:53:16 fedora ollama[1197]: github.com/ollama/ollama/llama/runner/runner.go:901 +0x28 fp=0xc000052fe0 sp=0xc000052fb8 pc=0x5595606bdec8
Nov 14 17:53:16 fedora ollama[1197]: runtime.goexit({})
Nov 14 17:53:16 fedora ollama[1197]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000052fe8 sp=0xc000052fe0 pc=0x5595604a9de1
Nov 14 17:53:16 fedora ollama[1197]: created by main.main in goroutine 1
Nov 14 17:53:16 fedora ollama[1197]: github.com/ollama/ollama/llama/runner/runner.go:901 +0xc2b
Nov 14 17:53:16 fedora ollama[1197]: goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
Nov 14 17:53:16 fedora ollama[1197]: runtime.gopark(0x1?, 0xc000029908?, 0xf4?, 0x7d?, 0xc0000298e8?)
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:402 +0xce fp=0xc000029888 sp=0xc000029868 pc=0x55956047800e
Nov 14 17:53:16 fedora ollama[1197]: runtime.netpollblock(0x10?, 0x60440b26?, 0x95?)
Nov 14 17:53:16 fedora ollama[1197]: runtime/netpoll.go:573 +0xf7 fp=0xc0000298c0 sp=0xc000029888 pc=0x559560470257
Nov 14 17:53:16 fedora ollama[1197]: internal/poll.runtime_pollWait(0x7fc2f36d6fe0, 0x72)
Nov 14 17:53:16 fedora ollama[1197]: runtime/netpoll.go:345 +0x85 fp=0xc0000298e0 sp=0xc0000298c0 pc=0x5595604a4aa5
Nov 14 17:53:16 fedora ollama[1197]: internal/poll.(*pollDesc).wait(0x3?, 0x7fc2f55512c8?, 0x0)
Nov 14 17:53:16 fedora ollama[1197]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000029908 sp=0xc0000298e0 pc=0x5595604f49c7
Nov 14 17:53:16 fedora ollama[1197]: internal/poll.(*pollDesc).waitRead(...)
Nov 14 17:53:16 fedora ollama[1197]: internal/poll/fd_poll_runtime.go:89
Nov 14 17:53:16 fedora ollama[1197]: internal/poll.(*FD).Accept(0xc000150080)
Nov 14 17:53:16 fedora ollama[1197]: internal/poll/fd_unix.go:611 +0x2ac fp=0xc0000299b0 sp=0xc000029908 pc=0x5595604f5e8c
Nov 14 17:53:16 fedora ollama[1197]: net.(*netFD).accept(0xc000150080)
Nov 14 17:53:16 fedora ollama[1197]: net/fd_unix.go:172 +0x29 fp=0xc000029a68 sp=0xc0000299b0 pc=0x5595605648a9
Nov 14 17:53:16 fedora ollama[1197]: net.(*TCPListener).accept(0xc00002e1e0)
Nov 14 17:53:16 fedora ollama[1197]: net/tcpsock_posix.go:159 +0x1e fp=0xc000029a90 sp=0xc000029a68 pc=0x5595605755de
Nov 14 17:53:16 fedora ollama[1197]: net.(*TCPListener).Accept(0xc00002e1e0)
Nov 14 17:53:16 fedora ollama[1197]: net/tcpsock.go:327 +0x30 fp=0xc000029ac0 sp=0xc000029a90 pc=0x559560574930
Nov 14 17:53:16 fedora ollama[1197]: net/http.(*onceCloseListener).Accept(0xc00009e000?)
Nov 14 17:53:16 fedora ollama[1197]: <autogenerated>:1 +0x24 fp=0xc000029ad8 sp=0xc000029ac0 pc=0x55956069ba44
Nov 14 17:53:16 fedora ollama[1197]: net/http.(*Server).Serve(0xc0000163c0, {0x5595609fc400, 0xc00002e1e0})
Nov 14 17:53:16 fedora ollama[1197]: net/http/server.go:3260 +0x33e fp=0xc000029c08 sp=0xc000029ad8 pc=0x55956069285e
Nov 14 17:53:16 fedora ollama[1197]: main.main()
Nov 14 17:53:16 fedora ollama[1197]: github.com/ollama/ollama/llama/runner/runner.go:921 +0xfcc fp=0xc000029f50 sp=0xc000029c08 pc=0x5595606bdc4c
Nov 14 17:53:16 fedora ollama[1197]: runtime.main()
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:271 +0x29d fp=0xc000029fe0 sp=0xc000029f50 pc=0x559560477bdd
Nov 14 17:53:16 fedora ollama[1197]: runtime.goexit({})
Nov 14 17:53:16 fedora ollama[1197]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000029fe8 sp=0xc000029fe0 pc=0x5595604a9de1
Nov 14 17:53:16 fedora ollama[1197]: goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
Nov 14 17:53:16 fedora ollama[1197]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:402 +0xce fp=0xc000042fa8 sp=0xc000042f88 pc=0x55956047800e
Nov 14 17:53:16 fedora ollama[1197]: runtime.goparkunlock(...)
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:408
Nov 14 17:53:16 fedora ollama[1197]: runtime.forcegchelper()
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:326 +0xb8 fp=0xc000042fe0 sp=0xc000042fa8 pc=0x559560477e98
Nov 14 17:53:16 fedora ollama[1197]: runtime.goexit({})
Nov 14 17:53:16 fedora ollama[1197]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000042fe8 sp=0xc000042fe0 pc=0x5595604a9de1
Nov 14 17:53:16 fedora ollama[1197]: created by runtime.init.6 in goroutine 1
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:314 +0x1a
Nov 14 17:53:16 fedora ollama[1197]: goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
Nov 14 17:53:16 fedora ollama[1197]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:402 +0xce fp=0xc000043780 sp=0xc000043760 pc=0x55956047800e
Nov 14 17:53:16 fedora ollama[1197]: runtime.goparkunlock(...)
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:408
Nov 14 17:53:16 fedora ollama[1197]: runtime.bgsweep(0xc00006a000)
Nov 14 17:53:16 fedora ollama[1197]: runtime/mgcsweep.go:278 +0x94 fp=0xc0000437c8 sp=0xc000043780 pc=0x559560462b54
Nov 14 17:53:16 fedora ollama[1197]: runtime.gcenable.gowrap1()
Nov 14 17:53:16 fedora ollama[1197]: runtime/mgc.go:203 +0x25 fp=0xc0000437e0 sp=0xc0000437c8 pc=0x559560457685
Nov 14 17:53:16 fedora ollama[1197]: runtime.goexit({})
Nov 14 17:53:16 fedora ollama[1197]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000437e8 sp=0xc0000437e0 pc=0x5595604a9de1
Nov 14 17:53:16 fedora ollama[1197]: created by runtime.gcenable in goroutine 1
Nov 14 17:53:16 fedora ollama[1197]: runtime/mgc.go:203 +0x66
Nov 14 17:53:16 fedora ollama[1197]: goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
Nov 14 17:53:16 fedora ollama[1197]: runtime.gopark(0xc00006a000?, 0x5595608fce98?, 0x1?, 0x0?, 0xc000007340?)
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:402 +0xce fp=0xc000043f78 sp=0xc000043f58 pc=0x55956047800e
Nov 14 17:53:16 fedora ollama[1197]: runtime.goparkunlock(...)
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:408
Nov 14 17:53:16 fedora ollama[1197]: runtime.(*scavengerState).park(0x559560bca4c0)
Nov 14 17:53:16 fedora ollama[1197]: runtime/mgcscavenge.go:425 +0x49 fp=0xc000043fa8 sp=0xc000043f78 pc=0x559560460549
Nov 14 17:53:16 fedora ollama[1197]: runtime.bgscavenge(0xc00006a000)
Nov 14 17:53:16 fedora ollama[1197]: runtime/mgcscavenge.go:653 +0x3c fp=0xc000043fc8 sp=0xc000043fa8 pc=0x559560460adc
Nov 14 17:53:16 fedora ollama[1197]: runtime.gcenable.gowrap2()
Nov 14 17:53:16 fedora ollama[1197]: runtime/mgc.go:204 +0x25 fp=0xc000043fe0 sp=0xc000043fc8 pc=0x559560457625
Nov 14 17:53:16 fedora ollama[1197]: runtime.goexit({})
Nov 14 17:53:16 fedora ollama[1197]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000043fe8 sp=0xc000043fe0 pc=0x5595604a9de1
Nov 14 17:53:16 fedora ollama[1197]: created by runtime.gcenable in goroutine 1
Nov 14 17:53:16 fedora ollama[1197]: runtime/mgc.go:204 +0xa5
Nov 14 17:53:16 fedora ollama[1197]: goroutine 5 gp=0xc000007c00 m=nil [finalizer wait]:
Nov 14 17:53:16 fedora ollama[1197]: runtime.gopark(0xc000042648?, 0x55956044af85?, 0xa8?, 0x1?, 0xc0000061c0?)
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:402 +0xce fp=0xc000042620 sp=0xc000042600 pc=0x55956047800e
Nov 14 17:53:16 fedora ollama[1197]: runtime.runfinq()
Nov 14 17:53:16 fedora ollama[1197]: runtime/mfinal.go:194 +0x107 fp=0xc0000427e0 sp=0xc000042620 pc=0x5595604566c7
Nov 14 17:53:16 fedora ollama[1197]: runtime.goexit({})
Nov 14 17:53:16 fedora ollama[1197]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000427e8 sp=0xc0000427e0 pc=0x5595604a9de1
Nov 14 17:53:16 fedora ollama[1197]: created by runtime.createfing in goroutine 1
Nov 14 17:53:16 fedora ollama[1197]: runtime/mfinal.go:164 +0x3d
Nov 14 17:53:16 fedora ollama[1197]: goroutine 108 gp=0xc000007dc0 m=nil [IO wait]:
Nov 14 17:53:16 fedora ollama[1197]: runtime.gopark(0x10?, 0x10?, 0xf0?, 0x4d?, 0xb?)
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:402 +0xce fp=0xc000044da8 sp=0xc000044d88 pc=0x55956047800e
Nov 14 17:53:16 fedora ollama[1197]: runtime.netpollblock(0x5595604de558?, 0x60440b26?, 0x95?)
Nov 14 17:53:16 fedora ollama[1197]: runtime/netpoll.go:573 +0xf7 fp=0xc000044de0 sp=0xc000044da8 pc=0x559560470257
Nov 14 17:53:16 fedora ollama[1197]: internal/poll.runtime_pollWait(0x7fc2f36d6ee8, 0x72)
Nov 14 17:53:16 fedora ollama[1197]: runtime/netpoll.go:345 +0x85 fp=0xc000044e00 sp=0xc000044de0 pc=0x5595604a4aa5
Nov 14 17:53:16 fedora ollama[1197]: internal/poll.(*pollDesc).wait(0xc00009c000?, 0xc000092101?, 0x0)
Nov 14 17:53:16 fedora ollama[1197]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000044e28 sp=0xc000044e00 pc=0x5595604f49c7
Nov 14 17:53:16 fedora ollama[1197]: internal/poll.(*pollDesc).waitRead(...)
Nov 14 17:53:16 fedora ollama[1197]: internal/poll/fd_poll_runtime.go:89
Nov 14 17:53:16 fedora ollama[1197]: internal/poll.(*FD).Read(0xc00009c000, {0xc000092101, 0x1, 0x1})
Nov 14 17:53:16 fedora ollama[1197]: internal/poll/fd_unix.go:164 +0x27a fp=0xc000044ec0 sp=0xc000044e28 pc=0x5595604f551a
Nov 14 17:53:16 fedora ollama[1197]: net.(*netFD).Read(0xc00009c000, {0xc000092101?, 0xc000044f48?, 0x5595604a66d0?})
Nov 14 17:53:16 fedora ollama[1197]: net/fd_posix.go:55 +0x25 fp=0xc000044f08 sp=0xc000044ec0 pc=0x5595605637a5
Nov 14 17:53:16 fedora ollama[1197]: net.(*conn).Read(0xc000094008, {0xc000092101?, 0x0?, 0x559560cb3060?})
Nov 14 17:53:16 fedora ollama[1197]: net/net.go:185 +0x45 fp=0xc000044f50 sp=0xc000044f08 pc=0x55956056da65
Nov 14 17:53:16 fedora ollama[1197]: net.(*TCPConn).Read(0xc0000920f0?, {0xc000092101?, 0x0?, 0x0?})
Nov 14 17:53:16 fedora ollama[1197]: <autogenerated>:1 +0x25 fp=0xc000044f80 sp=0xc000044f50 pc=0x559560579445
Nov 14 17:53:16 fedora ollama[1197]: net/http.(*connReader).backgroundRead(0xc0000920f0)
Nov 14 17:53:16 fedora ollama[1197]: net/http/server.go:681 +0x37 fp=0xc000044fc8 sp=0xc000044f80 pc=0x5595606881d7
Nov 14 17:53:16 fedora ollama[1197]: net/http.(*connReader).startBackgroundRead.gowrap2()
Nov 14 17:53:16 fedora ollama[1197]: net/http/server.go:677 +0x25 fp=0xc000044fe0 sp=0xc000044fc8 pc=0x559560688105
Nov 14 17:53:16 fedora ollama[1197]: runtime.goexit({})
Nov 14 17:53:16 fedora ollama[1197]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000044fe8 sp=0xc000044fe0 pc=0x5595604a9de1
Nov 14 17:53:16 fedora ollama[1197]: created by net/http.(*connReader).startBackgroundRead in goroutine 18
Nov 14 17:53:16 fedora ollama[1197]: net/http/server.go:677 +0xba
Nov 14 17:53:16 fedora ollama[1197]: goroutine 18 gp=0xc000082380 m=nil [select]:
Nov 14 17:53:16 fedora ollama[1197]: runtime.gopark(0xc00029fa80?, 0x2?, 0x60?, 0x0?, 0xc00029f824?)
Nov 14 17:53:16 fedora ollama[1197]: runtime/proc.go:402 +0xce fp=0xc00029f698 sp=0xc00029f678 pc=0x55956047800e
Nov 14 17:53:16 fedora ollama[1197]: runtime.selectgo(0xc00029fa80, 0xc00029f820, 0x59a?, 0x0, 0x1?, 0x1)
Nov 14 17:53:16 fedora ollama[1197]: runtime/select.go:327 +0x725 fp=0xc00029f7b8 sp=0xc00029f698 pc=0x5595604893e5
Nov 14 17:53:16 fedora ollama[1197]: main.(*Server).completion(0xc000122120, {0x5595609fc5b0, 0xc000280460}, 0xc00017ab40)
Nov 14 17:53:16 fedora ollama[1197]: github.com/ollama/ollama/llama/runner/runner.go:652 +0x8fe fp=0xc00029fab8 sp=0xc00029f7b8 pc=0x5595606bb6de
Nov 14 17:53:16 fedora ollama[1197]: main.(*Server).completion-fm({0x5595609fc5b0?, 0xc000280460?}, 0x559560696b8d?)
Nov 14 17:53:16 fedora ollama[1197]: <autogenerated>:1 +0x36 fp=0xc00029fae8 sp=0xc00029fab8 pc=0x5595606be6b6
Nov 14 17:53:16 fedora ollama[1197]: net/http.HandlerFunc.ServeHTTP(0xc00007ed00?, {0x5595609fc5b0?, 0xc000280460?}, 0x10?)
Nov 14 17:53:16 fedora ollama[1197]: net/http/server.go:2171 +0x29 fp=0xc00029fb10 sp=0xc00029fae8 pc=0x55956068f629
Nov 14 17:53:16 fedora ollama[1197]: net/http.(*ServeMux).ServeHTTP(0x55956044af85?, {0x5595609fc5b0, 0xc000280460}, 0xc00017ab40)
Nov 14 17:53:16 fedora ollama[1197]: net/http/server.go:2688 +0x1ad fp=0xc00029fb60 sp=0xc00029fb10 pc=0x5595606914ad
Nov 14 17:53:16 fedora ollama[1197]: net/http.serverHandler.ServeHTTP({0x5595609fb900?}, {0x5595609fc5b0?, 0xc000280460?}, 0x6?)
Nov 14 17:53:16 fedora ollama[1197]: net/http/server.go:3142 +0x8e fp=0xc00029fb90 sp=0xc00029fb60 pc=0x5595606924ce
Nov 14 17:53:16 fedora ollama[1197]: net/http.(*conn).serve(0xc00009e000, {0x5595609fca08, 0xc00007cdb0})
Nov 14 17:53:16 fedora ollama[1197]: net/http/server.go:2044 +0x5e8 fp=0xc00029ffb8 sp=0xc00029fb90 pc=0x55956068e268
Nov 14 17:53:16 fedora ollama[1197]: net/http.(*Server).Serve.gowrap3()
Nov 14 17:53:16 fedora ollama[1197]: net/http/server.go:3290 +0x28 fp=0xc00029ffe0 sp=0xc00029ffb8 pc=0x559560692c48
Nov 14 17:53:16 fedora ollama[1197]: runtime.goexit({})
Nov 14 17:53:16 fedora ollama[1197]: runtime/asm_amd64.s:1695 +0x1 fp=0xc00029ffe8 sp=0xc00029ffe0 pc=0x5595604a9de1
Nov 14 17:53:16 fedora ollama[1197]: created by net/http.(*Server).Serve in goroutine 1
Nov 14 17:53:16 fedora ollama[1197]: net/http/server.go:3290 +0x4b4
Nov 14 17:53:16 fedora ollama[1197]: rax 0x0
Nov 14 17:53:16 fedora ollama[1197]: rbx 0x9c5
Nov 14 17:53:16 fedora ollama[1197]: rcx 0x7fc2f38a8664
Nov 14 17:53:16 fedora ollama[1197]: rdx 0x6
Nov 14 17:53:16 fedora ollama[1197]: rdi 0x9c2
Nov 14 17:53:16 fedora ollama[1197]: rsi 0x9c5
Nov 14 17:53:16 fedora ollama[1197]: rbp 0x7fc2933f6410
Nov 14 17:53:16 fedora ollama[1197]: rsp 0x7fc2933f63d0
Nov 14 17:53:16 fedora ollama[1197]: r8 0x0
Nov 14 17:53:16 fedora ollama[1197]: r9 0xfffffffc
Nov 14 17:53:16 fedora ollama[1197]: r10 0x8
Nov 14 17:53:16 fedora ollama[1197]: r11 0x246
Nov 14 17:53:16 fedora ollama[1197]: r12 0x7fc293400000
Nov 14 17:53:16 fedora ollama[1197]: r13 0x84
Nov 14 17:53:16 fedora ollama[1197]: r14 0x6
Nov 14 17:53:16 fedora ollama[1197]: r15 0x637f60000
Nov 14 17:53:16 fedora ollama[1197]: rip 0x7fc2f38a8664
Nov 14 17:53:16 fedora ollama[1197]: rflags 0x246
Nov 14 17:53:16 fedora ollama[1197]: cs 0x33
Nov 14 17:53:16 fedora ollama[1197]: fs 0x0
Nov 14 17:53:16 fedora ollama[1197]: gs 0x0
Nov 14 17:53:16 fedora ollama[1197]: [GIN] 2024/11/14 - 17:53:16 | 200 | 42.62644068s | 192.168.0.7 | POST "/api/chat"
Nov 14 17:53:17 fedora ollama[1197]: [GIN] 2024/11/14 - 17:53:17 | 200 | 752.265ยตs | 192.168.0.7 | GET "/api/tags"
Nov 14 17:54:25 fedora ollama[1197]: [GIN] 2024/11/14 - 17:54:25 | 200 | 819.913ยตs | 192.168.0.7 | GET "/api/tags"
```
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "kripper",
"id": 1479804,
"node_id": "MDQ6VXNlcjE0Nzk4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1479804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kripper",
"html_url": "https://github.com/kripper",
"followers_url": "https://api.github.com/users/kripper/followers",
"following_url": "https://api.github.com/users/kripper/following{/other_user}",
"gists_url": "https://api.github.com/users/kripper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kripper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kripper/subscriptions",
"organizations_url": "https://api.github.com/users/kripper/orgs",
"repos_url": "https://api.github.com/users/kripper/repos",
"events_url": "https://api.github.com/users/kripper/events{/privacy}",
"received_events_url": "https://api.github.com/users/kripper/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7673/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2389
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2389/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2389/comments
|
https://api.github.com/repos/ollama/ollama/issues/2389/events
|
https://github.com/ollama/ollama/pull/2389
| 2,123,290,612
|
PR_kwDOJ0Z1Ps5mRqp0
| 2,389
|
Allows settings of `rope_freq_base` and `rope_freq_scale` again in modelfile
|
{
"login": "jukofyork",
"id": 69222624,
"node_id": "MDQ6VXNlcjY5MjIyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jukofyork",
"html_url": "https://github.com/jukofyork",
"followers_url": "https://api.github.com/users/jukofyork/followers",
"following_url": "https://api.github.com/users/jukofyork/following{/other_user}",
"gists_url": "https://api.github.com/users/jukofyork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jukofyork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jukofyork/subscriptions",
"organizations_url": "https://api.github.com/users/jukofyork/orgs",
"repos_url": "https://api.github.com/users/jukofyork/repos",
"events_url": "https://api.github.com/users/jukofyork/events{/privacy}",
"received_events_url": "https://api.github.com/users/jukofyork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-07T15:28:04
| 2024-04-07T18:37:55
| 2024-04-07T18:37:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2389",
"html_url": "https://github.com/ollama/ollama/pull/2389",
"diff_url": "https://github.com/ollama/ollama/pull/2389.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2389.patch",
"merged_at": null
}
|
This adds back the ability to set `rope_freq_base` and `rope_freq_scale` in the modelfile.
If the values aren't set, it defaults to passing `0.0f` for both to the `llama.cpp` server, which in turn reads the values from the GGUF file itself.
---
This also includes the code from the PR that allows `split_mode` and `tensor_split` to be set from the modelfile (GitHub won't let me make a second fork, and I couldn't work out how to split off just the `rope_freq_base` and `rope_freq_scale` changes - sorry).
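
A minimal sketch of what this PR would let a modelfile express. The parameter names follow the `rope_freq_base`/`rope_freq_scale` options described above; the exact modelfile spelling is an assumption, not confirmed syntax:

```
FROM llama2
# Override the RoPE values baked into the GGUF.
# Leaving these unset passes 0.0f, which falls back to the GGUF's own metadata.
PARAMETER rope_frequency_base 1000000.0
PARAMETER rope_frequency_scale 0.5
```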
|
{
"login": "jukofyork",
"id": 69222624,
"node_id": "MDQ6VXNlcjY5MjIyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jukofyork",
"html_url": "https://github.com/jukofyork",
"followers_url": "https://api.github.com/users/jukofyork/followers",
"following_url": "https://api.github.com/users/jukofyork/following{/other_user}",
"gists_url": "https://api.github.com/users/jukofyork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jukofyork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jukofyork/subscriptions",
"organizations_url": "https://api.github.com/users/jukofyork/orgs",
"repos_url": "https://api.github.com/users/jukofyork/repos",
"events_url": "https://api.github.com/users/jukofyork/events{/privacy}",
"received_events_url": "https://api.github.com/users/jukofyork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2389/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2389/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1443
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1443/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1443/comments
|
https://api.github.com/repos/ollama/ollama/issues/1443/events
|
https://github.com/ollama/ollama/pull/1443
| 2,033,527,326
|
PR_kwDOJ0Z1Ps5hkgjW
| 1,443
|
fix: retry on concurrent request failure
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-09T01:38:50
| 2023-12-09T01:52:06
| 2023-12-09T01:52:06
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1443",
"html_url": "https://github.com/ollama/ollama/pull/1443",
"diff_url": "https://github.com/ollama/ollama/pull/1443.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1443.patch",
"merged_at": null
}
|
As of the most recent llama.cpp update, concurrent requests had a race condition that could result in an empty response.
This was hard to observe because the llm runner subprocess still returned a 200 with the error `{"content":"slot unavailable"}` in the response stream, which just silently closed the channel.
This change resolves the issue by retrying the prediction. @dhiltgen this may be a case we need to account for in the cgo changes.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1443/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8689
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8689/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8689/comments
|
https://api.github.com/repos/ollama/ollama/issues/8689/events
|
https://github.com/ollama/ollama/issues/8689
| 2,820,234,513
|
I_kwDOJ0Z1Ps6oGV0R
| 8,689
|
Error LLama runner process has terminated: %!w(<nil>)
|
{
"login": "Saatvik-droid",
"id": 55750489,
"node_id": "MDQ6VXNlcjU1NzUwNDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/55750489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saatvik-droid",
"html_url": "https://github.com/Saatvik-droid",
"followers_url": "https://api.github.com/users/Saatvik-droid/followers",
"following_url": "https://api.github.com/users/Saatvik-droid/following{/other_user}",
"gists_url": "https://api.github.com/users/Saatvik-droid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saatvik-droid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saatvik-droid/subscriptions",
"organizations_url": "https://api.github.com/users/Saatvik-droid/orgs",
"repos_url": "https://api.github.com/users/Saatvik-droid/repos",
"events_url": "https://api.github.com/users/Saatvik-droid/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saatvik-droid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-30T08:49:09
| 2025-01-30T08:57:59
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Sometimes when inferring from Ollama using the Python module I get this error. After retrying a couple of times it works; the failures look random to me.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8689/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6459
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6459/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6459/comments
|
https://api.github.com/repos/ollama/ollama/issues/6459/events
|
https://github.com/ollama/ollama/pull/6459
| 2,480,127,911
|
PR_kwDOJ0Z1Ps55FyKu
| 6,459
|
Add autogpt integration to list of community integrations
|
{
"login": "aarushik93",
"id": 50577581,
"node_id": "MDQ6VXNlcjUwNTc3NTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/50577581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aarushik93",
"html_url": "https://github.com/aarushik93",
"followers_url": "https://api.github.com/users/aarushik93/followers",
"following_url": "https://api.github.com/users/aarushik93/following{/other_user}",
"gists_url": "https://api.github.com/users/aarushik93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aarushik93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aarushik93/subscriptions",
"organizations_url": "https://api.github.com/users/aarushik93/orgs",
"repos_url": "https://api.github.com/users/aarushik93/repos",
"events_url": "https://api.github.com/users/aarushik93/events{/privacy}",
"received_events_url": "https://api.github.com/users/aarushik93/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-22T08:11:50
| 2024-11-21T08:51:39
| 2024-11-21T08:51:38
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6459",
"html_url": "https://github.com/ollama/ollama/pull/6459",
"diff_url": "https://github.com/ollama/ollama/pull/6459.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6459.patch",
"merged_at": "2024-11-21T08:51:38"
}
|
AutoGPT now supports building agents using Ollama ๐ฅณ. Updating Ollama's readme so people are aware of yet another way to use Ollama within their apps!
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6459/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7373
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7373/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7373/comments
|
https://api.github.com/repos/ollama/ollama/issues/7373/events
|
https://github.com/ollama/ollama/issues/7373
| 2,615,657,472
|
I_kwDOJ0Z1Ps6b58QA
| 7,373
|
HTTP generate API returns a 500 code within a fixed one-minute timeframe
|
{
"login": "eldoradoel",
"id": 55902524,
"node_id": "MDQ6VXNlcjU1OTAyNTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/55902524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldoradoel",
"html_url": "https://github.com/eldoradoel",
"followers_url": "https://api.github.com/users/eldoradoel/followers",
"following_url": "https://api.github.com/users/eldoradoel/following{/other_user}",
"gists_url": "https://api.github.com/users/eldoradoel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldoradoel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldoradoel/subscriptions",
"organizations_url": "https://api.github.com/users/eldoradoel/orgs",
"repos_url": "https://api.github.com/users/eldoradoel/repos",
"events_url": "https://api.github.com/users/eldoradoel/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldoradoel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-10-26T08:31:31
| 2024-11-06T14:49:54
| 2024-10-26T09:00:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using the HTTP api/generate endpoint with stream=False, the request returns a 500 error code after a fixed 1-minute period
### OS
Linux
### GPU
Other
### CPU
AMD
### Ollama version
0.4.0-rc5
|
{
"login": "eldoradoel",
"id": 55902524,
"node_id": "MDQ6VXNlcjU1OTAyNTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/55902524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldoradoel",
"html_url": "https://github.com/eldoradoel",
"followers_url": "https://api.github.com/users/eldoradoel/followers",
"following_url": "https://api.github.com/users/eldoradoel/following{/other_user}",
"gists_url": "https://api.github.com/users/eldoradoel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldoradoel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldoradoel/subscriptions",
"organizations_url": "https://api.github.com/users/eldoradoel/orgs",
"repos_url": "https://api.github.com/users/eldoradoel/repos",
"events_url": "https://api.github.com/users/eldoradoel/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldoradoel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7373/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5133
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5133/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5133/comments
|
https://api.github.com/repos/ollama/ollama/issues/5133/events
|
https://github.com/ollama/ollama/issues/5133
| 2,361,406,913
|
I_kwDOJ0Z1Ps6MwDXB
| 5,133
|
How do you ensure that context from previously asked questions is not reused, so that each request through the /api/generate interface starts a new conversation?
|
{
"login": "mingLvft",
"id": 50644675,
"node_id": "MDQ6VXNlcjUwNjQ0Njc1",
"avatar_url": "https://avatars.githubusercontent.com/u/50644675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingLvft",
"html_url": "https://github.com/mingLvft",
"followers_url": "https://api.github.com/users/mingLvft/followers",
"following_url": "https://api.github.com/users/mingLvft/following{/other_user}",
"gists_url": "https://api.github.com/users/mingLvft/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mingLvft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mingLvft/subscriptions",
"organizations_url": "https://api.github.com/users/mingLvft/orgs",
"repos_url": "https://api.github.com/users/mingLvft/repos",
"events_url": "https://api.github.com/users/mingLvft/events{/privacy}",
"received_events_url": "https://api.github.com/users/mingLvft/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-19T06:08:22
| 2024-07-09T00:10:53
| 2024-07-09T00:10:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How do you ensure that context from previously asked questions is not reused, so that each request through the /api/generate interface starts a new conversation?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5133/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6182
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6182/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6182/comments
|
https://api.github.com/repos/ollama/ollama/issues/6182/events
|
https://github.com/ollama/ollama/pull/6182
| 2,448,978,744
|
PR_kwDOJ0Z1Ps53eD7z
| 6,182
|
Catch one more error log
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-05T16:30:05
| 2024-08-08T19:33:20
| 2024-08-08T19:33:17
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6182",
"html_url": "https://github.com/ollama/ollama/pull/6182",
"diff_url": "https://github.com/ollama/ollama/pull/6182.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6182.patch",
"merged_at": "2024-08-08T19:33:17"
}
|
Example from a recent user-reported error log on a fine-tune that didn't load correctly:
```
C:\a\ollama\ollama\llm\llama.cpp\src\llama.cpp:5511: GGML_ASSERT(vocab.id_to_token.size() == vocab.token_to_id.size()) failed
time=2024-08-05T10:44:06.625+02:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm server not responding"
time=2024-08-05T10:44:08.717+02:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm server error"
time=2024-08-05T10:44:09.223+02:00 level=ERROR source=sched.go:451 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409"
```
This will help bubble up the underlying error instead of the ~useless exit status.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6182/timeline
| null | null | true
|