Column schema (type, observed range): url string (51–54 chars); repository_url string (1 class); labels_url string (65–68); comments_url string (60–63); events_url string (58–61); html_url string (39–44); id int64 (1.78B–2.82B); node_id string (18–19); number int64 (1–8.69k); title string (1–382); user dict; labels list (0–5 items); state string (2 classes); locked bool (1 class); assignee dict; assignees list (0–2 items); milestone null; comments int64 (0–323); created_at timestamp[s]; updated_at timestamp[s]; closed_at timestamp[s]; author_association string (4 classes); sub_issues_summary dict; active_lock_reason null; draft bool (2 classes); pull_request dict; body string (2–118k, nullable ⌀); closed_by dict; reactions dict; timeline_url string (60–63); performed_via_github_app null; state_reason string (4 classes); is_pull_request bool (2 classes)

| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/5342
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5342/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5342/comments
|
https://api.github.com/repos/ollama/ollama/issues/5342/events
|
https://github.com/ollama/ollama/pull/5342
| 2,379,101,561
|
PR_kwDOJ0Z1Ps5z0qG2
| 5,342
|
Include Show Info in Interactive Mode
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-27T21:13:48
| 2024-06-28T20:15:54
| 2024-06-28T20:15:52
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5342",
"html_url": "https://github.com/ollama/ollama/pull/5342",
"diff_url": "https://github.com/ollama/ollama/pull/5342.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5342.patch",
"merged_at": "2024-06-28T20:15:52"
}
|
Before:
<img width="226" alt="Screenshot 2024-06-27 at 2 13 32 PM" src="https://github.com/ollama/ollama/assets/65097070/b8153a26-9474-42f4-aa98-c8fc576b27e6">
After:
<img width="495" alt="Screenshot 2024-06-27 at 2 13 12 PM" src="https://github.com/ollama/ollama/assets/65097070/168699de-9d57-4a6d-9275-db3ad1e93c54">
Resolves #5281
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5342/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6541
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6541/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6541/comments
|
https://api.github.com/repos/ollama/ollama/issues/6541/events
|
https://github.com/ollama/ollama/issues/6541
| 2,491,969,648
|
I_kwDOJ0Z1Ps6UiHBw
| 6,541
|
llama runner process has terminated: exit status 127
|
{
"login": "sosojust1984",
"id": 88603497,
"node_id": "MDQ6VXNlcjg4NjAzNDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/88603497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sosojust1984",
"html_url": "https://github.com/sosojust1984",
"followers_url": "https://api.github.com/users/sosojust1984/followers",
"following_url": "https://api.github.com/users/sosojust1984/following{/other_user}",
"gists_url": "https://api.github.com/users/sosojust1984/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sosojust1984/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sosojust1984/subscriptions",
"organizations_url": "https://api.github.com/users/sosojust1984/orgs",
"repos_url": "https://api.github.com/users/sosojust1984/repos",
"events_url": "https://api.github.com/users/sosojust1984/events{/privacy}",
"received_events_url": "https://api.github.com/users/sosojust1984/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 23
| 2024-08-28T12:40:26
| 2024-10-22T18:32:57
| 2024-08-31T21:21:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
llama runner process has terminated: exit status 127. What does this mean?
### OS
Linux
### GPU
Other
### CPU
Other
### Ollama version
0.3*
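For background on this error: exit status 127 is the shell's conventional "command not found" code, so when the llama runner dies with it, the runner binary or one of its shared libraries usually could not be found or loaded. A minimal sketch demonstrating the convention (the failing command name is a deliberate placeholder):

```python
import subprocess

# Running a nonexistent command through the shell yields exit status 127,
# the same code reported in the "llama runner process has terminated" error.
# For the real issue, checking `ldd` on the runner binary (or the server log
# just above the error) typically reveals the missing dependency.
proc = subprocess.run(["sh", "-c", "definitely-not-a-real-command"],
                      capture_output=True)
print(proc.returncode)  # 127
```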
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6541/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6541/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/701
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/701/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/701/comments
|
https://api.github.com/repos/ollama/ollama/issues/701/events
|
https://github.com/ollama/ollama/issues/701
| 1,926,886,657
|
I_kwDOJ0Z1Ps5y2fUB
| 701
|
SSL support
|
{
"login": "ivanfioravanti",
"id": 1069210,
"node_id": "MDQ6VXNlcjEwNjkyMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1069210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivanfioravanti",
"html_url": "https://github.com/ivanfioravanti",
"followers_url": "https://api.github.com/users/ivanfioravanti/followers",
"following_url": "https://api.github.com/users/ivanfioravanti/following{/other_user}",
"gists_url": "https://api.github.com/users/ivanfioravanti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivanfioravanti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivanfioravanti/subscriptions",
"organizations_url": "https://api.github.com/users/ivanfioravanti/orgs",
"repos_url": "https://api.github.com/users/ivanfioravanti/repos",
"events_url": "https://api.github.com/users/ivanfioravanti/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivanfioravanti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2023-10-04T19:56:02
| 2023-12-17T07:12:39
| 2023-12-17T07:12:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi super Ollama team!
I received an interesting comment on the chatbot-ollama interface [here](https://github.com/ivanfioravanti/chatbot-ollama/issues/4).
It seems that sharing a server and connecting from multiple clients works, but the traffic is plain HTTP.
Adding SSL support would help in this scenario.
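Until TLS support exists in the server itself, one common pattern is terminating SSL in a reverse proxy in front of Ollama's default port 11434. A minimal nginx sketch, where the hostname and certificate paths are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name ollama.example.com;          # placeholder hostname
    ssl_certificate     /etc/ssl/ollama.crt; # placeholder cert paths
    ssl_certificate_key /etc/ssl/ollama.key;

    location / {
        proxy_pass http://127.0.0.1:11434;   # Ollama's default listen port
        proxy_read_timeout 300s;             # allow long-running generations
    }
}
```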
|
{
"login": "ivanfioravanti",
"id": 1069210,
"node_id": "MDQ6VXNlcjEwNjkyMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1069210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivanfioravanti",
"html_url": "https://github.com/ivanfioravanti",
"followers_url": "https://api.github.com/users/ivanfioravanti/followers",
"following_url": "https://api.github.com/users/ivanfioravanti/following{/other_user}",
"gists_url": "https://api.github.com/users/ivanfioravanti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivanfioravanti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivanfioravanti/subscriptions",
"organizations_url": "https://api.github.com/users/ivanfioravanti/orgs",
"repos_url": "https://api.github.com/users/ivanfioravanti/repos",
"events_url": "https://api.github.com/users/ivanfioravanti/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivanfioravanti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/701/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/701/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/804
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/804/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/804/comments
|
https://api.github.com/repos/ollama/ollama/issues/804/events
|
https://github.com/ollama/ollama/issues/804
| 1,945,055,884
|
I_kwDOJ0Z1Ps5z7zKM
| 804
|
Is there a maximum in Modelfile??
|
{
"login": "LiuYang328",
"id": 58350195,
"node_id": "MDQ6VXNlcjU4MzUwMTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/58350195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LiuYang328",
"html_url": "https://github.com/LiuYang328",
"followers_url": "https://api.github.com/users/LiuYang328/followers",
"following_url": "https://api.github.com/users/LiuYang328/following{/other_user}",
"gists_url": "https://api.github.com/users/LiuYang328/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LiuYang328/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LiuYang328/subscriptions",
"organizations_url": "https://api.github.com/users/LiuYang328/orgs",
"repos_url": "https://api.github.com/users/LiuYang328/repos",
"events_url": "https://api.github.com/users/LiuYang328/events{/privacy}",
"received_events_url": "https://api.github.com/users/LiuYang328/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-16T12:04:05
| 2023-10-25T19:37:14
| 2023-10-25T19:37:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How many prompts can I provide in a Modelfile?
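For context, the Modelfile format does allow multiple `MESSAGE` instructions to seed a conversation history. A small hypothetical example (the base model and message contents are illustrative only):

```
FROM llama2
SYSTEM You are a concise assistant.
MESSAGE user Is the sky blue?
MESSAGE assistant Yes, due to Rayleigh scattering.
MESSAGE user What about at sunset?
MESSAGE assistant It shifts toward red and orange.
```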
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/804/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4212
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4212/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4212/comments
|
https://api.github.com/repos/ollama/ollama/issues/4212/events
|
https://github.com/ollama/ollama/issues/4212
| 2,281,916,647
|
I_kwDOJ0Z1Ps6IA0jn
| 4,212
|
Long context models don't split memory correctly leads to OOM error
|
{
"login": "kungfu-eric",
"id": 87145506,
"node_id": "MDQ6VXNlcjg3MTQ1NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/87145506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kungfu-eric",
"html_url": "https://github.com/kungfu-eric",
"followers_url": "https://api.github.com/users/kungfu-eric/followers",
"following_url": "https://api.github.com/users/kungfu-eric/following{/other_user}",
"gists_url": "https://api.github.com/users/kungfu-eric/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kungfu-eric/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kungfu-eric/subscriptions",
"organizations_url": "https://api.github.com/users/kungfu-eric/orgs",
"repos_url": "https://api.github.com/users/kungfu-eric/repos",
"events_url": "https://api.github.com/users/kungfu-eric/events{/privacy}",
"received_events_url": "https://api.github.com/users/kungfu-eric/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-05-06T22:47:44
| 2024-05-08T20:22:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Using mixtral with the default 2048 ctx splits memory across 2x GPUs, ~12 GB each. When extending the context to 12k, it dumps all memory onto one GPU, using 29 GB. Ideally it would split equally, as before, to reach a higher 16k context without OOM. Using 2x 48 GB A6000s. The issue is possibly related to https://github.com/ollama/ollama/issues/1341
```
[GIN] 2024/05/05 - 23:38:16 | 200 | 24.89660366s | 172.17.0.1 | POST "/api/chat"
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":42754,"tid":"139643039125504","timestamp":1714977496}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":56082,"status":200,"tid":"139637473294080","timestamp":1714977496}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":42755,"tid":"139643039125504","timestamp":1714977496}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":56082,"status":200,"tid":"139637473294080","timestamp":1714977496}
{"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/tokenize","remote_addr":"127.0.0.1","remote_port":56082,"status":200,"tid":"139637473294080","timestamp":1714977496}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":42756,"tid":"139643039125504","timestamp":1714977496}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":56082,"status":200,"tid":"139637473294080","timestamp":1714977496}
{"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":42757,"tid":"139643039125504","timestamp":1714977496}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":15,"n_past_se":0,"n_prompt_tokens_processed":15377,"slot_id":0,"task_id":42757,"tid":"139643039125504","timestamp":1714977496}
{"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":15,"slot_id":0,"task_id":42757,"tid":"139643039125504","timestamp":1714977496}
{"function":"print_timings","level":"INFO","line":269,"msg":"prompt eval time = 24744.71 ms / 15377 tokens ( 1.61 ms per token, 621.43 tokens per second)","n_prompt_tokens_processed":15377,"n_tokens_second":621.4256758605363,"slot_id":0,"t_prompt_processing":24744.713,"t_token":1.6092029004357156,"task_id":42757,"tid":"139643039125504","timestamp":1714977535}
{"function":"print_timings","level":"INFO","line":283,"msg":"generation eval time = 13787.47 ms / 550 runs ( 25.07 ms per token, 39.89 tokens per second)","n_decoded":550,"n_tokens_second":39.891292601180645,"slot_id":0,"t_token":25.06812727272727,"t_token_generation":13787.47,"task_id":42757,"tid":"139643039125504","timestamp":1714977535}
{"function":"print_timings","level":"INFO","line":293,"msg":" total time = 38532.18 ms","slot_id":0,"t_prompt_processing":24744.713,"t_token_generation":13787.47,"t_total":38532.183,"task_id":42757,"tid":"139643039125504","timestamp":1714977535}
{"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":15942,"n_ctx":16384,"n_past":15941,"n_system_tokens":0,"slot_id":0,"task_id":42757,"tid":"139643039125504","timestamp":1714977535,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":56082,"status":200,"tid":"139637473294080","timestamp":1714977535}
[GIN] 2024/05/05 - 23:38:55 | 200 | 38.672720153s | 172.17.0.1 | POST "/api/chat"
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":43310,"tid":"139643039125504","timestamp":1714977535}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":35540,"status":200,"tid":"139637464901376","timestamp":1714977535}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":43311,"tid":"139643039125504","timestamp":1714977535}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":35540,"status":200,"tid":"139637464901376","timestamp":1714977535}
{"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/tokenize","remote_addr":"127.0.0.1","remote_port":35540,"status":200,"tid":"139637464901376","timestamp":1714977535}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":43312,"tid":"139643039125504","timestamp":17149775
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8x7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 24.62 GiB (4.53 BPW)
llm_load_print_meta: general.name = mistralai
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA RTX A6000, compute capability 8.6, VMM: yes
Device 1: NVIDIA RTX A6000, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.42 MiB
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/33 layers to GPU
llm_load_tensors: CUDA_Host buffer size = 25215.87 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 16384
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 2048.00 MiB
llama_new_context_with_model: KV self size = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.14 MiB
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1145.00 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1200621568
llama_new_context_with_model: failed to allocate compute buffers
llama_init_from_gpt_params: error: failed to create context with model '/root/.ollama/models/blobs/sha256-e9e56e8bb5f0fcd4860675e6837a8f6a94e659f5fa7dce6a1076279336320f2b'
{"function":"load_model","level":"ERR","line":410,"model":"/root/.ollama/models/blobs/sha256-e9e56e8bb5f0fcd4860675e6837a8f6a94e659f5fa7dce6a1076279336320f2b","msg":"unable to load model","tid":"140631930466304","timestamp":1714999945}
time=2024-05-06T05:52:25.670-07:00 level=ERROR source=sched.go:333 msg="error loading llama server" error="llama runner process no longer running: 1 error:failed to create context with model '/root/.ollama/models/blobs/sha256-e9e56e8bb5f0fcd4860675e6837a8f6a94e659f5fa7dce6a1076279336320f2b'"
[GIN] 2024/05/06 - 05:52:25 | 500 | 20.037722871s | 172.17.0.1 | POST "/api/chat"
```
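The "KV self size = 2048.00 MiB" line in the log is consistent with back-of-envelope arithmetic for a 16k-token f16 cache. The attention shape below is the commonly published Mixtral 8x7B configuration (32 layers, grouped-query attention with 8 KV heads of dimension 128), assumed here rather than read from the log:

```python
# Rough KV-cache size check for the log above. Shape values are the
# published Mixtral 8x7B attention config (an assumption, not from the log).
n_layers, n_kv_heads, head_dim = 32, 8, 128
n_ctx = 16384            # from the log: llama_new_context_with_model: n_ctx
bytes_per_elem = 2       # f16 cache, per the "K (f16)" / "V (f16)" lines

# Factor of 2 covers the separate K and V tensors.
kv_bytes = 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem
print(kv_bytes / 2**20)  # MiB; matches "KV self size = 2048.00 MiB"
```

This also shows why doubling the context roughly doubles the cache: the size is linear in `n_ctx`, on top of the fixed model weights.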
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.33
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4212/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4212/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7061
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7061/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7061/comments
|
https://api.github.com/repos/ollama/ollama/issues/7061/events
|
https://github.com/ollama/ollama/issues/7061
| 2,559,191,861
|
I_kwDOJ0Z1Ps6Yiis1
| 7,061
|
Tesla GPU at 100% and Ollama doesn't work; it is hung
|
{
"login": "Domi31tls",
"id": 124446863,
"node_id": "U_kgDOB2rojw",
"avatar_url": "https://avatars.githubusercontent.com/u/124446863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Domi31tls",
"html_url": "https://github.com/Domi31tls",
"followers_url": "https://api.github.com/users/Domi31tls/followers",
"following_url": "https://api.github.com/users/Domi31tls/following{/other_user}",
"gists_url": "https://api.github.com/users/Domi31tls/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Domi31tls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Domi31tls/subscriptions",
"organizations_url": "https://api.github.com/users/Domi31tls/orgs",
"repos_url": "https://api.github.com/users/Domi31tls/repos",
"events_url": "https://api.github.com/users/Domi31tls/events{/privacy}",
"received_events_url": "https://api.github.com/users/Domi31tls/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-10-01T13:12:16
| 2024-11-05T22:52:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?



I have just set up my Debian 12 Linux server and installed Ollama directly with the following command:
curl -fsSL https://ollama.com/install.sh | sh
I did not make any modifications to the service file. I ran the command:
ollama run phi3.5
I typed "hello", and the response was good and correct. However, on my second question, it started writing the beginning of a sentence and then displayed "###" (see screenshot).
I ran the nvidia-smi command (see screenshot); the processor stays at 100%, and Ollama stops responding.
I should mention that my server has no graphical interface, and the Nvidia and CUDA drivers are up to date (update, upgrade). I am providing the top command output for CPU and RAM information.
Thank you for your feedback.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7061/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2594
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2594/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2594/comments
|
https://api.github.com/repos/ollama/ollama/issues/2594/events
|
https://github.com/ollama/ollama/issues/2594
| 2,142,541,756
|
I_kwDOJ0Z1Ps5_tJe8
| 2,594
|
I am a newbie; how do I restart or stop Ollama under Linux?
|
{
"login": "jaqenwang",
"id": 18111033,
"node_id": "MDQ6VXNlcjE4MTExMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/18111033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaqenwang",
"html_url": "https://github.com/jaqenwang",
"followers_url": "https://api.github.com/users/jaqenwang/followers",
"following_url": "https://api.github.com/users/jaqenwang/following{/other_user}",
"gists_url": "https://api.github.com/users/jaqenwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaqenwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaqenwang/subscriptions",
"organizations_url": "https://api.github.com/users/jaqenwang/orgs",
"repos_url": "https://api.github.com/users/jaqenwang/repos",
"events_url": "https://api.github.com/users/jaqenwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaqenwang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-02-19T14:48:12
| 2024-09-07T19:59:09
| 2024-02-19T17:23:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
After I updated a model, I want to refresh everything again. How do I do that?
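On a Linux install done via the install script, Ollama runs as a systemd service, so the usual `systemctl` commands control it. A minimal sketch, assuming the default `ollama.service` unit name created by the installer:

```shell
# Check whether the Ollama server is running
systemctl status ollama

# Stop, start, or restart the server (a restart reloads everything)
sudo systemctl stop ollama
sudo systemctl start ollama
sudo systemctl restart ollama
```

After a restart, the next request will load the (updated) model fresh.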
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2594/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3612
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3612/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3612/comments
|
https://api.github.com/repos/ollama/ollama/issues/3612/events
|
https://github.com/ollama/ollama/pull/3612
| 2,239,150,285
|
PR_kwDOJ0Z1Ps5sb_0K
| 3,612
|
add qa-pilot link
|
{
"login": "reid41",
"id": 25558653,
"node_id": "MDQ6VXNlcjI1NTU4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/25558653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reid41",
"html_url": "https://github.com/reid41",
"followers_url": "https://api.github.com/users/reid41/followers",
"following_url": "https://api.github.com/users/reid41/following{/other_user}",
"gists_url": "https://api.github.com/users/reid41/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reid41/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reid41/subscriptions",
"organizations_url": "https://api.github.com/users/reid41/orgs",
"repos_url": "https://api.github.com/users/reid41/repos",
"events_url": "https://api.github.com/users/reid41/events{/privacy}",
"received_events_url": "https://api.github.com/users/reid41/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-12T06:03:25
| 2024-04-23T01:23:26
| 2024-04-23T00:10:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3612",
"html_url": "https://github.com/ollama/ollama/pull/3612",
"diff_url": "https://github.com/ollama/ollama/pull/3612.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3612.patch",
"merged_at": "2024-04-23T00:10:34"
}
|
Add [qa-pilot](https://github.com/reid41/QA-Pilot.git), a tool which is also based on `ollama`.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3612/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5760
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5760/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5760/comments
|
https://api.github.com/repos/ollama/ollama/issues/5760/events
|
https://github.com/ollama/ollama/pull/5760
| 2,415,050,510
|
PR_kwDOJ0Z1Ps51tqn8
| 5,760
|
Make llama.cpp's cache_prompt parameter configurable
|
{
"login": "sayap",
"id": 837049,
"node_id": "MDQ6VXNlcjgzNzA0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/837049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayap",
"html_url": "https://github.com/sayap",
"followers_url": "https://api.github.com/users/sayap/followers",
"following_url": "https://api.github.com/users/sayap/following{/other_user}",
"gists_url": "https://api.github.com/users/sayap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayap/subscriptions",
"organizations_url": "https://api.github.com/users/sayap/orgs",
"repos_url": "https://api.github.com/users/sayap/repos",
"events_url": "https://api.github.com/users/sayap/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 15
| 2024-07-18T02:05:28
| 2024-12-27T06:04:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5760",
"html_url": "https://github.com/ollama/ollama/pull/5760",
"diff_url": "https://github.com/ollama/ollama/pull/5760.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5760.patch",
"merged_at": null
}
|
This allows the output to be deterministic when setting the same seed and temperature.
Fixes #5321
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5760/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5760/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8682
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8682/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8682/comments
|
https://api.github.com/repos/ollama/ollama/issues/8682/events
|
https://github.com/ollama/ollama/issues/8682
| 2,819,668,792
|
I_kwDOJ0Z1Ps6oELs4
| 8,682
|
GIN mode is hard-coded to debug mode
|
{
"login": "yoonsio",
"id": 24367477,
"node_id": "MDQ6VXNlcjI0MzY3NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/24367477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoonsio",
"html_url": "https://github.com/yoonsio",
"followers_url": "https://api.github.com/users/yoonsio/followers",
"following_url": "https://api.github.com/users/yoonsio/following{/other_user}",
"gists_url": "https://api.github.com/users/yoonsio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoonsio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoonsio/subscriptions",
"organizations_url": "https://api.github.com/users/yoonsio/orgs",
"repos_url": "https://api.github.com/users/yoonsio/repos",
"events_url": "https://api.github.com/users/yoonsio/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoonsio/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-30T01:04:50
| 2025-01-30T01:05:37
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Gin mode is hard-coded to `gin.DebugMode` and ignores the `GIN_MODE` environment variable.
The server always prints this log line on startup:
```
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
```
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
master
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8682/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/481
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/481/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/481/comments
|
https://api.github.com/repos/ollama/ollama/issues/481/events
|
https://github.com/ollama/ollama/issues/481
| 1,885,227,036
|
I_kwDOJ0Z1Ps5wXkgc
| 481
|
Unable to build ollama on linux with go 1.21.0 and Docker
|
{
"login": "FairyTail2000",
"id": 22645621,
"node_id": "MDQ6VXNlcjIyNjQ1NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/22645621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FairyTail2000",
"html_url": "https://github.com/FairyTail2000",
"followers_url": "https://api.github.com/users/FairyTail2000/followers",
"following_url": "https://api.github.com/users/FairyTail2000/following{/other_user}",
"gists_url": "https://api.github.com/users/FairyTail2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FairyTail2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FairyTail2000/subscriptions",
"organizations_url": "https://api.github.com/users/FairyTail2000/orgs",
"repos_url": "https://api.github.com/users/FairyTail2000/repos",
"events_url": "https://api.github.com/users/FairyTail2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/FairyTail2000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2023-09-07T06:50:47
| 2023-09-07T13:36:32
| 2023-09-07T13:36:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When trying to build ollama for Linux, I encountered the following problem:
> llm/ggml_llama.go:31:12: pattern llama.cpp/ggml/build/*/bin/*: no matching files found
After that, I cleaned my environment to ensure a clean slate:
```bash
go clean -r -x -cache -testcache -modcache -fuzzcache
```
This, however, did not help.
Testing the Docker build also fails with the same error.
Reading the file llm/llama.cpp/generate.go, I noticed it runs a series of commands, and I was able to execute them myself; after doing so, the build works.
For anyone who doesn't want to read the source code, these are the commands to execute in the folder llm/llama.cpp:
```bash
git submodule init
git submodule update --force ggml
git -C ggml apply ../ggml_patch/0001-add-detokenize-endpoint.patch
git -C ggml apply ../ggml_patch/0002-34B-model-support.patch
git -C ggml apply ../ggml_patch/0003-metal-fix-synchronization-in-new-matrix-multiplicati.patch
git -C ggml apply ../ggml_patch/0004-metal-add-missing-barriers-for-mul-mat-2699.patch
cmake --fresh -S ggml -B ggml/build/cpu -DLLAMA_K_QUANTS=on
cmake --build ggml/build/cpu --target server --config Release
```
The question now is: why isn't this executed at build time?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/481/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7860
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7860/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7860/comments
|
https://api.github.com/repos/ollama/ollama/issues/7860/events
|
https://github.com/ollama/ollama/issues/7860
| 2,698,330,790
|
I_kwDOJ0Z1Ps6g1UKm
| 7,860
|
Tool calling: LLAMA3.2 ignores param types
|
{
"login": "fce2",
"id": 16529960,
"node_id": "MDQ6VXNlcjE2NTI5OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/16529960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fce2",
"html_url": "https://github.com/fce2",
"followers_url": "https://api.github.com/users/fce2/followers",
"following_url": "https://api.github.com/users/fce2/following{/other_user}",
"gists_url": "https://api.github.com/users/fce2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fce2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fce2/subscriptions",
"organizations_url": "https://api.github.com/users/fce2/orgs",
"repos_url": "https://api.github.com/users/fce2/repos",
"events_url": "https://api.github.com/users/fce2/events{/privacy}",
"received_events_url": "https://api.github.com/users/fce2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-11-27T12:04:36
| 2024-11-28T12:21:14
| 2024-11-28T12:21:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
If I run llama3.1, the result is correct:
```
Prompt: What is three plus one?
Calling function: add_two_numbers
Arguments: {'a': 3, 'b': 1}
Function output: 4
```
But if I run llama3.2, the numeric parameters are passed as strings:
```
Prompt: What is three plus one?
Calling function: add_two_numbers
Arguments: {'a': '3', 'b': '1'}
Function output: 31
```
Unfortunately, "llama3.2-vision:11b-instruct-q8_0" does not work at all:
`ResponseError: llama3.2-vision:11b-instruct-q8_0 does not support tools`
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.5
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7860/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2625
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2625/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2625/comments
|
https://api.github.com/repos/ollama/ollama/issues/2625/events
|
https://github.com/ollama/ollama/pull/2625
| 2,145,894,181
|
PR_kwDOJ0Z1Ps5neg7r
| 2,625
|
note on naming restrictions
|
{
"login": "CrispStrobe",
"id": 154636388,
"node_id": "U_kgDOCTeQZA",
"avatar_url": "https://avatars.githubusercontent.com/u/154636388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CrispStrobe",
"html_url": "https://github.com/CrispStrobe",
"followers_url": "https://api.github.com/users/CrispStrobe/followers",
"following_url": "https://api.github.com/users/CrispStrobe/following{/other_user}",
"gists_url": "https://api.github.com/users/CrispStrobe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CrispStrobe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CrispStrobe/subscriptions",
"organizations_url": "https://api.github.com/users/CrispStrobe/orgs",
"repos_url": "https://api.github.com/users/CrispStrobe/repos",
"events_url": "https://api.github.com/users/CrispStrobe/events{/privacy}",
"received_events_url": "https://api.github.com/users/CrispStrobe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-21T05:52:41
| 2024-05-07T05:40:16
| 2024-05-06T23:03:21
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2625",
"html_url": "https://github.com/ollama/ollama/pull/2625",
"diff_url": "https://github.com/ollama/ollama/pull/2625.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2625.patch",
"merged_at": "2024-05-06T23:03:21"
}
|
Otherwise, push would fail with the cryptic error:
retrieving manifest
Error: file does not exist
==> maybe change that in the code too
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2625/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2974
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2974/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2974/comments
|
https://api.github.com/repos/ollama/ollama/issues/2974/events
|
https://github.com/ollama/ollama/issues/2974
| 2,173,080,997
|
I_kwDOJ0Z1Ps6BhpWl
| 2,974
|
Ollama frees GPU memory by itself
|
{
"login": "ly0303521",
"id": 11954512,
"node_id": "MDQ6VXNlcjExOTU0NTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/11954512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ly0303521",
"html_url": "https://github.com/ly0303521",
"followers_url": "https://api.github.com/users/ly0303521/followers",
"following_url": "https://api.github.com/users/ly0303521/following{/other_user}",
"gists_url": "https://api.github.com/users/ly0303521/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ly0303521/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ly0303521/subscriptions",
"organizations_url": "https://api.github.com/users/ly0303521/orgs",
"repos_url": "https://api.github.com/users/ly0303521/repos",
"events_url": "https://api.github.com/users/ly0303521/events{/privacy}",
"received_events_url": "https://api.github.com/users/ly0303521/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-03-07T06:43:55
| 2024-03-08T00:57:08
| 2024-03-08T00:57:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello, nice work. Below is my problem:
If there is no request to Ollama for a few minutes, it frees the GPU memory. When a new request comes in, it has to load the model again before responding, which takes about 5 seconds. How can I make Ollama hold the model in GPU memory?
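For reference, the unload timeout is controlled per request by the `keep_alive` parameter of the REST API (and, in later Ollama versions, by the `OLLAMA_KEEP_ALIVE` environment variable). A hedged sketch, assuming a server on the default port; the model name `llama2` here is just an illustrative placeholder:

```shell
# Keep the model loaded indefinitely after this request.
# keep_alive: -1 means "never unload"; a duration string like "10m" also works.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "hello",
  "keep_alive": -1
}'
```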
|
{
"login": "ly0303521",
"id": 11954512,
"node_id": "MDQ6VXNlcjExOTU0NTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/11954512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ly0303521",
"html_url": "https://github.com/ly0303521",
"followers_url": "https://api.github.com/users/ly0303521/followers",
"following_url": "https://api.github.com/users/ly0303521/following{/other_user}",
"gists_url": "https://api.github.com/users/ly0303521/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ly0303521/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ly0303521/subscriptions",
"organizations_url": "https://api.github.com/users/ly0303521/orgs",
"repos_url": "https://api.github.com/users/ly0303521/repos",
"events_url": "https://api.github.com/users/ly0303521/events{/privacy}",
"received_events_url": "https://api.github.com/users/ly0303521/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2974/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8684
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8684/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8684/comments
|
https://api.github.com/repos/ollama/ollama/issues/8684/events
|
https://github.com/ollama/ollama/issues/8684
| 2,819,726,642
|
I_kwDOJ0Z1Ps6oEZ0y
| 8,684
|
OLLAMA_MODELS env variable not working to customize the models download path
|
{
"login": "siddharthdashore",
"id": 10009761,
"node_id": "MDQ6VXNlcjEwMDA5NzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/10009761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siddharthdashore",
"html_url": "https://github.com/siddharthdashore",
"followers_url": "https://api.github.com/users/siddharthdashore/followers",
"following_url": "https://api.github.com/users/siddharthdashore/following{/other_user}",
"gists_url": "https://api.github.com/users/siddharthdashore/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siddharthdashore/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddharthdashore/subscriptions",
"organizations_url": "https://api.github.com/users/siddharthdashore/orgs",
"repos_url": "https://api.github.com/users/siddharthdashore/repos",
"events_url": "https://api.github.com/users/siddharthdashore/events{/privacy}",
"received_events_url": "https://api.github.com/users/siddharthdashore/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2025-01-30T01:57:34
| 2025-01-30T02:13:12
| 2025-01-30T02:13:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm trying to change the models download path via the OLLAMA_MODELS environment variable on my MacBook Pro M4 Pro (24GB/512GB), but after running the following commands it still uses the default path.
**Commands:**
1. Stop Ollama App
2. vi ~/.zshrc
3. add: export OLLAMA_MODELS="/Volumes/Extreme-Pro/GenAI-Helper/ollama/models"
4. source ~/.zshrc
5. Start Ollama App
6. ollama pull phi4
After the above steps, I expect Ollama to use the external SSD path from my .zshrc, but it is still using the default path (/Users/siddharthdashore/.ollama/models)
**Workaround:**
Create a Symbolic Link (ln -s /Volumes/Extreme-Pro/GenAI-Helper/ollama/models /Users/siddharthdashore/.ollama/models)
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.7
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8684/timeline
| null |
duplicate
| false
|
https://api.github.com/repos/ollama/ollama/issues/2970
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2970/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2970/comments
|
https://api.github.com/repos/ollama/ollama/issues/2970/events
|
https://github.com/ollama/ollama/issues/2970
| 2,172,926,598
|
I_kwDOJ0Z1Ps6BhDqG
| 2,970
|
ollama run does not support tee
|
{
"login": "DarkenedOrigins",
"id": 36058955,
"node_id": "MDQ6VXNlcjM2MDU4OTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/36058955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DarkenedOrigins",
"html_url": "https://github.com/DarkenedOrigins",
"followers_url": "https://api.github.com/users/DarkenedOrigins/followers",
"following_url": "https://api.github.com/users/DarkenedOrigins/following{/other_user}",
"gists_url": "https://api.github.com/users/DarkenedOrigins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DarkenedOrigins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DarkenedOrigins/subscriptions",
"organizations_url": "https://api.github.com/users/DarkenedOrigins/orgs",
"repos_url": "https://api.github.com/users/DarkenedOrigins/repos",
"events_url": "https://api.github.com/users/DarkenedOrigins/events{/privacy}",
"received_events_url": "https://api.github.com/users/DarkenedOrigins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-03-07T04:38:34
| 2024-03-08T23:20:56
| 2024-03-08T23:20:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
~$ ollama run phi | tee llms/out.txt
>>> Error getting size: inappropriate ioctl for device
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x9ad54e]
goroutine 1 [running]:
github.com/jmorganca/ollama/readline.(*Buffer).IsEmpty(...)
/go/src/github.com/jmorganca/ollama/readline/buffer.go:309
github.com/jmorganca/ollama/readline.(*Instance).Readline(0xc00058a160)
/go/src/github.com/jmorganca/ollama/readline/readline.go:100 +0x20e
github.com/jmorganca/ollama/cmd.generateInteractive(0xc00045b200, {{0x7ffcd3b32552, 0x3}, {0x0, 0x0}, {0x0, 0x0}, {0x1184c8c0, 0x0, 0x0}, ...})
/go/src/github.com/jmorganca/ollama/cmd/interactive.go:185 +0x1cf
github.com/jmorganca/ollama/cmd.RunHandler(0xc00045b200, {0xc00044d020, 0x1, 0x1})
/go/src/github.com/jmorganca/ollama/cmd/cmd.go:212 +0x6c5
github.com/spf13/cobra.(*Command).execute(0xc00045b200, {0xc00044cff0, 0x1, 0x1})
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x87c
github.com/spf13/cobra.(*Command).ExecuteC(0xc00045a900)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5
github.com/spf13/cobra.(*Command).Execute(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
/go/src/github.com/jmorganca/ollama/main.go:11 +0x4d
```
I tried using tee to save the output but this caused a segfault.
Seems like the issue might be here?
https://github.com/ollama/ollama/blob/0ded7fdc4b33801fd0115656927b6097f800c544/readline/readline.go#L89
NewBuffer can return nil, and there is no check for this.
https://github.com/ollama/ollama/blob/0ded7fdc4b33801fd0115656927b6097f800c544/readline/buffer.go#L23
This is where NewBuffer returns nil.
I do not understand why term.GetSize would fail here — presumably because stdout is no longer a TTY when piped, which would also explain the "inappropriate ioctl for device" error above.
Note: it works perfectly when I do not use tee.
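A possible workaround until a nil check lands (hedged — behavior not verified here): passing the prompt non-interactively avoids the interactive readline code path entirely, so piping should not hit the terminal-size ioctl:

```shell
# Non-interactive invocation: the prompt is given on the command line,
# so the interactive readline UI (and its terminal-size query) is skipped.
ollama run phi "Why is the sky blue?" | tee llms/out.txt
```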
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2970/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6685
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6685/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6685/comments
|
https://api.github.com/repos/ollama/ollama/issues/6685/events
|
https://github.com/ollama/ollama/issues/6685
| 2,511,570,256
|
I_kwDOJ0Z1Ps6Vs4VQ
| 6,685
|
AMD 7900XTX fails with `"Could not initialize Tensile host: No devices found"`
|
{
"login": "svaningelgem",
"id": 199434,
"node_id": "MDQ6VXNlcjE5OTQzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/199434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svaningelgem",
"html_url": "https://github.com/svaningelgem",
"followers_url": "https://api.github.com/users/svaningelgem/followers",
"following_url": "https://api.github.com/users/svaningelgem/following{/other_user}",
"gists_url": "https://api.github.com/users/svaningelgem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/svaningelgem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/svaningelgem/subscriptions",
"organizations_url": "https://api.github.com/users/svaningelgem/orgs",
"repos_url": "https://api.github.com/users/svaningelgem/repos",
"events_url": "https://api.github.com/users/svaningelgem/events{/privacy}",
"received_events_url": "https://api.github.com/users/svaningelgem/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 51
| 2024-09-07T09:50:43
| 2024-09-26T23:50:11
| 2024-09-11T18:38:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I installed the AMD drivers with https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/native-install/ubuntu.html :heavy_check_mark:
OS: `Ubuntu 24.04.1 LTS`
ROCm: `ROCm version: 6.2.0`
CPU: `AMD Ryzen 9 7950X3D`
GPU: `Radeon RX 7900 XTX`
model: `llama3.1`
Started with:
`docker run --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm`
Then tried to start llama3.1 with (I pulled it first successfully):
`OLLAMA_DEBUG=1 ollama run llama3.1`
Log file:
[ollama.log](https://github.com/user-attachments/files/16917396/ollama.log)
It looks like the GPU is detected correctly at the start of the container, but it somehow fails to use it.
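One thing worth double-checking (hedged — this may not be the only cause): `--gpus=all` targets the NVIDIA container runtime, whereas the Ollama Docker documentation starts the ROCm image by mapping the AMD device nodes directly:

```shell
# ROCm image: expose the AMD kernel driver and DRI device nodes to the
# container instead of using the NVIDIA-oriented --gpus=all flag.
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```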
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.9
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6685/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6685/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/189
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/189/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/189/comments
|
https://api.github.com/repos/ollama/ollama/issues/189/events
|
https://github.com/ollama/ollama/pull/189
| 1,818,363,433
|
PR_kwDOJ0Z1Ps5WOouf
| 189
|
Improve command parsing and multiline string handling
|
{
"login": "Mohit-Gaur",
"id": 56885276,
"node_id": "MDQ6VXNlcjU2ODg1Mjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/56885276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mohit-Gaur",
"html_url": "https://github.com/Mohit-Gaur",
"followers_url": "https://api.github.com/users/Mohit-Gaur/followers",
"following_url": "https://api.github.com/users/Mohit-Gaur/following{/other_user}",
"gists_url": "https://api.github.com/users/Mohit-Gaur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mohit-Gaur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mohit-Gaur/subscriptions",
"organizations_url": "https://api.github.com/users/Mohit-Gaur/orgs",
"repos_url": "https://api.github.com/users/Mohit-Gaur/repos",
"events_url": "https://api.github.com/users/Mohit-Gaur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mohit-Gaur/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-07-24T12:50:00
| 2023-07-25T18:28:11
| 2023-07-25T18:28:11
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/189",
"html_url": "https://github.com/ollama/ollama/pull/189",
"diff_url": "https://github.com/ollama/ollama/pull/189.diff",
"patch_url": "https://github.com/ollama/ollama/pull/189.patch",
"merged_at": "2023-07-25T18:28:11"
}
|
This PR enhances the existing parser package. Main improvements include better error handling, optimized string-to-byte conversions, and efficient handling of multiline strings.
Detailed changes:
- Define a `multilineString` constant for repeated values to avoid duplication.
- Modify the error handling in the `Parse` function to return an error for unknown commands.
- Replace `bytes.ToUpper` and `bytes.ToLower` with `strings.ToUpper` and `strings.ToLower` for faster string conversions.
- Optimize removal of `"""` from multiline strings by using `bytes.Index` and `bytes.LastIndex` instead of `bytes.Replace`.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/189/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/738
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/738/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/738/comments
|
https://api.github.com/repos/ollama/ollama/issues/738/events
|
https://github.com/ollama/ollama/issues/738
| 1,931,835,103
|
I_kwDOJ0Z1Ps5zJXbf
| 738
|
AMD GPU & ROCm support
|
{
"login": "deadmeu",
"id": 12111013,
"node_id": "MDQ6VXNlcjEyMTExMDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/12111013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deadmeu",
"html_url": "https://github.com/deadmeu",
"followers_url": "https://api.github.com/users/deadmeu/followers",
"following_url": "https://api.github.com/users/deadmeu/following{/other_user}",
"gists_url": "https://api.github.com/users/deadmeu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deadmeu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deadmeu/subscriptions",
"organizations_url": "https://api.github.com/users/deadmeu/orgs",
"repos_url": "https://api.github.com/users/deadmeu/repos",
"events_url": "https://api.github.com/users/deadmeu/events{/privacy}",
"received_events_url": "https://api.github.com/users/deadmeu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 323
| 2023-10-08T14:53:24
| 2024-06-14T17:29:05
| 2024-03-07T18:51:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have a 7900XT and would definitely love to have ROCm support. It seems like it might be coming with https://github.com/jmorganca/ollama/pull/667?
I couldn't find a dedicated issue for this so I'm creating this one to track it.
Edit: For those interested in this feature, follow https://github.com/jmorganca/ollama/pull/814.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/738/reactions",
"total_count": 62,
"+1": 47,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 15,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/738/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4842
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4842/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4842/comments
|
https://api.github.com/repos/ollama/ollama/issues/4842/events
|
https://github.com/ollama/ollama/pull/4842
| 2,336,516,017
|
PR_kwDOJ0Z1Ps5xln3L
| 4,842
|
Separate ListResponse and ModelResponse for api/tags vs api/ps
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-05T18:03:08
| 2024-06-06T17:11:46
| 2024-06-06T17:11:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4842",
"html_url": "https://github.com/ollama/ollama/pull/4842",
"diff_url": "https://github.com/ollama/ollama/pull/4842.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4842.patch",
"merged_at": "2024-06-06T17:11:45"
}
|
/api/tags was returning "0001-01-01T00:00:00Z" for 'expires_at'
/api/ps was returning "0001-01-01T00:00:00Z" for 'modified_at'
- Removes these fields from the respective endpoints
/api/ps was omitting 'size_vram' when it was 0
- Ensures that size_vram is always returned
Added an assertion in the test case, and tested locally with both curl and the CLI
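For reviewers, a quick way to spot-check the two endpoints locally (assumes a server running on the default port; a model must be loaded for `/api/ps` to return entries):

```shell
# /api/tags should no longer carry a zero-valued expires_at,
# and /api/ps should always include size_vram (even when 0).
curl -s http://localhost:11434/api/tags | grep -o '"expires_at":[^,}]*'
curl -s http://localhost:11434/api/ps   | grep -o '"size_vram":[^,}]*'
```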
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4842/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6355
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6355/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6355/comments
|
https://api.github.com/repos/ollama/ollama/issues/6355/events
|
https://github.com/ollama/ollama/issues/6355
| 2,465,329,090
|
I_kwDOJ0Z1Ps6S8e_C
| 6,355
|
LLama 3.1 Tools do not work properly
|
{
"login": "nomisto",
"id": 28439912,
"node_id": "MDQ6VXNlcjI4NDM5OTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/28439912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nomisto",
"html_url": "https://github.com/nomisto",
"followers_url": "https://api.github.com/users/nomisto/followers",
"following_url": "https://api.github.com/users/nomisto/following{/other_user}",
"gists_url": "https://api.github.com/users/nomisto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nomisto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nomisto/subscriptions",
"organizations_url": "https://api.github.com/users/nomisto/orgs",
"repos_url": "https://api.github.com/users/nomisto/repos",
"events_url": "https://api.github.com/users/nomisto/events{/privacy}",
"received_events_url": "https://api.github.com/users/nomisto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-08-14T09:21:40
| 2024-08-14T10:23:52
| 2024-08-14T10:23:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I can't get the tool calling functionality of Llama 3.1 to work properly.
```python
messages = [
{'role': 'system', 'content': 'You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the orginal use question.'},
{'role': 'user', 'content': 'What is the weather in Toronto?'},
{'role': 'assistant',
'content': '',
'tool_calls': [{'function': {'name': 'get_current_weather',
'arguments': {'city': 'Toronto'}},
'id': 'call_6duDxk',
'type': 'function'}]},
{'role': 'ipython',
'tool_call_id': 'call_6duDxk',
'name': 'get_current_weather',
'content': '{"city": "Toronto", "weather": "sunny"}'}]
```
Using ollama, I get the following (hallucinated) output:
```python
response = ollama.chat(
model='llama3.1:8b-instruct-fp16',
messages=messages,
tools=tools
)
response
```
```python
{'model': 'llama3.1:8b-instruct-fp16',
'created_at': '2024-08-14T09:12:15.708397447Z',
'message': {'role': 'assistant',
'content': ' \n\nThe current weather in Toronto is mostly cloudy with a temperature of 22°C (72°F) and a gentle breeze of 15 km/h (9 mph).'},
'done_reason': 'stop',
'done': True,
'total_duration': 1352319726,
'load_duration': 70634597,
'prompt_eval_count': 101,
'prompt_eval_duration': 44061000,
'eval_count': 34,
'eval_duration': 1013216000}
```
Using *plain* transformers I get the correct answer
```python
import transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
device = "cuda:5"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model.to(device)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(device)
outputs = model.generate(inputs, do_sample=False, max_new_tokens=256)
response = tokenizer.batch_decode(outputs[:, inputs.shape[1]:], skip_special_tokens=True)[0]
response
```
```text
'The current weather in Toronto is sunny.'
```
Maybe this is related to #6129?
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.5
|
{
"login": "nomisto",
"id": 28439912,
"node_id": "MDQ6VXNlcjI4NDM5OTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/28439912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nomisto",
"html_url": "https://github.com/nomisto",
"followers_url": "https://api.github.com/users/nomisto/followers",
"following_url": "https://api.github.com/users/nomisto/following{/other_user}",
"gists_url": "https://api.github.com/users/nomisto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nomisto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nomisto/subscriptions",
"organizations_url": "https://api.github.com/users/nomisto/orgs",
"repos_url": "https://api.github.com/users/nomisto/repos",
"events_url": "https://api.github.com/users/nomisto/events{/privacy}",
"received_events_url": "https://api.github.com/users/nomisto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6355/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2167
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2167/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2167/comments
|
https://api.github.com/repos/ollama/ollama/issues/2167/events
|
https://github.com/ollama/ollama/issues/2167
| 2,097,758,655
|
I_kwDOJ0Z1Ps59CUG_
| 2,167
|
Deleting a model isn't removing Its blob
|
{
"login": "racso-dev",
"id": 51890236,
"node_id": "MDQ6VXNlcjUxODkwMjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/51890236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/racso-dev",
"html_url": "https://github.com/racso-dev",
"followers_url": "https://api.github.com/users/racso-dev/followers",
"following_url": "https://api.github.com/users/racso-dev/following{/other_user}",
"gists_url": "https://api.github.com/users/racso-dev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/racso-dev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/racso-dev/subscriptions",
"organizations_url": "https://api.github.com/users/racso-dev/orgs",
"repos_url": "https://api.github.com/users/racso-dev/repos",
"events_url": "https://api.github.com/users/racso-dev/events{/privacy}",
"received_events_url": "https://api.github.com/users/racso-dev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-01-24T08:57:13
| 2024-01-25T23:39:05
| 2024-01-25T23:39:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# Bug Report
## Description
**Bug Summary:**
When I try to delete a model through the UI in the settings it doesn't seem to work properly.
**Steps to Reproduce:**
Settings > Select a model to delete > Delete
**Expected Behavior:**
It should delete the model, and `/usr/share/ollama/.ollama/models/blobs` should therefore no longer contain the model's blob.
**Actual Behavior:**
The blob of the model isn't removed from `/usr/share/ollama/.ollama/models/blobs`, so the disk space isn't freed.
## Environment
- **Operating System:** Ubuntu 22.04
- **Browser (if applicable):** Chrome Version 120.0.6099.224 (Official Build) (64-bit)
## Reproduction Details
**Confirmation:**
- [Y] I have read and followed all the instructions provided in the README.md.
- [Y] I have reviewed the troubleshooting.md document.
- [N] I have included the browser console logs. (Not relevant, but maybe I'm wrong)
- [N] I have included the Docker container logs. (Not relevant, but maybe I'm wrong)
## Installation Method
I installed the project by building a Docker container. I deployed the Ollama inference server on a remote machine and included its URL in the environment of the Docker container.
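While investigating, one way to confirm the symptom is to list blobs that no manifest references anymore. The sketch below is my own illustration, not part of Ollama; it assumes Ollama's on-disk layout of JSON manifests under `models/manifests/**` and blob files named `sha256-<digest>` under `models/blobs`.

```python
import json
from pathlib import Path

def orphaned_blobs(models_dir):
    """Return blobs under models_dir/blobs that no manifest references.

    Sketch only: assumes each manifest is OCI-style JSON with "config"
    and "layers" entries carrying "digest": "sha256:<hex>" fields.
    """
    models = Path(models_dir)
    referenced = set()
    for manifest in (models / "manifests").rglob("*"):
        if not manifest.is_file():
            continue
        data = json.loads(manifest.read_text())
        for layer in data.get("layers", []) + [data.get("config", {})]:
            digest = layer.get("digest", "")
            if digest:
                # manifests store "sha256:<hex>"; blob files use "sha256-<hex>"
                referenced.add(digest.replace(":", "-"))
    blobs = {p.name for p in (models / "blobs").iterdir() if p.is_file()}
    return sorted(blobs - referenced)
```

Running this against `/usr/share/ollama/.ollama/models` after a delete should report any blob left behind.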
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2167/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8552
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8552/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8552/comments
|
https://api.github.com/repos/ollama/ollama/issues/8552/events
|
https://github.com/ollama/ollama/issues/8552
| 2,807,480,713
|
I_kwDOJ0Z1Ps6nVsGJ
| 8,552
|
Adding Tools support to template causes erroneous removal of <think> tag in responses
|
{
"login": "odrobnik",
"id": 333270,
"node_id": "MDQ6VXNlcjMzMzI3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/333270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odrobnik",
"html_url": "https://github.com/odrobnik",
"followers_url": "https://api.github.com/users/odrobnik/followers",
"following_url": "https://api.github.com/users/odrobnik/following{/other_user}",
"gists_url": "https://api.github.com/users/odrobnik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odrobnik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odrobnik/subscriptions",
"organizations_url": "https://api.github.com/users/odrobnik/orgs",
"repos_url": "https://api.github.com/users/odrobnik/repos",
"events_url": "https://api.github.com/users/odrobnik/events{/privacy}",
"received_events_url": "https://api.github.com/users/odrobnik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 12
| 2025-01-23T17:18:11
| 2025-01-24T20:26:50
| 2025-01-24T20:26:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
A model like `deepseek-r1` emits thinking self-dialog between `<think>` and `</think>`. This is evident in deepseek-r1 with the standard chat template.
While trying to add tool support, I added the following after the system message; this minimal addition is enough to show the problem.
```
{{- if .Tools }}
# Tools
{{- end }}
```
This makes Ollama accept tool definitions, but it also introduces a problem later on: a bug causes the leading `<think>` to disappear from responses, so you see the thinking followed by `</think>` and then the normal response.
At first I thought I might have an issue with whitespace trimming around the tags, but it turns out that the addition above is enough to cause the problem on its own, with the rest of the template completely identical.
So my minimally modified template looks like this:
```
{{- if .System }}{{ .System }}{{ end }}
{{- if .Tools }}
# Tools
{{- end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1}}
{{- if eq .Role "user" }}<|User|>{{ .Content }}
{{- else if eq .Role "assistant" }}<|Assistant|>{{ .Content }}{{- if not $last }}<|end▁of▁sentence|>{{- end }}
{{- end }}
{{- if and $last (ne .Role "assistant") }}<|Assistant|>{{- end }}
{{- end }}
```
The expected behavior is that the tag should be preserved if it is in the response from the LLM.
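To make the expectation concrete, here is a small Python sketch (my own illustration, not Ollama code) of how a client would separate the self-dialog from the final answer when the tag survives:

```python
import re

def split_think(response: str):
    """Split a response into (<think>…</think> block, final answer).

    Illustration only: this relies on the model emitting both the opening
    and closing tags, which is exactly why the leading <think> must be
    preserved in the response.
    """
    m = re.match(r"(?s)\s*(<think>.*?</think>)\s*(.*)", response)
    if m:
        return m.group(1), m.group(2)
    return None, response  # opening tag missing: can't isolate the self-dialog
```

With the bug described above, the response starts mid-thought, the opening tag is never found, and the self-dialog leaks into the answer.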
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.7
|
{
"login": "odrobnik",
"id": 333270,
"node_id": "MDQ6VXNlcjMzMzI3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/333270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odrobnik",
"html_url": "https://github.com/odrobnik",
"followers_url": "https://api.github.com/users/odrobnik/followers",
"following_url": "https://api.github.com/users/odrobnik/following{/other_user}",
"gists_url": "https://api.github.com/users/odrobnik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odrobnik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odrobnik/subscriptions",
"organizations_url": "https://api.github.com/users/odrobnik/orgs",
"repos_url": "https://api.github.com/users/odrobnik/repos",
"events_url": "https://api.github.com/users/odrobnik/events{/privacy}",
"received_events_url": "https://api.github.com/users/odrobnik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8552/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6648
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6648/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6648/comments
|
https://api.github.com/repos/ollama/ollama/issues/6648/events
|
https://github.com/ollama/ollama/issues/6648
| 2,506,621,120
|
I_kwDOJ0Z1Ps6VaADA
| 6,648
|
Llama 3.1 8B giving bad answers while Llama.cpp works well with the same model (Ollama on MacOS)
|
{
"login": "ea167",
"id": 571650,
"node_id": "MDQ6VXNlcjU3MTY1MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/571650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ea167",
"html_url": "https://github.com/ea167",
"followers_url": "https://api.github.com/users/ea167/followers",
"following_url": "https://api.github.com/users/ea167/following{/other_user}",
"gists_url": "https://api.github.com/users/ea167/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ea167/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ea167/subscriptions",
"organizations_url": "https://api.github.com/users/ea167/orgs",
"repos_url": "https://api.github.com/users/ea167/repos",
"events_url": "https://api.github.com/users/ea167/events{/privacy}",
"received_events_url": "https://api.github.com/users/ea167/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-05T02:11:26
| 2024-09-05T02:40:13
| 2024-09-05T02:18:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Llama 3.1 8B returns incorrect answers to a simple information-extraction task, running out of the box on Ollama for macOS.
The same model running on Llama.cpp with seemingly the same parameters works well.
Attached is markdown content from a website, which is provided to the Ollama prompt along with
`List all the associations mentioned in the markdown document above.`
This test case is easy to reproduce. The 2 screenshots show the settings and the results.
If you run the same test with the same model on Llama.cpp, you correctly get the list of all 25 associations.
Same on deepinfra.com.
I guess there must be some Ollama bug in the default settings or templates.
Hope it helps!


[crawl_1_1.md](https://github.com/user-attachments/files/16882934/crawl_1_1.md)
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
ollama version is 0.3.9
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6648/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6648/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5541
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5541/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5541/comments
|
https://api.github.com/repos/ollama/ollama/issues/5541/events
|
https://github.com/ollama/ollama/issues/5541
| 2,395,312,275
|
I_kwDOJ0Z1Ps6OxZCT
| 5,541
|
internlm/internlm-xcomposer2d5-7b model request (multimodal)
|
{
"login": "swistaczek",
"id": 13238,
"node_id": "MDQ6VXNlcjEzMjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/13238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swistaczek",
"html_url": "https://github.com/swistaczek",
"followers_url": "https://api.github.com/users/swistaczek/followers",
"following_url": "https://api.github.com/users/swistaczek/following{/other_user}",
"gists_url": "https://api.github.com/users/swistaczek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swistaczek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swistaczek/subscriptions",
"organizations_url": "https://api.github.com/users/swistaczek/orgs",
"repos_url": "https://api.github.com/users/swistaczek/repos",
"events_url": "https://api.github.com/users/swistaczek/events{/privacy}",
"received_events_url": "https://api.github.com/users/swistaczek/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-07-08T10:57:32
| 2024-07-08T11:01:10
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
[internlm/internlm-xcomposer2d5-7b](https://huggingface.co/internlm/internlm-xcomposer2d5-7b)
> InternLM-XComposer2.5 excels in various text-image comprehension and composition applications, achieving GPT-4V level capabilities with merely 7B LLM backend. IXC2.5 is trained with 24K interleaved image-text contexts, it can seamlessly extend to 96K long contexts via RoPE extrapolation. This long-context capability allows IXC-2.5 to excel in tasks requiring extensive input and output contexts.
https://github.com/InternLM/InternLM-XComposer
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5541/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5541/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/454
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/454/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/454/comments
|
https://api.github.com/repos/ollama/ollama/issues/454/events
|
https://github.com/ollama/ollama/pull/454
| 1,878,020,801
|
PR_kwDOJ0Z1Ps5ZXwGr
| 454
|
first pass at linux gpu support
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 20
| 2023-09-01T20:39:21
| 2023-09-15T19:20:38
| 2023-09-12T15:04:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/454",
"html_url": "https://github.com/ollama/ollama/pull/454",
"diff_url": "https://github.com/ollama/ollama/pull/454.diff",
"patch_url": "https://github.com/ollama/ollama/pull/454.patch",
"merged_at": "2023-09-12T15:04:35"
}
|
This is the basic implementation enabling a Linux build with GPU support.
Building for Linux with CPU support is unchanged (generate and build as normal).
Building for Linux with GPU support requires generating with the `gpu` tag. This allows non-GPU Linux builds to continue to be built locally without issue.
How to build/run:
- generate the required dependencies: `go generate ./...`
- build the binary `go build .`
and run as normal:
`./ollama serve &`
`./ollama run llama2`
Follow up:
- Packaging nvidia drivers or downloading them automatically
- Better heuristics for determining the number of layers to load into GPU
Part of #259
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/454/reactions",
"total_count": 9,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 7,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/454/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6766
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6766/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6766/comments
|
https://api.github.com/repos/ollama/ollama/issues/6766/events
|
https://github.com/ollama/ollama/pull/6766
| 2,520,980,103
|
PR_kwDOJ0Z1Ps57OnqR
| 6,766
|
documentation for stopping a model
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-11T23:23:40
| 2024-09-18T23:26:44
| 2024-09-18T23:26:42
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6766",
"html_url": "https://github.com/ollama/ollama/pull/6766",
"diff_url": "https://github.com/ollama/ollama/pull/6766.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6766.patch",
"merged_at": "2024-09-18T23:26:42"
}
|
This PR includes documentation changes which should be merged when the release with the `ollama stop` command goes live (from PR #6739)
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6766/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/741
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/741/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/741/comments
|
https://api.github.com/repos/ollama/ollama/issues/741/events
|
https://github.com/ollama/ollama/pull/741
| 1,933,299,292
|
PR_kwDOJ0Z1Ps5cRa67
| 741
|
Update api.md
|
{
"login": "konsalex",
"id": 12672541,
"node_id": "MDQ6VXNlcjEyNjcyNTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/12672541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/konsalex",
"html_url": "https://github.com/konsalex",
"followers_url": "https://api.github.com/users/konsalex/followers",
"following_url": "https://api.github.com/users/konsalex/following{/other_user}",
"gists_url": "https://api.github.com/users/konsalex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/konsalex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/konsalex/subscriptions",
"organizations_url": "https://api.github.com/users/konsalex/orgs",
"repos_url": "https://api.github.com/users/konsalex/repos",
"events_url": "https://api.github.com/users/konsalex/events{/privacy}",
"received_events_url": "https://api.github.com/users/konsalex/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-09T15:09:18
| 2023-10-09T20:01:47
| 2023-10-09T20:01:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/741",
"html_url": "https://github.com/ollama/ollama/pull/741",
"diff_url": "https://github.com/ollama/ollama/pull/741.diff",
"patch_url": "https://github.com/ollama/ollama/pull/741.patch",
"merged_at": "2023-10-09T20:01:47"
}
|
Avoids the triple backticks being rendered in the visual editor and copied to the clipboard.


|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/741/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7026
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7026/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7026/comments
|
https://api.github.com/repos/ollama/ollama/issues/7026/events
|
https://github.com/ollama/ollama/pull/7026
| 2,554,714,116
|
PR_kwDOJ0Z1Ps59BBpC
| 7,026
|
server: fix custom template capability checking for generation. (Fixes #7052)
|
{
"login": "kyRobot",
"id": 9490543,
"node_id": "MDQ6VXNlcjk0OTA1NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9490543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyRobot",
"html_url": "https://github.com/kyRobot",
"followers_url": "https://api.github.com/users/kyRobot/followers",
"following_url": "https://api.github.com/users/kyRobot/following{/other_user}",
"gists_url": "https://api.github.com/users/kyRobot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kyRobot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyRobot/subscriptions",
"organizations_url": "https://api.github.com/users/kyRobot/orgs",
"repos_url": "https://api.github.com/users/kyRobot/repos",
"events_url": "https://api.github.com/users/kyRobot/events{/privacy}",
"received_events_url": "https://api.github.com/users/kyRobot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2024-09-29T05:24:46
| 2024-11-09T00:24:02
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7026",
"html_url": "https://github.com/ollama/ollama/pull/7026",
"diff_url": "https://github.com/ollama/ollama/pull/7026.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7026.patch",
"merged_at": null
}
|
Closes #7052
Model capability checks for tasks like fill-in-middle (named `insert` as a Capability) verify that the model's template includes the template variables the task requires.
This PR addresses an issue where only the template from the Modelfile was used to decide capability, rather than the request's template when one is provided, which caused requests to fail even when the custom template did have the required variables.
Now, when a custom template is provided, the capability check acts on it instead of the Modelfile template. This lets callers successfully override model templates when the Modelfile lacks an appropriate template, or when they don't want to use the raw request format as an override, i.e. they want to keep the templating capability.
Note: This is only for completion generation where template overrides are enabled.
- tests
- minor addition to `template.go` to add a func on Template to ask what it supports vs callers grabbing the Vars directly. This func is used by `Images.Model` capability check & in `routes.go`
Broader context: this issue was identified when using a Continue.dev pre-release, which moves Continue to use Ollama's suffix support for tab-completion tasks. Using a completion model, e.g. Qwen2.5 Coder 1.5B base, which has the default `{{.Prompt}}` template, this did not work because the compatibility check for `{{.Suffix}}` failed. Sending a custom template also did not work because the compatibility check targeted the wrong template. With this PR, Continue autocompletion with Ollama custom templates works. It also works via the API, and is exercised in the new tests.
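The shape of the check can be sketched in a few lines of Python (an illustration only; the real implementation lives in `template.go` and works on the parsed template rather than scanning text):

```python
import re

def template_vars(tmpl: str) -> set:
    """Collect capitalized .Field references from a Go-style template string.

    Rough approximation of asking a template what it supports; Ollama's
    actual check walks the parsed template's variables.
    """
    return set(re.findall(r"\.([A-Z]\w*)", tmpl))

def supports_insert(tmpl: str) -> bool:
    # Fill-in-middle ("insert") requires a {{ .Suffix }} reference
    return "Suffix" in template_vars(tmpl)
```

With this PR, the check runs against the request's template when one is supplied, so a function like `supports_insert` is evaluated on what will actually be rendered.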
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7026/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4665
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4665/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4665/comments
|
https://api.github.com/repos/ollama/ollama/issues/4665/events
|
https://github.com/ollama/ollama/issues/4665
| 2,319,229,522
|
I_kwDOJ0Z1Ps6KPKJS
| 4,665
|
Request to add Smaug Llama3
|
{
"login": "rjmalagon",
"id": 13302853,
"node_id": "MDQ6VXNlcjEzMzAyODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/13302853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rjmalagon",
"html_url": "https://github.com/rjmalagon",
"followers_url": "https://api.github.com/users/rjmalagon/followers",
"following_url": "https://api.github.com/users/rjmalagon/following{/other_user}",
"gists_url": "https://api.github.com/users/rjmalagon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rjmalagon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rjmalagon/subscriptions",
"organizations_url": "https://api.github.com/users/rjmalagon/orgs",
"repos_url": "https://api.github.com/users/rjmalagon/repos",
"events_url": "https://api.github.com/users/rjmalagon/events{/privacy}",
"received_events_url": "https://api.github.com/users/rjmalagon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-27T13:57:38
| 2024-05-27T13:59:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This fine-tuned version of Llama-3-70B-Instruct targets very high output quality.
https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct
Similar to Smaug-72B (but improved), it was built and fine-tuned with DPO techniques: https://arxiv.org/abs/2402.13228
Summarized: DPO-Positive (DPOP) is a new loss function and training procedure that avoids DPO's failure mode of reducing the likelihood of the preferred response.
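As I recall the paper, DPOP adds a penalty inside the usual DPO logit that fires whenever the policy's likelihood of the preferred response $y_w$ drops below the reference model's (a sketch from memory; see the paper for the exact form):

```latex
\mathcal{L}_{\mathrm{DPOP}} =
  -\log \sigma\!\Big(
    \beta \Big(
      \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
      - \lambda \cdot \max\!\Big(0,\;
          \log \frac{\pi_{\mathrm{ref}}(y_w \mid x)}{\pi_\theta(y_w \mid x)}
        \Big)
    \Big)
  \Big)
```

The $\max(0,\cdot)$ term is zero as long as the policy keeps $y_w$ at least as likely as the reference does, and penalizes it otherwise.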
Recently added to llama.cpp
https://github.com/ggerganov/llama.cpp/commit/c429b33beb35f13934a4dfbe0c138d30b45e5d54
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4665/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5168
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5168/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5168/comments
|
https://api.github.com/repos/ollama/ollama/issues/5168/events
|
https://github.com/ollama/ollama/issues/5168
| 2,364,190,541
|
I_kwDOJ0Z1Ps6M6q9N
| 5,168
|
Models don't respond and ollama gets stuck after long time
|
{
"login": "luisgg98",
"id": 45603226,
"node_id": "MDQ6VXNlcjQ1NjAzMjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/45603226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luisgg98",
"html_url": "https://github.com/luisgg98",
"followers_url": "https://api.github.com/users/luisgg98/followers",
"following_url": "https://api.github.com/users/luisgg98/following{/other_user}",
"gists_url": "https://api.github.com/users/luisgg98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luisgg98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luisgg98/subscriptions",
"organizations_url": "https://api.github.com/users/luisgg98/orgs",
"repos_url": "https://api.github.com/users/luisgg98/repos",
"events_url": "https://api.github.com/users/luisgg98/events{/privacy}",
"received_events_url": "https://api.github.com/users/luisgg98/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-06-20T11:20:41
| 2024-08-24T23:37:03
| 2024-08-20T10:07:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Good afternoon.
I am rewriting datasets using https://ollama.com/library/mixtral:instruct
Ollama works perfectly until, seemingly at random, it gets stuck on every task that involves using a model.
The OS is Ubuntu 22.04.
Inference and running a model get stuck:
```bash
lggarcia@turing:~$ nvidia-smi
Thu Jun 20 13:04:11 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.171.04 Driver Version: 535.171.04 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA H100 80GB HBM3 Off | 00000000:55:00.0 Off | 0 |
| N/A 52C P0 150W / 200W | 25168MiB / 81559MiB | 29% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA H100 80GB HBM3 Off | 00000000:68:00.0 Off | 0 |
| N/A 52C P0 167W / 200W | 35500MiB / 81559MiB | 57% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA H100 80GB HBM3 Off | 00000000:D2:00.0 Off | 0 |
| N/A 52C P0 157W / 200W | 79420MiB / 81559MiB | 25% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 3 NVIDIA H100 80GB HBM3 Off | 00000000:E4:00.0 Off | 0 |
| N/A 53C P0 156W / 200W | 71286MiB / 81559MiB | 31% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 153203 C python 708MiB |
| 0 N/A N/A 440608 C ...unners/cuda_v11/ollama_llama_server 24442MiB |
| 1 N/A N/A 79068 C python 17238MiB |
| 1 N/A N/A 153203 C python 706MiB |
| 1 N/A N/A 440608 C ...unners/cuda_v11/ollama_llama_server 17532MiB |
| 2 N/A N/A 153203 C python 706MiB |
| 2 N/A N/A 440608 C ...unners/cuda_v11/ollama_llama_server 25808MiB |
| 2 N/A N/A 551205 C ...astor/.conda/envs/mixenv/bin/python 52882MiB |
| 3 N/A N/A 153203 C python 706MiB |
| 3 N/A N/A 440608 C ...unners/cuda_v11/ollama_llama_server 24442MiB |
| 3 N/A N/A 468947 C ...astor/.conda/envs/mixenv/bin/python 46114MiB |
+---------------------------------------------------------------------------------------+
lggarcia@turing:~$ ollama list
NAME ID SIZE MODIFIED
command-r:latest b8cdfff0263c 20 GB 46 hours ago
hro/laser-dolphin-mixtral-2x7b-dpo:latest a2f4da69f5ae 7.8 GB 2 days ago
phi3:latest 64c1188f2485 2.4 GB 7 days ago
phi3:medium 1e67dff39209 7.9 GB 8 days ago
thebloke/laser-dolphin-mixtral-2x7b-dpo:latest f1dda7448ba2 7.8 GB 9 days ago
llama3:instruct 365c0bd3c000 4.7 GB 2 weeks ago
llama3:70b-instruct 786f3184aec0 39 GB 3 weeks ago
llama3:70b 786f3184aec0 39 GB 3 weeks ago
mixtral:instruct d39eb76ed9c5 26 GB 3 weeks ago
mixtral:8x7b d39eb76ed9c5 26 GB 3 weeks ago
mixtral:v0.1-instruct 6a0910fa6dc1 79 GB 3 weeks ago
llama2:latest 78e26419b446 3.8 GB 3 weeks ago
lggarcia@turing:~$ ollama run phi3:latest
⠴
```
The `ollama run` command just doesn't work anymore; it gets stuck until I kill the process.
```bash
lggarcia@turing:~$ ollama --version
ollama version is 0.1.44
lggarcia@turing:~$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
mixtral:v0.1-instruct 6a0910fa6dc1 91 GB 100% GPU Less than a second ago
lggarcia@turing:~$
```
This is the Linux service configuration:
```
Environment="OLLAMA_MODELS=/datassd/proyectos/modelos"
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_MAX_LOADED_MODELS=8"
Environment="OLLAMA_NUM_PARALLEL=8"
Environment="OLLAMA_DEBUG=1"
```
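These settings typically live in a systemd drop-in for the ollama unit. A sketch of how such an override is usually applied (the unit name `ollama.service` and a standard Linux install are assumed):

```shell
# Open (or create) a drop-in override for the ollama unit
sudo systemctl edit ollama.service

# In the editor, add the desired variables under a [Service] section, e.g.:
#   [Service]
#   Environment="OLLAMA_DEBUG=1"
#   Environment="OLLAMA_NUM_PARALLEL=8"

# Reload unit files and restart the service so the new environment takes effect
sudo systemctl daemon-reload
sudo systemctl restart ollama.service
```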
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.44
|
{
"login": "luisgg98",
"id": 45603226,
"node_id": "MDQ6VXNlcjQ1NjAzMjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/45603226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luisgg98",
"html_url": "https://github.com/luisgg98",
"followers_url": "https://api.github.com/users/luisgg98/followers",
"following_url": "https://api.github.com/users/luisgg98/following{/other_user}",
"gists_url": "https://api.github.com/users/luisgg98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luisgg98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luisgg98/subscriptions",
"organizations_url": "https://api.github.com/users/luisgg98/orgs",
"repos_url": "https://api.github.com/users/luisgg98/repos",
"events_url": "https://api.github.com/users/luisgg98/events{/privacy}",
"received_events_url": "https://api.github.com/users/luisgg98/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5168/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5168/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4459
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4459/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4459/comments
|
https://api.github.com/repos/ollama/ollama/issues/4459/events
|
https://github.com/ollama/ollama/pull/4459
| 2,298,893,867
|
PR_kwDOJ0Z1Ps5vlfTp
| 4,459
|
Sanitize the env var debug log
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-15T21:43:25
| 2024-05-15T21:58:59
| 2024-05-15T21:58:55
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4459",
"html_url": "https://github.com/ollama/ollama/pull/4459",
"diff_url": "https://github.com/ollama/ollama/pull/4459.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4459.patch",
"merged_at": "2024-05-15T21:58:55"
}
|
Only dump env vars we care about in the logs
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4459/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3378
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3378/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3378/comments
|
https://api.github.com/repos/ollama/ollama/issues/3378/events
|
https://github.com/ollama/ollama/pull/3378
| 2,211,996,382
|
PR_kwDOJ0Z1Ps5q_SzG
| 3,378
|
Update README.md
|
{
"login": "yaroslavyaroslav",
"id": 16612247,
"node_id": "MDQ6VXNlcjE2NjEyMjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/16612247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaroslavyaroslav",
"html_url": "https://github.com/yaroslavyaroslav",
"followers_url": "https://api.github.com/users/yaroslavyaroslav/followers",
"following_url": "https://api.github.com/users/yaroslavyaroslav/following{/other_user}",
"gists_url": "https://api.github.com/users/yaroslavyaroslav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaroslavyaroslav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaroslavyaroslav/subscriptions",
"organizations_url": "https://api.github.com/users/yaroslavyaroslav/orgs",
"repos_url": "https://api.github.com/users/yaroslavyaroslav/repos",
"events_url": "https://api.github.com/users/yaroslavyaroslav/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaroslavyaroslav/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-27T22:16:56
| 2024-03-31T17:10:05
| 2024-03-31T17:10:05
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3378",
"html_url": "https://github.com/ollama/ollama/pull/3378",
"diff_url": "https://github.com/ollama/ollama/pull/3378.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3378.patch",
"merged_at": "2024-03-31T17:10:05"
}
|
Plugins list updated
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3378/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2814
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2814/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2814/comments
|
https://api.github.com/repos/ollama/ollama/issues/2814/events
|
https://github.com/ollama/ollama/issues/2814
| 2,159,532,651
|
I_kwDOJ0Z1Ps6At9pr
| 2,814
|
Cannot use Ollama at all on Windows
|
{
"login": "TheWonderfulTartiflette",
"id": 44057002,
"node_id": "MDQ6VXNlcjQ0MDU3MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/44057002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheWonderfulTartiflette",
"html_url": "https://github.com/TheWonderfulTartiflette",
"followers_url": "https://api.github.com/users/TheWonderfulTartiflette/followers",
"following_url": "https://api.github.com/users/TheWonderfulTartiflette/following{/other_user}",
"gists_url": "https://api.github.com/users/TheWonderfulTartiflette/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheWonderfulTartiflette/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheWonderfulTartiflette/subscriptions",
"organizations_url": "https://api.github.com/users/TheWonderfulTartiflette/orgs",
"repos_url": "https://api.github.com/users/TheWonderfulTartiflette/repos",
"events_url": "https://api.github.com/users/TheWonderfulTartiflette/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheWonderfulTartiflette/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-28T18:03:57
| 2024-03-07T00:07:56
| 2024-03-07T00:07:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
On Windows, with the preview build, I installed Ollama, but as soon as I entered "ollama run llama2", I got this:
```
Error: max retries exceeded: 400: <?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><Message>Invalid Argument: range must be positive.</Message></Error>
```
Not sure what to do about this.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2814/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5553
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5553/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5553/comments
|
https://api.github.com/repos/ollama/ollama/issues/5553/events
|
https://github.com/ollama/ollama/issues/5553
| 2,396,881,977
|
I_kwDOJ0Z1Ps6O3YQ5
| 5,553
|
GLM4-1m support
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-07-09T01:17:38
| 2024-07-12T19:24:12
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
taozhiyu@603e5f4a42f1 downloads % ollama run glm4:9b-chat-1m-q8_0
pulling manifest
pulling 5c0b5b35f3e0... 100% ▕█████████████████▏ 10 GB
pulling e7e7aebd710c... 100% ▕█████████████████▏ 137 B
pulling e4f0dc83900a... 100% ▕█████████████████▏ 6.5 KB
pulling 4134f3eb0516... 100% ▕█████████████████▏ 81 B
pulling dcddad887d90... 100% ▕█████████████████▏ 489 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: llama runner process has terminated: signal: abort trap error:check_tensor_dims: tensor 'blk.0.attn_qkv.weight' has wrong shape; expected 4096, 4608, got 4096, 5120, 1, 1
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.2
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5553/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/317
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/317/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/317/comments
|
https://api.github.com/repos/ollama/ollama/issues/317/events
|
https://github.com/ollama/ollama/pull/317
| 1,845,540,061
|
PR_kwDOJ0Z1Ps5XqI4j
| 317
|
cmd: check GetBlobsPath error
|
{
"login": "soroushj",
"id": 4595459,
"node_id": "MDQ6VXNlcjQ1OTU0NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4595459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soroushj",
"html_url": "https://github.com/soroushj",
"followers_url": "https://api.github.com/users/soroushj/followers",
"following_url": "https://api.github.com/users/soroushj/following{/other_user}",
"gists_url": "https://api.github.com/users/soroushj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soroushj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soroushj/subscriptions",
"organizations_url": "https://api.github.com/users/soroushj/orgs",
"repos_url": "https://api.github.com/users/soroushj/repos",
"events_url": "https://api.github.com/users/soroushj/events{/privacy}",
"received_events_url": "https://api.github.com/users/soroushj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-08-10T16:43:53
| 2023-08-10T17:01:07
| 2023-08-10T16:57:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/317",
"html_url": "https://github.com/ollama/ollama/pull/317",
"diff_url": "https://github.com/ollama/ollama/pull/317.diff",
"patch_url": "https://github.com/ollama/ollama/pull/317.patch",
"merged_at": "2023-08-10T16:57:50"
}
|
The error returned by `server.GetBlobsPath` in `showLayer` was never checked. Check the error and return if not nil. Also, make newlines at the end of error messages consistent and fix a typo.
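The fix applies the standard Go idiom of propagating a returned error instead of discarding it. A minimal sketch of the pattern, with `getBlobsPath` as a hypothetical stand-in for `server.GetBlobsPath` (not the actual ollama code):

```go
package main

import (
	"errors"
	"fmt"
)

// getBlobsPath is a hypothetical stand-in for server.GetBlobsPath.
func getBlobsPath(digest string) (string, error) {
	if digest == "" {
		return "", errors.New("empty digest")
	}
	return "/blobs/" + digest, nil
}

// showLayer demonstrates the fix: check the returned error and
// propagate it with context, rather than silently ignoring it.
func showLayer(digest string) (string, error) {
	path, err := getBlobsPath(digest)
	if err != nil {
		return "", fmt.Errorf("couldn't get blobs path: %w", err)
	}
	return path, nil
}

func main() {
	if _, err := showLayer(""); err != nil {
		fmt.Println("error:", err)
	}
	p, _ := showLayer("sha256-abc")
	fmt.Println(p)
}
```

Wrapping with `%w` keeps the underlying error inspectable via `errors.Is`/`errors.As` while adding context for the log message.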
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/317/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4256
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4256/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4256/comments
|
https://api.github.com/repos/ollama/ollama/issues/4256/events
|
https://github.com/ollama/ollama/issues/4256
| 2,285,298,174
|
I_kwDOJ0Z1Ps6INuH-
| 4,256
|
Ollama 0.1.34 not working on AGX Orin 64gb while no issue on 0.1.30
|
{
"login": "casimirextreme",
"id": 14925422,
"node_id": "MDQ6VXNlcjE0OTI1NDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/14925422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/casimirextreme",
"html_url": "https://github.com/casimirextreme",
"followers_url": "https://api.github.com/users/casimirextreme/followers",
"following_url": "https://api.github.com/users/casimirextreme/following{/other_user}",
"gists_url": "https://api.github.com/users/casimirextreme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/casimirextreme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casimirextreme/subscriptions",
"organizations_url": "https://api.github.com/users/casimirextreme/orgs",
"repos_url": "https://api.github.com/users/casimirextreme/repos",
"events_url": "https://api.github.com/users/casimirextreme/events{/privacy}",
"received_events_url": "https://api.github.com/users/casimirextreme/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-08T10:51:50
| 2024-05-16T15:29:20
| 2024-05-08T15:44:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
It was working perfectly on 0.1.30, which I compiled myself. Now neither the official release nor my compiled version works.
`ollama serve` hangs with:
```
CUDA error: the resource allocation failed
current device: 0, in function cublas_handle at /data/llm/llama.cpp/ggml-cuda/common.cuh:526
cublasCreate_v2(&cublas_handles[device])
GGML_ASSERT: /data/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Inappropriate ioctl for device.
No stack.
The program is not being run.
```
I've added the logs of both versions:
[agx-orin-0.1.34.txt](https://github.com/ollama/ollama/files/15247743/agx-orin-0.1.34.txt)
[agx-orin-0.1.30.txt](https://github.com/ollama/ollama/files/15247744/agx-orin-0.1.30.txt)
It was compiled with:
```
OLLAMA_SKIP_CPU_GENERATE="1"
CMAKE_CUDA_ARCHITECTURES="72;87"
```
Is there any known issue with Jetson AGX orin?
### OS
Linux
### GPU
Nvidia
### CPU
Other
### Ollama version
0.1.34
|
{
"login": "casimirextreme",
"id": 14925422,
"node_id": "MDQ6VXNlcjE0OTI1NDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/14925422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/casimirextreme",
"html_url": "https://github.com/casimirextreme",
"followers_url": "https://api.github.com/users/casimirextreme/followers",
"following_url": "https://api.github.com/users/casimirextreme/following{/other_user}",
"gists_url": "https://api.github.com/users/casimirextreme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/casimirextreme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casimirextreme/subscriptions",
"organizations_url": "https://api.github.com/users/casimirextreme/orgs",
"repos_url": "https://api.github.com/users/casimirextreme/repos",
"events_url": "https://api.github.com/users/casimirextreme/events{/privacy}",
"received_events_url": "https://api.github.com/users/casimirextreme/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4256/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6429
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6429/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6429/comments
|
https://api.github.com/repos/ollama/ollama/issues/6429/events
|
https://github.com/ollama/ollama/pull/6429
| 2,474,322,317
|
PR_kwDOJ0Z1Ps54ya2m
| 6,429
|
CI: remove directories from dist dir before upload step
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-19T22:18:49
| 2024-08-19T22:19:25
| 2024-08-19T22:19:21
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6429",
"html_url": "https://github.com/ollama/ollama/pull/6429",
"diff_url": "https://github.com/ollama/ollama/pull/6429.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6429.patch",
"merged_at": "2024-08-19T22:19:21"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6429/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5081
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5081/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5081/comments
|
https://api.github.com/repos/ollama/ollama/issues/5081/events
|
https://github.com/ollama/ollama/issues/5081
| 2,355,816,361
|
I_kwDOJ0Z1Ps6Mauep
| 5,081
|
Timeout for long generation
|
{
"login": "slavonnet",
"id": 9463626,
"node_id": "MDQ6VXNlcjk0NjM2MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9463626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slavonnet",
"html_url": "https://github.com/slavonnet",
"followers_url": "https://api.github.com/users/slavonnet/followers",
"following_url": "https://api.github.com/users/slavonnet/following{/other_user}",
"gists_url": "https://api.github.com/users/slavonnet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slavonnet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slavonnet/subscriptions",
"organizations_url": "https://api.github.com/users/slavonnet/orgs",
"repos_url": "https://api.github.com/users/slavonnet/repos",
"events_url": "https://api.github.com/users/slavonnet/events{/privacy}",
"received_events_url": "https://api.github.com/users/slavonnet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-06-16T14:52:39
| 2024-11-06T01:20:44
| 2024-11-06T01:20:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
https://github.com/ollama/ollama-js/issues/103
After 5 minutes of execution, a 500 error is returned; there is no way to increase the timeout. I use `stream: false`.
Neither `keepalive: 15m` nor the env `OLLAMA_KEEP_ALIVE=15m` helps.
```
июн 16 17:43:34 ai ollama[3733311]: [GIN] 2024/06/16 - 17:43:34 | 500 | 5m1s | 127.0.0.1 | POST "/api/chat"
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.45-rc1
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5081/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1834
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1834/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1834/comments
|
https://api.github.com/repos/ollama/ollama/issues/1834/events
|
https://github.com/ollama/ollama/pull/1834
| 2,069,003,418
|
PR_kwDOJ0Z1Ps5jZkEa
| 1,834
|
Detect very old CUDA GPUs and fall back to CPU
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-07T05:56:34
| 2024-01-07T18:39:52
| 2024-01-07T18:39:49
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1834",
"html_url": "https://github.com/ollama/ollama/pull/1834",
"diff_url": "https://github.com/ollama/ollama/pull/1834.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1834.patch",
"merged_at": "2024-01-07T18:39:49"
}
|
If we try to load the CUDA library on an old GPU, it panics and crashes the server. This checks the compute capability before we load the library so we can gracefully fall back to CPU mode.
Prior to version 0.1.18, the fallback behavior resulted from the subprocess runner crashing. Example from an old GPU:
```
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce GTX 765M, compute capability 3.0
cuBLAS error 3 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:5552
current device: 0
2024/01/06 21:48:54 llama.go:320: llama runner exited with error: exit status 1
```
In 0.1.18 without this fix, the server crashes with a panic due to the assert in llama.cpp.
With this fix on the same system we now detect and fallback to CPU mode:
```
2024/01/06 21:52:17 shim_ext_server.go:142: Dynamic LLM variants [cuda rocm]
2024/01/06 21:52:17 gpu.go:37: Detecting GPU type
2024/01/06 21:52:17 gpu.go:56: Nvidia GPU detected
2024/01/06 21:52:17 gpu.go:89: CUDA GPU is too old. Falling back to CPU mode. Compute Capability detected: 3.0
2024/01/06 21:52:17 routes.go:953: no GPU detected
...
```
Example output on a newer supported GPU which correctly runs on the GPU:
```
2024/01/06 21:55:11 shim_ext_server.go:142: Dynamic LLM variants [cuda rocm]
2024/01/06 21:55:11 gpu.go:37: Detecting GPU type
2024/01/06 21:55:11 gpu.go:56: Nvidia GPU detected
2024/01/06 21:55:11 gpu.go:86: CUDA Compute Capability detected: 7.5
```
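The gating logic this PR describes — check the compute capability before loading the CUDA library, and degrade to CPU mode instead of crashing — can be sketched as follows. This is a hypothetical illustration, not the actual Go code in `gpu.go`, and the minimum-capability threshold here is an assumption for the sketch:

```python
# Hypothetical sketch of the pre-load gate described above.
# MIN_COMPUTE_CAPABILITY is an illustrative assumption, not the
# actual threshold used by Ollama's gpu package.
MIN_COMPUTE_CAPABILITY = (6, 0)

def select_runner(detected_capability):
    """Return "cuda" when the GPU is new enough, otherwise "cpu".

    detected_capability is a (major, minor) tuple, or None when no
    NVIDIA GPU was detected.
    """
    if detected_capability is None:
        return "cpu"  # no GPU: nothing to gate
    if detected_capability < MIN_COMPUTE_CAPABILITY:
        # Too old: loading the CUDA library would assert and kill the
        # server, so fall back to CPU mode gracefully instead.
        return "cpu"
    return "cuda"
```

With a check of this shape, a GTX 765M (compute 3.0) routes to CPU mode while a compute 7.5 card stays on CUDA, matching the two log transcripts above.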
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1834/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1334
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1334/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1334/comments
|
https://api.github.com/repos/ollama/ollama/issues/1334/events
|
https://github.com/ollama/ollama/pull/1334
| 2,019,308,220
|
PR_kwDOJ0Z1Ps5g0KfR
| 1,334
|
load projectors
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-30T18:51:22
| 2023-12-05T22:40:54
| 2023-12-05T22:40:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1334",
"html_url": "https://github.com/ollama/ollama/pull/1334",
"diff_url": "https://github.com/ollama/ollama/pull/1334.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1334.patch",
"merged_at": "2023-12-05T22:40:53"
}
|
Continuation of #1250 and #1308 to load additional models.
This adds the model configuration to the generate response:
```
$ curl -s localhost:11434/api/generate -d '{"model":"llava:7b-v1.5-q4_0"}' | jq .
{
"model": "llava:7b-v1.5-q4_0",
"created_at": "2023-12-01T19:41:43.684471Z",
"response": "",
"model_configuration": {
"model_format": "gguf",
"model_family": "llama",
"model_families": [
"llama",
"clip"
],
"model_type": "7B",
"file_type": "Q4_0"
},
"done": true
}
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1334/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1334/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6598
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6598/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6598/comments
|
https://api.github.com/repos/ollama/ollama/issues/6598/events
|
https://github.com/ollama/ollama/pull/6598
| 2,501,765,290
|
PR_kwDOJ0Z1Ps56NItH
| 6,598
|
commit
|
{
"login": "rpreslar4765",
"id": 89657947,
"node_id": "MDQ6VXNlcjg5NjU3OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/89657947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rpreslar4765",
"html_url": "https://github.com/rpreslar4765",
"followers_url": "https://api.github.com/users/rpreslar4765/followers",
"following_url": "https://api.github.com/users/rpreslar4765/following{/other_user}",
"gists_url": "https://api.github.com/users/rpreslar4765/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rpreslar4765/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rpreslar4765/subscriptions",
"organizations_url": "https://api.github.com/users/rpreslar4765/orgs",
"repos_url": "https://api.github.com/users/rpreslar4765/repos",
"events_url": "https://api.github.com/users/rpreslar4765/events{/privacy}",
"received_events_url": "https://api.github.com/users/rpreslar4765/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-09-03T01:33:09
| 2024-09-03T03:34:25
| 2024-09-03T03:34:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6598",
"html_url": "https://github.com/ollama/ollama/pull/6598",
"diff_url": "https://github.com/ollama/ollama/pull/6598.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6598.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6598/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1963
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1963/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1963/comments
|
https://api.github.com/repos/ollama/ollama/issues/1963/events
|
https://github.com/ollama/ollama/pull/1963
| 2,079,693,297
|
PR_kwDOJ0Z1Ps5j-KNu
| 1,963
|
trim chat prompt based on llm context size
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-12T20:48:36
| 2024-01-30T20:59:30
| 2024-01-30T20:59:29
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1963",
"html_url": "https://github.com/ollama/ollama/pull/1963",
"diff_url": "https://github.com/ollama/ollama/pull/1963.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1963.patch",
"merged_at": "2024-01-30T20:59:29"
}
|
When trimming the input chat prompt, we need to make sure we keep the prompt template in the expected format. Without this, the prompt is trimmed without accounting for the model template when the maximum context length is reached, which can result in unexpected behavior from the model.
- update the `ChatPrompt` function to return a list of prompt variables, allowing the calling function to append them into the final prompt
- create the final prompt based on the loaded LLM's context window size, while preserving the prompt template formatting and system message in the first message of the new context window
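A minimal sketch of the trimming strategy the bullets above describe — hypothetical, written in Python rather than the PR's Go, with `count_tokens` standing in for the real tokenizer:

```python
def trim_to_context(messages, max_tokens, count_tokens):
    """Keep the first (system) message, then add back the most recent
    messages that still fit within max_tokens; older turns drop first."""
    system, rest = messages[0], messages[1:]
    budget = max_tokens - count_tokens(system)
    kept = []
    for msg in reversed(rest):       # walk from newest to oldest
        cost = count_tokens(msg)
        if cost > budget:
            break                    # context window is full
        kept.append(msg)
        budget -= cost
    return [system] + list(reversed(kept))
```

The system message is pinned at index 0 so the prompt template stays well-formed; only the middle of the conversation is sacrificed when the window fills.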
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1963/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/263
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/263/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/263/comments
|
https://api.github.com/repos/ollama/ollama/issues/263/events
|
https://github.com/ollama/ollama/issues/263
| 1,834,033,064
|
I_kwDOJ0Z1Ps5tUR-o
| 263
|
Crash when running many (500) generations
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-08-02T23:48:03
| 2023-08-02T23:53:45
| 2023-08-02T23:53:44
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
2023-08-03 01:24:10.990 ollama[3006:24871] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSArrayM setObject:atIndexedSubscript:]: object cannot be nil'
*** First throw call stack:
(
0 CoreFoundation 0x00000001aa66b154 __exceptionPreprocess + 176
1 libobjc.A.dylib 0x00000001aa18a4d4 objc_exception_throw + 60
2 CoreFoundation 0x00000001aa7559b8 -[__NSCFString characterAtIndex:].cold.1 + 0
3 CoreFoundation 0x00000001aa752280 -[__NSArrayM setObject:atIndexedSubscript:].cold.2 + 0
4 CoreFoundation 0x00000001aa609f60 -[__NSArrayM setObject:atIndexedSubscript:] + 640
5 ollama 0x00000001005b258c ggml_metal_graph_compute + 108
6 ollama 0x00000001005a56c4 _ZL19llama_eval_internalR13llama_contextPKiPKfiiiPKc + 2620
7 ollama 0x00000001005a4c58 llama_eval + 40
8 ollama 0x0000000100570228 _cgo_2053a7d5fdc2_Cfunc_llama_eval + 44
9 ollama 0x00000001000be05c runtime.asmcgocall.abi0 + 124
)
libc++abi: terminating due to uncaught exception of type NSException
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/263/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/888
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/888/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/888/comments
|
https://api.github.com/repos/ollama/ollama/issues/888/events
|
https://github.com/ollama/ollama/issues/888
| 1,958,910,016
|
I_kwDOJ0Z1Ps50wphA
| 888
|
How to deploy on k8s
|
{
"login": "xinmans",
"id": 2713008,
"node_id": "MDQ6VXNlcjI3MTMwMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2713008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinmans",
"html_url": "https://github.com/xinmans",
"followers_url": "https://api.github.com/users/xinmans/followers",
"following_url": "https://api.github.com/users/xinmans/following{/other_user}",
"gists_url": "https://api.github.com/users/xinmans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinmans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinmans/subscriptions",
"organizations_url": "https://api.github.com/users/xinmans/orgs",
"repos_url": "https://api.github.com/users/xinmans/repos",
"events_url": "https://api.github.com/users/xinmans/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinmans/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2023-10-24T09:59:21
| 2023-12-04T22:40:08
| 2023-12-04T22:40:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can anyone share a deployment.yaml?
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/888/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3748
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3748/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3748/comments
|
https://api.github.com/repos/ollama/ollama/issues/3748/events
|
https://github.com/ollama/ollama/issues/3748
| 2,252,194,110
|
I_kwDOJ0Z1Ps6GPcE-
| 3,748
|
Import from a HF model directly?
|
{
"login": "wennycooper",
"id": 7479445,
"node_id": "MDQ6VXNlcjc0Nzk0NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7479445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wennycooper",
"html_url": "https://github.com/wennycooper",
"followers_url": "https://api.github.com/users/wennycooper/followers",
"following_url": "https://api.github.com/users/wennycooper/following{/other_user}",
"gists_url": "https://api.github.com/users/wennycooper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wennycooper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wennycooper/subscriptions",
"organizations_url": "https://api.github.com/users/wennycooper/orgs",
"repos_url": "https://api.github.com/users/wennycooper/repos",
"events_url": "https://api.github.com/users/wennycooper/events{/privacy}",
"received_events_url": "https://api.github.com/users/wennycooper/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-04-19T06:35:34
| 2024-07-30T16:46:24
| 2024-05-10T00:09:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is it possible to import a Hugging Face model directly (given a Hugging Face model ID)?
I don't want to convert it to GGUF.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3748/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4097
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4097/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4097/comments
|
https://api.github.com/repos/ollama/ollama/issues/4097/events
|
https://github.com/ollama/ollama/issues/4097
| 2,274,804,599
|
I_kwDOJ0Z1Ps6HlsN3
| 4,097
|
Generation Request Failing When Ollama Server Running Inside a Docker Container
|
{
"login": "Deepansharora27",
"id": 43300955,
"node_id": "MDQ6VXNlcjQzMzAwOTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43300955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Deepansharora27",
"html_url": "https://github.com/Deepansharora27",
"followers_url": "https://api.github.com/users/Deepansharora27/followers",
"following_url": "https://api.github.com/users/Deepansharora27/following{/other_user}",
"gists_url": "https://api.github.com/users/Deepansharora27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Deepansharora27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Deepansharora27/subscriptions",
"organizations_url": "https://api.github.com/users/Deepansharora27/orgs",
"repos_url": "https://api.github.com/users/Deepansharora27/repos",
"events_url": "https://api.github.com/users/Deepansharora27/events{/privacy}",
"received_events_url": "https://api.github.com/users/Deepansharora27/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-05-02T07:22:47
| 2024-05-03T16:51:54
| 2024-05-03T16:51:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
This is my Dockerfile, which uses the Ollama base image:
```
FROM ollama/ollama:0.1.32 AS OllamaServer
WORKDIR /usr/src/app
COPY . .
EXPOSE 11434
ENV OLLAMA_HOST 0.0.0.0
ENV OLLAMA_ORIGINS=http://0.0.0.0:11434
RUN nohup bash -c "ollama serve &" && sleep 5 && ollama create llama3-custom -f /usr/src/app/model/Modelfile
```
I am trying to deploy this image as a container on a remote VM instance on Google Cloud Platform.
The problem is that when I make a curl request for a generation/chat request from my local machine, the operation times out. Here I use the remote VM's IP:port for the generation request.
<img width="1447" alt="Screenshot 2024-05-02 at 12 50 23 PM" src="https://github.com/ollama/ollama/assets/43300955/0bf21689-eab1-462e-b7d5-db818c7f3ab4">
However, if I run the same request inside the remote VM via `curl localhost:11434`, it succeeds and there is no timeout.
Can you advise what can be done about this? I am using the appropriate environment variables, as suggested, to bind on all network interfaces.
### OS
Ubuntu 20.04 LTS
### GPU
_No response_
### CPU
Intel, AMD
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4097/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4097/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8633
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8633/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8633/comments
|
https://api.github.com/repos/ollama/ollama/issues/8633/events
|
https://github.com/ollama/ollama/pull/8633
| 2,815,643,279
|
PR_kwDOJ0Z1Ps6JOZes
| 8,633
|
my commit
|
{
"login": "aditya-agrawalSFDC",
"id": 122862436,
"node_id": "U_kgDOB1K7ZA",
"avatar_url": "https://avatars.githubusercontent.com/u/122862436?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aditya-agrawalSFDC",
"html_url": "https://github.com/aditya-agrawalSFDC",
"followers_url": "https://api.github.com/users/aditya-agrawalSFDC/followers",
"following_url": "https://api.github.com/users/aditya-agrawalSFDC/following{/other_user}",
"gists_url": "https://api.github.com/users/aditya-agrawalSFDC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aditya-agrawalSFDC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aditya-agrawalSFDC/subscriptions",
"organizations_url": "https://api.github.com/users/aditya-agrawalSFDC/orgs",
"repos_url": "https://api.github.com/users/aditya-agrawalSFDC/repos",
"events_url": "https://api.github.com/users/aditya-agrawalSFDC/events{/privacy}",
"received_events_url": "https://api.github.com/users/aditya-agrawalSFDC/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2025-01-28T13:15:07
| 2025-01-28T13:17:19
| 2025-01-28T13:17:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8633",
"html_url": "https://github.com/ollama/ollama/pull/8633",
"diff_url": "https://github.com/ollama/ollama/pull/8633.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8633.patch",
"merged_at": null
}
| null |
{
"login": "aditya-agrawalSFDC",
"id": 122862436,
"node_id": "U_kgDOB1K7ZA",
"avatar_url": "https://avatars.githubusercontent.com/u/122862436?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aditya-agrawalSFDC",
"html_url": "https://github.com/aditya-agrawalSFDC",
"followers_url": "https://api.github.com/users/aditya-agrawalSFDC/followers",
"following_url": "https://api.github.com/users/aditya-agrawalSFDC/following{/other_user}",
"gists_url": "https://api.github.com/users/aditya-agrawalSFDC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aditya-agrawalSFDC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aditya-agrawalSFDC/subscriptions",
"organizations_url": "https://api.github.com/users/aditya-agrawalSFDC/orgs",
"repos_url": "https://api.github.com/users/aditya-agrawalSFDC/repos",
"events_url": "https://api.github.com/users/aditya-agrawalSFDC/events{/privacy}",
"received_events_url": "https://api.github.com/users/aditya-agrawalSFDC/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8633/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1298
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1298/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1298/comments
|
https://api.github.com/repos/ollama/ollama/issues/1298/events
|
https://github.com/ollama/ollama/issues/1298
| 2,014,096,005
|
I_kwDOJ0Z1Ps54DKqF
| 1,298
|
clipboard paste issue
|
{
"login": "eramax",
"id": 542413,
"node_id": "MDQ6VXNlcjU0MjQxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/542413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eramax",
"html_url": "https://github.com/eramax",
"followers_url": "https://api.github.com/users/eramax/followers",
"following_url": "https://api.github.com/users/eramax/following{/other_user}",
"gists_url": "https://api.github.com/users/eramax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eramax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eramax/subscriptions",
"organizations_url": "https://api.github.com/users/eramax/orgs",
"repos_url": "https://api.github.com/users/eramax/repos",
"events_url": "https://api.github.com/users/eramax/events{/privacy}",
"received_events_url": "https://api.github.com/users/eramax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2023-11-28T10:15:49
| 2023-11-29T00:59:28
| 2023-11-29T00:59:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have noticed this many times:
when I paste long content from Windows into a WSL terminal using Windows Terminal, the pasted input is printed incorrectly.
The source (original)
```go
explain this code
func Parse(reader io.Reader) ([]Command, error) {
var commands []Command
var command, modelCommand Command
scanner := bufio.NewScanner(reader)
scanner.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), bufio.MaxScanTokenSize)
scanner.Split(scanModelfile)
for scanner.Scan() {
line := scanner.Bytes()
fields := bytes.SplitN(line, []byte(" "), 2)
if len(fields) == 0 || len(fields[0]) == 0 {
continue
}
switch string(bytes.ToUpper(fields[0])) {
case "FROM":
command.Name = "model"
command.Args = string(fields[1])
// copy command for validation
modelCommand = command
case "LICENSE", "TEMPLATE", "SYSTEM", "PROMPT", "ADAPTER":
command.Name = string(bytes.ToLower(fields[0]))
command.Args = string(fields[1])
case "PARAMETER":
fields = bytes.SplitN(fields[1], []byte(" "), 2)
if len(fields) < 2 {
return nil, fmt.Errorf("missing value for %s", fields)
}
command.Name = string(fields[0])
command.Args = string(fields[1])
case "EMBED":
return nil, fmt.Errorf("deprecated command: EMBED is no longer supported, use the /embed API endpoint instead")
default:
if !bytes.HasPrefix(fields[0], []byte("#")) {
// log a warning for unknown commands
log.Printf("WARNING: Unknown command: %s", fields[0])
}
continue
}
commands = append(commands, command)
command.Reset()
}
if modelCommand.Args == "" {
return nil, errors.New("no FROM line for the model was specified")
}
return commands, scanner.Err()
}
```
The terminal content
```bash
>>> """explain this code
... func Parse(reader io.Reader) ([]Command, error) {
... var commands []Command
... var command, modelCommand Command
...
... scanner := bufio.NewScanner(reader)
... scanner.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), bufio.MaxScanTokenSize)
... scanner.Split(scanModelfile)
... for scanner.Scan() {
... line := scanner.Bytes()
...
... fields := bytes.SplitN(line, []byte(" "), 2)
... if len(fields) == 0 || len(fields[0]) == 0 {
... continue
... }
...
... switch string(bytes.ToUpper(fields[0])) {
... case "FROM":
... command.Name = "model"
... command.Args = string(fields[1])
... // copy command for validation
... modelCommand = command
... case "LICENSE", "TEMPLATE", "SYSTEM", "PROMPT", "ADAPTER":
... command.Name = string(bytes.ToLower(fields[0]))
... command.Args = string(fields[1])
... case "PARAMETER":
... fields = bytes.SplitN(fields[1], []byte(" "), 2)
... if len(fields) < 2 {
... return nil, fmt.Errorf("missing value for %s", fields)
... }
...
... mmand.A command.Name = string(fields[0])
... command.Args = string(fields[1])
... case "EMBED":
... return nil, fmt.Errorf("deprecated command: EMBED is no longer supported, use the /embed API endpoint instead")
... default:
... if !bytes.HasPrefix(fields[0], []byte("#")) {
... // log a warning for unknown commands
... log.Printf("WARNING: Unknown command: %s", fields[0])
... }
... continue
... }
...
... commands = append(commands, command)
... command.Reset()
... }
...
... if modelCommand.Args == "" {
... return nil, errors.New("no FROM line for the model was specified")
... }
...
... return commands, scanner.Err()
... }
```

|
{
"login": "eramax",
"id": 542413,
"node_id": "MDQ6VXNlcjU0MjQxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/542413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eramax",
"html_url": "https://github.com/eramax",
"followers_url": "https://api.github.com/users/eramax/followers",
"following_url": "https://api.github.com/users/eramax/following{/other_user}",
"gists_url": "https://api.github.com/users/eramax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eramax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eramax/subscriptions",
"organizations_url": "https://api.github.com/users/eramax/orgs",
"repos_url": "https://api.github.com/users/eramax/repos",
"events_url": "https://api.github.com/users/eramax/events{/privacy}",
"received_events_url": "https://api.github.com/users/eramax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1298/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2702
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2702/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2702/comments
|
https://api.github.com/repos/ollama/ollama/issues/2702/events
|
https://github.com/ollama/ollama/issues/2702
| 2,150,637,835
|
I_kwDOJ0Z1Ps6AMCEL
| 2,702
|
[Petition] Enable "Discussion" tab on Ollama for Q/A and or 'I need help' type of questions
|
{
"login": "seanmavley",
"id": 5289083,
"node_id": "MDQ6VXNlcjUyODkwODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5289083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seanmavley",
"html_url": "https://github.com/seanmavley",
"followers_url": "https://api.github.com/users/seanmavley/followers",
"following_url": "https://api.github.com/users/seanmavley/following{/other_user}",
"gists_url": "https://api.github.com/users/seanmavley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seanmavley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seanmavley/subscriptions",
"organizations_url": "https://api.github.com/users/seanmavley/orgs",
"repos_url": "https://api.github.com/users/seanmavley/repos",
"events_url": "https://api.github.com/users/seanmavley/events{/privacy}",
"received_events_url": "https://api.github.com/users/seanmavley/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-23T08:48:05
| 2024-05-10T01:16:55
| 2024-05-10T01:16:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Going through the issues, some are less bug reports specific to Ollama and more requests for help or discussion.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2702/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2702/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2105
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2105/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2105/comments
|
https://api.github.com/repos/ollama/ollama/issues/2105/events
|
https://github.com/ollama/ollama/pull/2105
| 2,091,875,465
|
PR_kwDOJ0Z1Ps5kneb2
| 2,105
|
Update demo code for langchain-community separation
|
{
"login": "t-cool",
"id": 3023976,
"node_id": "MDQ6VXNlcjMwMjM5NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3023976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t-cool",
"html_url": "https://github.com/t-cool",
"followers_url": "https://api.github.com/users/t-cool/followers",
"following_url": "https://api.github.com/users/t-cool/following{/other_user}",
"gists_url": "https://api.github.com/users/t-cool/gists{/gist_id}",
"starred_url": "https://api.github.com/users/t-cool/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/t-cool/subscriptions",
"organizations_url": "https://api.github.com/users/t-cool/orgs",
"repos_url": "https://api.github.com/users/t-cool/repos",
"events_url": "https://api.github.com/users/t-cool/events{/privacy}",
"received_events_url": "https://api.github.com/users/t-cool/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-20T04:07:15
| 2024-01-26T02:31:05
| 2024-01-26T02:31:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2105",
"html_url": "https://github.com/ollama/ollama/pull/2105",
"diff_url": "https://github.com/ollama/ollama/pull/2105.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2105.patch",
"merged_at": null
}
|
I have updated the demo code following the separation of third-party integrations into langchain-community.
|
{
"login": "t-cool",
"id": 3023976,
"node_id": "MDQ6VXNlcjMwMjM5NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3023976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t-cool",
"html_url": "https://github.com/t-cool",
"followers_url": "https://api.github.com/users/t-cool/followers",
"following_url": "https://api.github.com/users/t-cool/following{/other_user}",
"gists_url": "https://api.github.com/users/t-cool/gists{/gist_id}",
"starred_url": "https://api.github.com/users/t-cool/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/t-cool/subscriptions",
"organizations_url": "https://api.github.com/users/t-cool/orgs",
"repos_url": "https://api.github.com/users/t-cool/repos",
"events_url": "https://api.github.com/users/t-cool/events{/privacy}",
"received_events_url": "https://api.github.com/users/t-cool/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2105/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6580
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6580/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6580/comments
|
https://api.github.com/repos/ollama/ollama/issues/6580/events
|
https://github.com/ollama/ollama/issues/6580
| 2,498,937,562
|
I_kwDOJ0Z1Ps6U8sLa
| 6,580
|
phi3.5:3.8b-mini-instruct is missing parameters on Ollama's website.
|
{
"login": "vYLQs6",
"id": 143073604,
"node_id": "U_kgDOCIchRA",
"avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vYLQs6",
"html_url": "https://github.com/vYLQs6",
"followers_url": "https://api.github.com/users/vYLQs6/followers",
"following_url": "https://api.github.com/users/vYLQs6/following{/other_user}",
"gists_url": "https://api.github.com/users/vYLQs6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vYLQs6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vYLQs6/subscriptions",
"organizations_url": "https://api.github.com/users/vYLQs6/orgs",
"repos_url": "https://api.github.com/users/vYLQs6/repos",
"events_url": "https://api.github.com/users/vYLQs6/events{/privacy}",
"received_events_url": "https://api.github.com/users/vYLQs6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-08-31T16:31:57
| 2024-09-05T19:01:26
| 2024-09-05T19:01:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
As shown in the screenshot, there are no parameters, so `ollama run phi3.5:latest` will not function correctly.

Here is a corrected Modelfile if anyone needs it:
```
FROM phi3.5:3.8b-mini-128k-instruct-q8_0
TEMPLATE """{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|end|>
{{ end }}<|assistant|>
{{ .Response }}<|end|>
"""
PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|end|>"
PARAMETER stop "<|assistant|>"
```
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.8
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6580/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1540
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1540/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1540/comments
|
https://api.github.com/repos/ollama/ollama/issues/1540/events
|
https://github.com/ollama/ollama/pull/1540
| 2,042,956,095
|
PR_kwDOJ0Z1Ps5iEbGQ
| 1,540
|
docs: add brew link
|
{
"login": "kassadin",
"id": 1104051,
"node_id": "MDQ6VXNlcjExMDQwNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kassadin",
"html_url": "https://github.com/kassadin",
"followers_url": "https://api.github.com/users/kassadin/followers",
"following_url": "https://api.github.com/users/kassadin/following{/other_user}",
"gists_url": "https://api.github.com/users/kassadin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kassadin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kassadin/subscriptions",
"organizations_url": "https://api.github.com/users/kassadin/orgs",
"repos_url": "https://api.github.com/users/kassadin/repos",
"events_url": "https://api.github.com/users/kassadin/events{/privacy}",
"received_events_url": "https://api.github.com/users/kassadin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-15T06:00:30
| 2023-12-25T03:03:18
| 2023-12-25T03:03:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1540",
"html_url": "https://github.com/ollama/ollama/pull/1540",
"diff_url": "https://github.com/ollama/ollama/pull/1540.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1540.patch",
"merged_at": null
}
| null |
{
"login": "kassadin",
"id": 1104051,
"node_id": "MDQ6VXNlcjExMDQwNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kassadin",
"html_url": "https://github.com/kassadin",
"followers_url": "https://api.github.com/users/kassadin/followers",
"following_url": "https://api.github.com/users/kassadin/following{/other_user}",
"gists_url": "https://api.github.com/users/kassadin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kassadin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kassadin/subscriptions",
"organizations_url": "https://api.github.com/users/kassadin/orgs",
"repos_url": "https://api.github.com/users/kassadin/repos",
"events_url": "https://api.github.com/users/kassadin/events{/privacy}",
"received_events_url": "https://api.github.com/users/kassadin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1540/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2032
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2032/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2032/comments
|
https://api.github.com/repos/ollama/ollama/issues/2032/events
|
https://github.com/ollama/ollama/issues/2032
| 2,086,604,523
|
I_kwDOJ0Z1Ps58Xw7r
| 2,032
|
Unable to pull models on NTFS filesystem
|
{
"login": "luannbertaud",
"id": 60100363,
"node_id": "MDQ6VXNlcjYwMTAwMzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/60100363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luannbertaud",
"html_url": "https://github.com/luannbertaud",
"followers_url": "https://api.github.com/users/luannbertaud/followers",
"following_url": "https://api.github.com/users/luannbertaud/following{/other_user}",
"gists_url": "https://api.github.com/users/luannbertaud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luannbertaud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luannbertaud/subscriptions",
"organizations_url": "https://api.github.com/users/luannbertaud/orgs",
"repos_url": "https://api.github.com/users/luannbertaud/repos",
"events_url": "https://api.github.com/users/luannbertaud/events{/privacy}",
"received_events_url": "https://api.github.com/users/luannbertaud/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-01-17T17:01:35
| 2024-04-08T16:54:05
| 2024-04-08T16:54:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
### Context
I am running **ollama** using the docker image, but I want to store the models on an external SSD to prevent the container from filling my computer storage.
The way I'm doing it is by mounting the `~/.ollama/` directory of the container onto my SSD.
### Issue
Since the Docker image is built for Linux, I suppose the `GOOS` variable is set to `linux` (I found this variable [in code](https://github.com/jmorganca/ollama/blob/d5a73533574acb02069e74f1d01f6775577391bc/server/layers.go#L51)).
The problem is that my SSD uses the NTFS filesystem, where the **:** (colon) character in the blob file names (sha256:f7c4e...) is forbidden.
> Error: open /root/.ollama/models/blobs/sha256:4dc8bd...6e0dac-partial-0: invalid argument
### Proposition
Make the replacement (colon to hyphen) depend on the filesystem, or replace the colon with a character that is valid everywhere.
### Disclaimer
I've never developed in Go, so I'm really not sure about the origin of the problem; maybe the issue is very different from what I think. However, downloading the blobs inside the container, renaming them (in the manifest too), and then moving them onto the NTFS filesystem worked.
### Data
Docker Image: ollama/ollama:latest (sha256:80ed5afc9183bcf3b6c14d38f5b695472bb8af44f2d5fcfba5bbbb4a1a012e72)
Model: mistral:7b
OS: Fedora 37
Storage: External SSD - NTFS
Docker: 24.0.7
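As a quick illustration of the proposed fix, the replacement itself is trivial and filesystem-agnostic; this is only a sketch of the idea, not the actual Ollama code:

```shell
# ':' is forbidden in NTFS file names; '-' is accepted on every common
# filesystem, so "sha256:<digest>" becomes "sha256-<digest>".
digest="sha256:4dc8bd0123"
portable=$(printf '%s' "$digest" | tr ':' '-')
echo "$portable"   # sha256-4dc8bd0123
```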
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2032/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3430
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3430/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3430/comments
|
https://api.github.com/repos/ollama/ollama/issues/3430/events
|
https://github.com/ollama/ollama/issues/3430
| 2,217,273,345
|
I_kwDOJ0Z1Ps6EKOgB
| 3,430
|
CUBLAS_STATUS_ALLOC_FAILED
|
{
"login": "nanshaws",
"id": 98329722,
"node_id": "U_kgDOBdxkeg",
"avatar_url": "https://avatars.githubusercontent.com/u/98329722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nanshaws",
"html_url": "https://github.com/nanshaws",
"followers_url": "https://api.github.com/users/nanshaws/followers",
"following_url": "https://api.github.com/users/nanshaws/following{/other_user}",
"gists_url": "https://api.github.com/users/nanshaws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nanshaws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nanshaws/subscriptions",
"organizations_url": "https://api.github.com/users/nanshaws/orgs",
"repos_url": "https://api.github.com/users/nanshaws/repos",
"events_url": "https://api.github.com/users/nanshaws/events{/privacy}",
"received_events_url": "https://api.github.com/users/nanshaws/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-04-01T00:29:56
| 2024-05-15T14:30:23
| 2024-04-01T02:15:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
time=2024-04-01T08:23:20.161+08:00 level=INFO source=gpu.go:115 msg="Detecting GPU type"
time=2024-04-01T08:23:20.161+08:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library cudart64_*.dll"
time=2024-04-01T08:23:20.170+08:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\cudart64_110.dll c:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.2\\bin\\cudart64_110.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.2\\bin\\cudart64_110.dll]"
time=2024-04-01T08:23:20.234+08:00 level=INFO source=gpu.go:120 msg="Nvidia GPU detected via cudart"
time=2024-04-01T08:23:20.235+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-01T08:23:20.382+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-04-01T08:23:20.382+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-01T08:23:20.382+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-04-01T08:23:20.383+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-01T08:23:20.383+08:00 level=INFO source=assets.go:108 msg="Updating PATH to C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\ollama1513712000\\runners\\cuda_v11.3;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.2\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.2\\libnvvp;C:\\Program Files (x86)\\jdk/bin;D:\\work\\graalvm-jdk-17_windows-x64_bin\\graalvm-jdk-17.0.9+11.1\\bin;D:\\WindowsVSC\\VC\\Tools\\MSVC\\14.36.32532\\bin\\Hostx64\\x64\\;C:\\Program Files\\PlasticSCM5\\server;C:\\Program Files\\PlasticSCM5\\client;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;D:\\work\\apache-tomcat-9.0.1-windows-x64\\apache-tomcat-9.0.1\\bin\\;D:\\work\\apache-maven-3.8.8-bin\\apache-maven-3.8.8\\bin\\;D:\\work\\gradle-8.2.1-all\\gradle-8.2.1\\bin;D:\\work\\apache-jmeter-5.5\\bin;D:\\work\\w64devkit-1.19.0\\w64devkit\\bin;C:\\Program Files\\Docker\\Docker\\resources\\bin;C:\\Program Files\\MySQL\\MySQL Server 8.0\\bin;D:\\Git\\cmd;D:\\python\\;D:\\nvm;C:\\Program Files\\nodejs;D:\\work\\visualvm_216\\bin;D:\\HashiCorp\\Vagrant\\bin;D:\\weixin\\微信web开发者工具\\dll;D:\\work\\netcat-win32-1.12;D:\\work\\VMware-ovftool-4.5.0-20459872-win.x86_64\\ovftool;D:\\work\\lu;D:\\work\\kotlin-compiler-1.9.22\\kotlinc\\bin;C:\\Program Files\\CMake\\bin;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2020.3.0\\;D:\\miniconda3;D:\\miniconda3\\Library\\mingw-w64\\bin;D:\\miniconda3\\Library\\usr\\bin;D:\\miniconda3\\Library\\bin;D:\\miniconda3\\Scripts;C:\\Program Files\\MySQL\\MySQL Shell 8.0\\bin\\;C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\Administrator\\AppData\\Roaming\\npm;D:\\nvm;C:\\Program 
Files\\nodejs;D:\\work\\graalvm-jdk-17_windows-x64_bin\\graalvm-jdk-17.0.9+11.1\\bin\\;D:\\work\\graalvm-jdk-17_windows-x64_bin\\graalvm-jdk-17.0.9+11.1\\jre\\bin\\;C:\\Users\\Administrator\\AppData\\Local\\GitHubDesktop\\bin;C:\\Users\\Administrator\\.dotnet\\tools;D:\\work\\mongosh\\;;C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama"
loading library C:\Users\ADMINI~1\AppData\Local\Temp\ollama1513712000\runners\cuda_v11.3\ext_server.dll
time=2024-04-01T08:23:20.407+08:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\ollama1513712000\\runners\\cuda_v11.3\\ext_server.dll"
time=2024-04-01T08:23:20.407+08:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
llama_model_loader: loaded meta data with 24 key-value pairs and 254 tensors from D:\ollama\blobs\sha256-456402914e838a953e0cf80caa6adbe75383d9e63584a964f504a7bbb8f7aad9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma
llama_model_loader: - kv 1: general.name str = gemma-7b-it
llama_model_loader: - kv 2: gemma.context_length u32 = 8192
llama_model_loader: - kv 3: gemma.embedding_length u32 = 3072
llama_model_loader: - kv 4: gemma.block_count u32 = 28
llama_model_loader: - kv 5: gemma.feed_forward_length u32 = 24576
llama_model_loader: - kv 6: gemma.attention.head_count u32 = 16
llama_model_loader: - kv 7: gemma.attention.head_count_kv u32 = 16
llama_model_loader: - kv 8: gemma.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: gemma.attention.key_length u32 = 256
llama_model_loader: - kv 10: gemma.attention.value_length u32 = 256
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 18: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - kv 23: general.file_type u32 = 2
llama_model_loader: - type f32: 57 tensors
llama_model_loader: - type q4_0: 196 tensors
llama_model_loader: - type q8_0: 1 tensors
llm_load_vocab: mismatch in special tokens definition ( 416/256000 vs 260/256000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = gemma
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 256000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_rot = 192
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 24576
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.54 B
llm_load_print_meta: model size = 4.84 GiB (4.87 BPW)
llm_load_print_meta: general.name = gemma-7b-it
llm_load_print_meta: BOS token = 2 '<bos>'
llm_load_print_meta: EOS token = 1 '<eos>'
llm_load_print_meta: UNK token = 3 '<unk>'
llm_load_print_meta: PAD token = 0 '<pad>'
llm_load_print_meta: LF token = 227 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: GeForce RTX 3050 Laptop GPU, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.19 MiB
llm_load_tensors: offloading 11 repeating layers to GPU
llm_load_tensors: offloaded 11/29 layers to GPU
llm_load_tensors: CPU buffer size = 4955.54 MiB
llm_load_tensors: CUDA0 buffer size = 1633.76 MiB
...........................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 544.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 352.00 MiB
llama_new_context_with_model: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 506.00 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 1302.88 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 20.00 MiB
llama_new_context_with_model: graph nodes = 957
llama_new_context_with_model: graph splits = 191
CUDA error: CUBLAS_STATUS_ALLOC_FAILED
current device: 0, in function cublas_handle at C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:659
cublasCreate_v2(&cublas_handles[device])
GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:193: !"CUDA error"
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
GeForce RTX 3050 Laptop GPU
### CPU
Intel
### Other software
_No response_
|
{
"login": "nanshaws",
"id": 98329722,
"node_id": "U_kgDOBdxkeg",
"avatar_url": "https://avatars.githubusercontent.com/u/98329722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nanshaws",
"html_url": "https://github.com/nanshaws",
"followers_url": "https://api.github.com/users/nanshaws/followers",
"following_url": "https://api.github.com/users/nanshaws/following{/other_user}",
"gists_url": "https://api.github.com/users/nanshaws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nanshaws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nanshaws/subscriptions",
"organizations_url": "https://api.github.com/users/nanshaws/orgs",
"repos_url": "https://api.github.com/users/nanshaws/repos",
"events_url": "https://api.github.com/users/nanshaws/events{/privacy}",
"received_events_url": "https://api.github.com/users/nanshaws/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3430/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2387
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2387/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2387/comments
|
https://api.github.com/repos/ollama/ollama/issues/2387/events
|
https://github.com/ollama/ollama/issues/2387
| 2,122,984,249
|
I_kwDOJ0Z1Ps5-iis5
| 2,387
|
[FEATURE]`search` command for the CLI app.
|
{
"login": "rumbleFTW",
"id": 85807431,
"node_id": "MDQ6VXNlcjg1ODA3NDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/85807431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rumbleFTW",
"html_url": "https://github.com/rumbleFTW",
"followers_url": "https://api.github.com/users/rumbleFTW/followers",
"following_url": "https://api.github.com/users/rumbleFTW/following{/other_user}",
"gists_url": "https://api.github.com/users/rumbleFTW/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rumbleFTW/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rumbleFTW/subscriptions",
"organizations_url": "https://api.github.com/users/rumbleFTW/orgs",
"repos_url": "https://api.github.com/users/rumbleFTW/repos",
"events_url": "https://api.github.com/users/rumbleFTW/events{/privacy}",
"received_events_url": "https://api.github.com/users/rumbleFTW/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-02-07T12:59:41
| 2025-01-24T15:46:47
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, one has to look for available models on the website (https://ollama.ai/library) before installing one with
```bash
ollama run <MODEL>
```
OR
```bash
ollama pull <MODEL>
```
It would be better if there were a `search` command in the CLI app itself for searching models in the repository, similar to the `pacman -Ss` command in Arch Linux.
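To sketch the idea, such a `search` command is essentially a case-insensitive substring filter over the library's model names; the hard-coded list below is a stand-in, since a real command would query the registry:

```shell
# Filter a list of model names, pacman -Ss style.
library='llama2
mistral
gemma
codellama'
printf '%s\n' "$library" | grep -i 'llama'
```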
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2387/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2387/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3611
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3611/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3611/comments
|
https://api.github.com/repos/ollama/ollama/issues/3611/events
|
https://github.com/ollama/ollama/issues/3611
| 2,239,072,038
|
I_kwDOJ0Z1Ps6FdYcm
| 3,611
|
Codegemma Instruct failed
|
{
"login": "MrBenzWorld",
"id": 113277019,
"node_id": "U_kgDOBsB4Ww",
"avatar_url": "https://avatars.githubusercontent.com/u/113277019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MrBenzWorld",
"html_url": "https://github.com/MrBenzWorld",
"followers_url": "https://api.github.com/users/MrBenzWorld/followers",
"following_url": "https://api.github.com/users/MrBenzWorld/following{/other_user}",
"gists_url": "https://api.github.com/users/MrBenzWorld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MrBenzWorld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MrBenzWorld/subscriptions",
"organizations_url": "https://api.github.com/users/MrBenzWorld/orgs",
"repos_url": "https://api.github.com/users/MrBenzWorld/repos",
"events_url": "https://api.github.com/users/MrBenzWorld/events{/privacy}",
"received_events_url": "https://api.github.com/users/MrBenzWorld/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-12T04:49:51
| 2024-04-15T19:30:39
| 2024-04-15T19:12:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
`ollama run codegemma:instruct` fails to start.
Other models work fine.
Windows 11,
latest Ollama.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3611/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/108
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/108/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/108/comments
|
https://api.github.com/repos/ollama/ollama/issues/108/events
|
https://github.com/ollama/ollama/issues/108
| 1,810,892,598
|
I_kwDOJ0Z1Ps5r8Ac2
| 108
|
Error after prompting Llama2 on M1.
|
{
"login": "aelder",
"id": 887897,
"node_id": "MDQ6VXNlcjg4Nzg5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/887897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aelder",
"html_url": "https://github.com/aelder",
"followers_url": "https://api.github.com/users/aelder/followers",
"following_url": "https://api.github.com/users/aelder/following{/other_user}",
"gists_url": "https://api.github.com/users/aelder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aelder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aelder/subscriptions",
"organizations_url": "https://api.github.com/users/aelder/orgs",
"repos_url": "https://api.github.com/users/aelder/repos",
"events_url": "https://api.github.com/users/aelder/events{/privacy}",
"received_events_url": "https://api.github.com/users/aelder/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2023-07-19T00:06:19
| 2023-07-27T23:46:30
| 2023-07-27T23:46:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
After starting Ollama and running Llama2, any prompt results in:
`Error: Post "http://127.0.0.1:11434/api/generate": EOF
`
Installed on M1 Macbook Pro
Ollama reports that it is running.
`ollama list` reports llama2:latest.
Is this a memory issue?
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/108/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3314
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3314/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3314/comments
|
https://api.github.com/repos/ollama/ollama/issues/3314/events
|
https://github.com/ollama/ollama/issues/3314
| 2,203,987,470
|
I_kwDOJ0Z1Ps6DXi4O
| 3,314
|
"server stop" and "server status" commands
|
{
"login": "FilkerZero",
"id": 42746731,
"node_id": "MDQ6VXNlcjQyNzQ2NzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/42746731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FilkerZero",
"html_url": "https://github.com/FilkerZero",
"followers_url": "https://api.github.com/users/FilkerZero/followers",
"following_url": "https://api.github.com/users/FilkerZero/following{/other_user}",
"gists_url": "https://api.github.com/users/FilkerZero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FilkerZero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FilkerZero/subscriptions",
"organizations_url": "https://api.github.com/users/FilkerZero/orgs",
"repos_url": "https://api.github.com/users/FilkerZero/repos",
"events_url": "https://api.github.com/users/FilkerZero/events{/privacy}",
"received_events_url": "https://api.github.com/users/FilkerZero/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 16
| 2024-03-23T18:08:55
| 2024-11-20T15:04:10
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I would like to have more control over the ollama server. As it stands, the ollama command line does not provide a convenient way to stop the server once it is running or get the server status. Having a way to get memory, runtime, GPU and CPU statistics would also be a plus.
### How should we solve this?
I suggest adding either new commands or flags to the `serve` command; some examples follow, but it's the functionality, not the particular syntax (option flags vs. commands) I care about:
- `ollama serve --status` - Print server status (running/not running) and perhaps the loaded model and API URL
- `ollama serve --stop` - Stop the server if it is running
- `ollama stop` - Alias for `ollama serve --stop`
- `ollama unload` - Unload the model from memory but leave the server running
- `ollama stats` - Display server memory, runtime, and other statistics (e.g., current and maximum number of connected clients)
### What is the impact of not solving this?
It will remain hard for the average home user to control the ollama server process and determine the resources it is using.
### Anything else?
This appears to be a near duplicate of #3182
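In the meantime, a status check can be approximated against the existing HTTP API. A minimal sketch, assuming a default local server; it probes the `/api/version` endpoint and treats any connection failure as "not running":

```python
import urllib.request
import urllib.error

def ollama_status(host="127.0.0.1", port=11434, timeout=2.0):
    """Return True if an Ollama server answers on host:port, else False."""
    try:
        with urllib.request.urlopen(
            f"http://{host}:{port}/api/version", timeout=timeout
        ) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, DNS failure, etc.
        return False

print("running" if ollama_status() else "not running")
```

This only covers the read-only half of the request (`--status`); stopping or unloading would still need server-side support.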
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3314/reactions",
"total_count": 17,
"+1": 17,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3314/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3028
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3028/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3028/comments
|
https://api.github.com/repos/ollama/ollama/issues/3028/events
|
https://github.com/ollama/ollama/pull/3028
| 2,177,408,310
|
PR_kwDOJ0Z1Ps5pJ3Ou
| 3,028
|
CI release process
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-09T21:21:42
| 2024-03-15T23:40:55
| 2024-03-15T23:40:54
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3028",
"html_url": "https://github.com/ollama/ollama/pull/3028",
"diff_url": "https://github.com/ollama/ollama/pull/3028.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3028.patch",
"merged_at": "2024-03-15T23:40:54"
}
|
This is now fully wired up and tested with tag pushes and completes in ~34 minutes.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3028/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3429
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3429/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3429/comments
|
https://api.github.com/repos/ollama/ollama/issues/3429/events
|
https://github.com/ollama/ollama/issues/3429
| 2,217,165,085
|
I_kwDOJ0Z1Ps6EJ0Ed
| 3,429
|
Add additional language translation layers with special model
|
{
"login": "rvsh2",
"id": 32043169,
"node_id": "MDQ6VXNlcjMyMDQzMTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/32043169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rvsh2",
"html_url": "https://github.com/rvsh2",
"followers_url": "https://api.github.com/users/rvsh2/followers",
"following_url": "https://api.github.com/users/rvsh2/following{/other_user}",
"gists_url": "https://api.github.com/users/rvsh2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rvsh2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rvsh2/subscriptions",
"organizations_url": "https://api.github.com/users/rvsh2/orgs",
"repos_url": "https://api.github.com/users/rvsh2/repos",
"events_url": "https://api.github.com/users/rvsh2/events{/privacy}",
"received_events_url": "https://api.github.com/users/rvsh2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-03-31T20:09:52
| 2024-04-19T15:41:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
Many models are very good at specific tasks but lack support for other languages. Their output in languages other than English is very poor.
### How should we solve this?
The input could first pass through a translation layer with a model dedicated to translating any language into English;
the chosen model would then receive the input in English and produce output in English.
That English output would go through another translation layer to convert it from English back into the original language.
### What is the impact of not solving this?
Users are limited to models that work with their specific language (and even those are not perfect).
### Anything else?
This model could be used for translation:
https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt
That would be awesome functionality.
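The proposed flow can be sketched as a simple wrapper. A minimal sketch with stub functions standing in for the LLM and for mbart-style translation calls; all names here are placeholders, not an actual Ollama API:

```python
def with_translation(model_fn, to_english, from_english):
    """Wrap a text-generation function with translate-in / translate-out layers.

    model_fn, to_english and from_english are placeholders: in practice they
    would call the LLM and a translation model such as mbart-large-50.
    """
    def pipeline(prompt, lang):
        english_prompt = to_english(prompt, src=lang)   # layer 1: any -> en
        english_output = model_fn(english_prompt)       # LLM works in English
        return from_english(english_output, dst=lang)   # layer 2: en -> any
    return pipeline

# Stub components to show the data flow:
echo_model = lambda text: f"answer to: {text}"
identity = lambda text, **_: text

pipe = with_translation(echo_model, identity, identity)
print(pipe("hola", "es"))
```

The design question is where such a wrapper would live (client side vs. built into the server), which is what this request is really about.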
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3429/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/3429/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3466
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3466/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3466/comments
|
https://api.github.com/repos/ollama/ollama/issues/3466/events
|
https://github.com/ollama/ollama/pull/3466
| 2,221,625,570
|
PR_kwDOJ0Z1Ps5rf1kW
| 3,466
|
default head_kv to 1
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-02T23:38:44
| 2024-04-03T17:41:01
| 2024-04-03T17:41:00
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3466",
"html_url": "https://github.com/ollama/ollama/pull/3466",
"diff_url": "https://github.com/ollama/ollama/pull/3466.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3466.patch",
"merged_at": "2024-04-03T17:41:00"
}
|
Some older models do not set this KV, resulting in a KV size estimate of 0.
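For context, a rough illustration of the failure mode; this is not Ollama's actual estimator and the numbers are made up, but it shows why a missing `head_kv` that reads as 0 zeroes the whole KV-cache estimate, and why clamping it to at least 1 keeps the estimate non-zero:

```python
def kv_cache_bytes(n_layer, n_ctx, head_dim, head_kv, bytes_per_elem=2):
    """Rough KV cache size: keys + values for every layer and context slot."""
    return 2 * n_layer * n_ctx * head_dim * head_kv * bytes_per_elem

# An older GGUF where the key is missing and reads back as 0:
print(kv_cache_bytes(32, 2048, 128, head_kv=0))        # whole product collapses
# Clamping to at least 1 avoids the zero estimate:
print(kv_cache_bytes(32, 2048, 128, head_kv=max(0, 1)))
```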
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3466/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6011
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6011/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6011/comments
|
https://api.github.com/repos/ollama/ollama/issues/6011/events
|
https://github.com/ollama/ollama/issues/6011
| 2,433,383,661
|
I_kwDOJ0Z1Ps6RCnzt
| 6,011
|
llama runner process has terminated: exit status 0xc0000005 - snowflake-arctic-embed
|
{
"login": "imkebe",
"id": 4148157,
"node_id": "MDQ6VXNlcjQxNDgxNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4148157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imkebe",
"html_url": "https://github.com/imkebe",
"followers_url": "https://api.github.com/users/imkebe/followers",
"following_url": "https://api.github.com/users/imkebe/following{/other_user}",
"gists_url": "https://api.github.com/users/imkebe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imkebe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imkebe/subscriptions",
"organizations_url": "https://api.github.com/users/imkebe/orgs",
"repos_url": "https://api.github.com/users/imkebe/repos",
"events_url": "https://api.github.com/users/imkebe/events{/privacy}",
"received_events_url": "https://api.github.com/users/imkebe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 12
| 2024-07-27T09:06:02
| 2024-09-25T02:24:16
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This is the #4334 issue again. I have multiple local CPU nodes and use Ollama behind the litellm proxy.
The issue occurs with the embedding call for the **snowflake-arctic-embed** model.
**nomic-embed-text**, for example, seems to work fine.
```litellm.llms.ollama.OllamaError: {"error":"llama runner process has terminated: exit status 0xc0000005"}```
### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama version
0.2.8 - 0.3.0
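A repro sketch that takes the litellm proxy out of the loop by calling Ollama's `/api/embeddings` endpoint directly; the request shape matches the documented API for these versions, while the local address and model name are assumptions for this report:

```python
import json
import urllib.request
import urllib.error

def embed(model, prompt, host="http://127.0.0.1:11434"):
    """Call Ollama's embeddings endpoint directly, bypassing the litellm proxy."""
    req = urllib.request.Request(
        f"{host}/api/embeddings",
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["embedding"]

try:
    print(len(embed("snowflake-arctic-embed", "hello world")))
except (urllib.error.URLError, OSError) as e:
    print("request failed:", e)
```

If the same crash reproduces here, the proxy can be ruled out and the fault isolated to the runner.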
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6011/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2829
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2829/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2829/comments
|
https://api.github.com/repos/ollama/ollama/issues/2829/events
|
https://github.com/ollama/ollama/issues/2829
| 2,160,674,291
|
I_kwDOJ0Z1Ps6AyUXz
| 2,829
|
Use cloudflared Tunnels to publish ollama service ports, with clients returning with no messages
|
{
"login": "online2311",
"id": 15675255,
"node_id": "MDQ6VXNlcjE1Njc1MjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/15675255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/online2311",
"html_url": "https://github.com/online2311",
"followers_url": "https://api.github.com/users/online2311/followers",
"following_url": "https://api.github.com/users/online2311/following{/other_user}",
"gists_url": "https://api.github.com/users/online2311/gists{/gist_id}",
"starred_url": "https://api.github.com/users/online2311/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/online2311/subscriptions",
"organizations_url": "https://api.github.com/users/online2311/orgs",
"repos_url": "https://api.github.com/users/online2311/repos",
"events_url": "https://api.github.com/users/online2311/events{/privacy}",
"received_events_url": "https://api.github.com/users/online2311/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
open
| false
| null |
[] | null | 4
| 2024-02-29T08:17:58
| 2024-11-06T18:17:35
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
cloudflared Tunnels are used to publish the Ollama service port, and the client uses the Enchanted LLM app for the conversation, but no messages are returned. Without cloudflared Tunnels everything works fine. Does Ollama stream responses over WebSockets?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2829/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2379
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2379/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2379/comments
|
https://api.github.com/repos/ollama/ollama/issues/2379/events
|
https://github.com/ollama/ollama/issues/2379
| 2,121,853,829
|
I_kwDOJ0Z1Ps5-eOuF
| 2,379
|
The `qwen:72b-chat-v1.5` model (and likely all the other v1.5 models too) is missing the `rope_frequency_base` value in the GGUF file.
|
{
"login": "jukofyork",
"id": 69222624,
"node_id": "MDQ6VXNlcjY5MjIyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jukofyork",
"html_url": "https://github.com/jukofyork",
"followers_url": "https://api.github.com/users/jukofyork/followers",
"following_url": "https://api.github.com/users/jukofyork/following{/other_user}",
"gists_url": "https://api.github.com/users/jukofyork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jukofyork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jukofyork/subscriptions",
"organizations_url": "https://api.github.com/users/jukofyork/orgs",
"repos_url": "https://api.github.com/users/jukofyork/repos",
"events_url": "https://api.github.com/users/jukofyork/events{/privacy}",
"received_events_url": "https://api.github.com/users/jukofyork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 11
| 2024-02-06T23:30:18
| 2024-07-24T23:11:52
| 2024-07-24T23:11:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I've patched my Ollama to allow setting `rope_frequency_base` in the Modelfile again, so I can fix this via:
```
PARAMETER rope_frequency_base 1000000
```
but it should also be possible to use `gguf-set-metadata` to do the same.
I'm not the only one to notice this; the official GGUF `q5_k_m` and `q2_k` models are also missing the `rope_frequency_base` value:
https://huggingface.co/Qwen/Qwen1.5-72B-Chat-GGUF/discussions/1
> The transformers repo suggested that this model has a ROPE frequency of 1,000,000 while the gguf metadata here has a frequency of 10,000.
I can confirm this does seem to work: without the setting, the model eventually outputs repeating newlines, possibly because the default is 10000 (?), which makes the context 'appear' to the model to fill up 100x faster.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2379/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3307
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3307/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3307/comments
|
https://api.github.com/repos/ollama/ollama/issues/3307/events
|
https://github.com/ollama/ollama/pull/3307
| 2,203,832,843
|
PR_kwDOJ0Z1Ps5qjzHw
| 3,307
|
Bump llama.cpp to b2474
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-23T11:21:53
| 2024-03-25T19:56:14
| 2024-03-25T19:56:14
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3307",
"html_url": "https://github.com/ollama/ollama/pull/3307",
"diff_url": "https://github.com/ollama/ollama/pull/3307.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3307.patch",
"merged_at": "2024-03-25T19:56:14"
}
|
The release just before the ggml-cuda.cu refactoring.
Marking as draft until I have a chance to do more testing (the simple happy path on Mac works).
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3307/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3804
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3804/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3804/comments
|
https://api.github.com/repos/ollama/ollama/issues/3804/events
|
https://github.com/ollama/ollama/pull/3804
| 2,255,220,600
|
PR_kwDOJ0Z1Ps5tSAFm
| 3,804
|
Improve Documentation Security Considerations
|
{
"login": "breadtk",
"id": 1126756,
"node_id": "MDQ6VXNlcjExMjY3NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1126756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/breadtk",
"html_url": "https://github.com/breadtk",
"followers_url": "https://api.github.com/users/breadtk/followers",
"following_url": "https://api.github.com/users/breadtk/following{/other_user}",
"gists_url": "https://api.github.com/users/breadtk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/breadtk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/breadtk/subscriptions",
"organizations_url": "https://api.github.com/users/breadtk/orgs",
"repos_url": "https://api.github.com/users/breadtk/repos",
"events_url": "https://api.github.com/users/breadtk/events{/privacy}",
"received_events_url": "https://api.github.com/users/breadtk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-04-21T19:05:43
| 2024-11-21T10:03:27
| 2024-11-21T10:03:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3804",
"html_url": "https://github.com/ollama/ollama/pull/3804",
"diff_url": "https://github.com/ollama/ollama/pull/3804.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3804.patch",
"merged_at": null
}
|
Change the default `OLLAMA_HOST` example to something more secure and reference the actual underlying format as defined in [client.go](https://github.com/ollama/ollama/blob/62be2050dd83197864d771fe6891fc47486ee6a1/api/client.go#L55)
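For reference, an approximation of that format in Python. This is a sketch of the `[scheme://]host[:port]` rules from client.go, every part optional; the defaults shown are simplifications and IPv6 literals are not handled:

```python
def parse_ollama_host(value, default_host="127.0.0.1", default_port="11434"):
    """Approximate the OLLAMA_HOST format from api/client.go:
    [scheme://]host[:port], with every part optional."""
    scheme, _, rest = value.rpartition("://")
    scheme = scheme or "http"
    host, _, port = rest.partition(":")  # note: breaks on bare IPv6 literals
    return scheme, host or default_host, port or default_port

print(parse_ollama_host("0.0.0.0"))
print(parse_ollama_host("https://example.com:8443"))
```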
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3804/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2349
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2349/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2349/comments
|
https://api.github.com/repos/ollama/ollama/issues/2349/events
|
https://github.com/ollama/ollama/issues/2349
| 2,117,186,045
|
I_kwDOJ0Z1Ps5-MbH9
| 2,349
|
What Modelfile options are used by Chat and what by the Embedding api endpoints
|
{
"login": "tzolov",
"id": 1351573,
"node_id": "MDQ6VXNlcjEzNTE1NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1351573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tzolov",
"html_url": "https://github.com/tzolov",
"followers_url": "https://api.github.com/users/tzolov/followers",
"following_url": "https://api.github.com/users/tzolov/following{/other_user}",
"gists_url": "https://api.github.com/users/tzolov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tzolov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tzolov/subscriptions",
"organizations_url": "https://api.github.com/users/tzolov/orgs",
"repos_url": "https://api.github.com/users/tzolov/repos",
"events_url": "https://api.github.com/users/tzolov/events{/privacy}",
"received_events_url": "https://api.github.com/users/tzolov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-02-04T16:33:26
| 2024-05-11T00:40:19
| 2024-05-11T00:40:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Both the [generate-embeddings](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings) and the [chat completion](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion) API endpoints take the `options` as an input parameter. E.g.
> options: additional model parameters listed in the documentation for the [Modelfile](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values) such as temperature
Additionally the Options definitions in [api/types.go](https://github.com/ollama/ollama/blob/b538dc3858014f94b099730a592751a5454cab0a/api/types.go#L87-L128) includes many [undocumented](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values) options.
I don't think that the embedding endpoint uses parameters like `temperature`, `topP`, or the like?
Is there a clear distinction as to which options are used by the chat endpoint and which by the embedding endpoint? And conversely, which are not?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2349/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2349/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4638
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4638/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4638/comments
|
https://api.github.com/repos/ollama/ollama/issues/4638/events
|
https://github.com/ollama/ollama/pull/4638
| 2,317,125,146
|
PR_kwDOJ0Z1Ps5wjuFP
| 4,638
|
Report better warning on client closed abort of load
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-25T16:24:13
| 2024-05-25T21:32:31
| 2024-05-25T21:32:28
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4638",
"html_url": "https://github.com/ollama/ollama/pull/4638",
"diff_url": "https://github.com/ollama/ollama/pull/4638.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4638.patch",
"merged_at": "2024-05-25T21:32:28"
}
|
If the client closes the connection before we finish loading the model, we abort, so let's make the log message clearer to help users understand this failure mode
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4638/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7583
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7583/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7583/comments
|
https://api.github.com/repos/ollama/ollama/issues/7583/events
|
https://github.com/ollama/ollama/issues/7583
| 2,645,536,519
|
I_kwDOJ0Z1Ps6dr68H
| 7,583
|
How to configure Ollama in Termux to use the local GPU and CPU acceleration model calculation?
|
{
"login": "limited1010",
"id": 23415775,
"node_id": "MDQ6VXNlcjIzNDE1Nzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/23415775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/limited1010",
"html_url": "https://github.com/limited1010",
"followers_url": "https://api.github.com/users/limited1010/followers",
"following_url": "https://api.github.com/users/limited1010/following{/other_user}",
"gists_url": "https://api.github.com/users/limited1010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/limited1010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/limited1010/subscriptions",
"organizations_url": "https://api.github.com/users/limited1010/orgs",
"repos_url": "https://api.github.com/users/limited1010/repos",
"events_url": "https://api.github.com/users/limited1010/events{/privacy}",
"received_events_url": "https://api.github.com/users/limited1010/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-11-09T03:24:34
| 2024-11-09T03:24:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How to configure Ollama in Termux to use the local GPU and CPU acceleration model calculation?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7583/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1904
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1904/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1904/comments
|
https://api.github.com/repos/ollama/ollama/issues/1904/events
|
https://github.com/ollama/ollama/issues/1904
| 2,074,810,052
|
I_kwDOJ0Z1Ps57qxbE
| 1,904
|
zsh: illegal hardware instruction ollama run mistral
|
{
"login": "MagzhanUnited",
"id": 123943870,
"node_id": "U_kgDOB2M7vg",
"avatar_url": "https://avatars.githubusercontent.com/u/123943870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MagzhanUnited",
"html_url": "https://github.com/MagzhanUnited",
"followers_url": "https://api.github.com/users/MagzhanUnited/followers",
"following_url": "https://api.github.com/users/MagzhanUnited/following{/other_user}",
"gists_url": "https://api.github.com/users/MagzhanUnited/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MagzhanUnited/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MagzhanUnited/subscriptions",
"organizations_url": "https://api.github.com/users/MagzhanUnited/orgs",
"repos_url": "https://api.github.com/users/MagzhanUnited/repos",
"events_url": "https://api.github.com/users/MagzhanUnited/events{/privacy}",
"received_events_url": "https://api.github.com/users/MagzhanUnited/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2024-01-10T17:09:35
| 2024-01-10T20:04:23
| 2024-01-10T19:40:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I ran mistral successfully yesterday on my Mac M1. But today I get the following error when I try to run mistral:
zsh: illegal hardware instruction ollama run mistral
|
{
"login": "MagzhanUnited",
"id": 123943870,
"node_id": "U_kgDOB2M7vg",
"avatar_url": "https://avatars.githubusercontent.com/u/123943870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MagzhanUnited",
"html_url": "https://github.com/MagzhanUnited",
"followers_url": "https://api.github.com/users/MagzhanUnited/followers",
"following_url": "https://api.github.com/users/MagzhanUnited/following{/other_user}",
"gists_url": "https://api.github.com/users/MagzhanUnited/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MagzhanUnited/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MagzhanUnited/subscriptions",
"organizations_url": "https://api.github.com/users/MagzhanUnited/orgs",
"repos_url": "https://api.github.com/users/MagzhanUnited/repos",
"events_url": "https://api.github.com/users/MagzhanUnited/events{/privacy}",
"received_events_url": "https://api.github.com/users/MagzhanUnited/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1904/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7563
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7563/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7563/comments
|
https://api.github.com/repos/ollama/ollama/issues/7563/events
|
https://github.com/ollama/ollama/issues/7563
| 2,642,079,648
|
I_kwDOJ0Z1Ps6deu-g
| 7,563
|
LLama3.2 Vision refuses to analyse image
|
{
"login": "I-I-IT",
"id": 78900789,
"node_id": "MDQ6VXNlcjc4OTAwNzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/78900789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/I-I-IT",
"html_url": "https://github.com/I-I-IT",
"followers_url": "https://api.github.com/users/I-I-IT/followers",
"following_url": "https://api.github.com/users/I-I-IT/following{/other_user}",
"gists_url": "https://api.github.com/users/I-I-IT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/I-I-IT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/I-I-IT/subscriptions",
"organizations_url": "https://api.github.com/users/I-I-IT/orgs",
"repos_url": "https://api.github.com/users/I-I-IT/repos",
"events_url": "https://api.github.com/users/I-I-IT/events{/privacy}",
"received_events_url": "https://api.github.com/users/I-I-IT/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 16
| 2024-11-07T20:17:36
| 2024-11-17T14:04:28
| 2024-11-17T14:04:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?

Disclaimer: I have no GPU (Integrated Graphics)
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
0.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7563/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7168
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7168/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7168/comments
|
https://api.github.com/repos/ollama/ollama/issues/7168/events
|
https://github.com/ollama/ollama/issues/7168
| 2,579,964,434
|
I_kwDOJ0Z1Ps6ZxyIS
| 7,168
|
ollama process uses 1 gb of memory when it's idle due to embedded runners
|
{
"login": "kha84",
"id": 110789576,
"node_id": "U_kgDOBpqDyA",
"avatar_url": "https://avatars.githubusercontent.com/u/110789576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kha84",
"html_url": "https://github.com/kha84",
"followers_url": "https://api.github.com/users/kha84/followers",
"following_url": "https://api.github.com/users/kha84/following{/other_user}",
"gists_url": "https://api.github.com/users/kha84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kha84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kha84/subscriptions",
"organizations_url": "https://api.github.com/users/kha84/orgs",
"repos_url": "https://api.github.com/users/kha84/repos",
"events_url": "https://api.github.com/users/kha84/events{/privacy}",
"received_events_url": "https://api.github.com/users/kha84/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-10-10T22:25:55
| 2024-10-22T20:47:36
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
user@magicbook14:~$ ollama --version
ollama version is 0.3.12
user@magicbook14:~$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
```

I'm pretty sure it's some kind of regression, because on previous versions I didn't notice such high memory usage when no models were loaded. The system it's running on is a laptop with an AMD Ryzen 7 5700 APU; there is no discrete GPU, just the one integrated in the CPU
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.12
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7168/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/998
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/998/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/998/comments
|
https://api.github.com/repos/ollama/ollama/issues/998/events
|
https://github.com/ollama/ollama/issues/998
| 1,977,319,411
|
I_kwDOJ0Z1Ps5123_z
| 998
|
After upgrade to 0.1.8, models won't load
|
{
"login": "lestan",
"id": 1471736,
"node_id": "MDQ6VXNlcjE0NzE3MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1471736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lestan",
"html_url": "https://github.com/lestan",
"followers_url": "https://api.github.com/users/lestan/followers",
"following_url": "https://api.github.com/users/lestan/following{/other_user}",
"gists_url": "https://api.github.com/users/lestan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lestan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lestan/subscriptions",
"organizations_url": "https://api.github.com/users/lestan/orgs",
"repos_url": "https://api.github.com/users/lestan/repos",
"events_url": "https://api.github.com/users/lestan/events{/privacy}",
"received_events_url": "https://api.github.com/users/lestan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 8
| 2023-11-04T12:49:50
| 2023-11-07T06:40:57
| 2023-11-07T06:40:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
After updating to 0.1.8 from a fully functioning Ollama install where I was able to successfully run LLaMA 2, Mistral and Zephyr without issues on my Intel MacBook Pro, I am now getting an error:
Error: llama runner exited, you may not have enough available memory to run this model
I was in the middle of testing these 3 models when I noticed the Ollama icon showing that an update was available. Once it updated and restarted, everything stopped working and I kept receiving this error. I closed all other programs and rebooted my laptop, but it didn't help.
Is there an easy way to revert back to 0.1.7?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/998/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4685
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4685/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4685/comments
|
https://api.github.com/repos/ollama/ollama/issues/4685/events
|
https://github.com/ollama/ollama/issues/4685
| 2,321,836,342
|
I_kwDOJ0Z1Ps6KZGk2
| 4,685
|
ollama docker container tags "latest" and "rocm" referencing old containers
|
{
"login": "dpublic",
"id": 7317163,
"node_id": "MDQ6VXNlcjczMTcxNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7317163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dpublic",
"html_url": "https://github.com/dpublic",
"followers_url": "https://api.github.com/users/dpublic/followers",
"following_url": "https://api.github.com/users/dpublic/following{/other_user}",
"gists_url": "https://api.github.com/users/dpublic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dpublic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dpublic/subscriptions",
"organizations_url": "https://api.github.com/users/dpublic/orgs",
"repos_url": "https://api.github.com/users/dpublic/repos",
"events_url": "https://api.github.com/users/dpublic/events{/privacy}",
"received_events_url": "https://api.github.com/users/dpublic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-28T20:11:16
| 2024-05-28T21:14:15
| 2024-05-28T21:14:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
They should be referencing the "0.1.39" and "0.1.39-rocm" containers.
https://hub.docker.com/r/ollama/ollama/tags
### OS
Linux
### GPU
Nvidia
### CPU
Intel, AMD
### Ollama version
0.1.39
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4685/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/451
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/451/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/451/comments
|
https://api.github.com/repos/ollama/ollama/issues/451/events
|
https://github.com/ollama/ollama/pull/451
| 1,877,625,829
|
PR_kwDOJ0Z1Ps5ZWaCX
| 451
|
update readme
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-01T15:26:45
| 2023-09-01T20:44:15
| 2023-09-01T20:44:15
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/451",
"html_url": "https://github.com/ollama/ollama/pull/451",
"diff_url": "https://github.com/ollama/ollama/pull/451.diff",
"patch_url": "https://github.com/ollama/ollama/pull/451.patch",
"merged_at": "2023-09-01T20:44:14"
}
| null |
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/451/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/796
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/796/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/796/comments
|
https://api.github.com/repos/ollama/ollama/issues/796/events
|
https://github.com/ollama/ollama/issues/796
| 1,944,695,030
|
I_kwDOJ0Z1Ps5z6bD2
| 796
|
Support `ppc64le` architecture
|
{
"login": "orkutmuratyilmaz",
"id": 7395916,
"node_id": "MDQ6VXNlcjczOTU5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7395916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orkutmuratyilmaz",
"html_url": "https://github.com/orkutmuratyilmaz",
"followers_url": "https://api.github.com/users/orkutmuratyilmaz/followers",
"following_url": "https://api.github.com/users/orkutmuratyilmaz/following{/other_user}",
"gists_url": "https://api.github.com/users/orkutmuratyilmaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orkutmuratyilmaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orkutmuratyilmaz/subscriptions",
"organizations_url": "https://api.github.com/users/orkutmuratyilmaz/orgs",
"repos_url": "https://api.github.com/users/orkutmuratyilmaz/repos",
"events_url": "https://api.github.com/users/orkutmuratyilmaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/orkutmuratyilmaz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 13
| 2023-10-16T08:45:41
| 2025-01-28T00:07:08
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello all,
Thanks for Ollama, it is a great thing to use:)
I've installed it on my local machine (Manjaro) and it works nicely. After that, I tried to install it on a server with an IBM POWER8NVL CPU, running Ubuntu 18.04 as my latest updated choice. That means I cannot run the install script, because the script requires an AMD64 CPU architecture. So I decided to build it from source.
First, I've installed gcc, cmake, nvidia-cuda-toolkit packages with apt and then, I've installed go with "`snap install go --classic`".
After that, I downloaded Ollama with "`wget https://github.com/jmorganca/ollama/archive/refs/heads/main.zip`" and unzipped it. Then I ran "`go generate ./...`" in the unzipped directory, but it failed with the error message below:
```
go generate ./...
go: downloading gonum.org/v1/gonum v0.13.0
go: downloading github.com/spf13/cobra v1.7.0
go: downloading github.com/olekukonko/tablewriter v0.0.5
go: downloading github.com/dustin/go-humanize v1.0.1
go: downloading github.com/pdevine/readline v1.5.2
go: downloading golang.org/x/term v0.10.0
go: downloading golang.org/x/sync v0.3.0
go: downloading github.com/gin-contrib/cors v1.4.0
go: downloading github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db
go: downloading github.com/mattn/go-runewidth v0.0.14
go: downloading github.com/gin-gonic/gin v1.9.1
go: downloading golang.org/x/crypto v0.10.0
go: downloading golang.org/x/exp v0.0.0-20230817173708-d852ddb80c63
go: downloading github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58
go: downloading github.com/rivo/uniseg v0.2.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading github.com/ugorji/go/codec v1.2.11
go: downloading golang.org/x/net v0.10.0
go: downloading github.com/mattn/go-isatty v0.0.19
go: downloading github.com/pelletier/go-toml/v2 v2.0.8
go: downloading google.golang.org/protobuf v1.30.0
go: downloading github.com/go-playground/validator/v10 v10.14.0
go: downloading golang.org/x/sys v0.11.0
go: downloading github.com/leodido/go-urn v1.2.4
go: downloading github.com/gabriel-vasile/mimetype v1.4.2
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading golang.org/x/text v0.10.0
go: downloading github.com/go-playground/locales v0.14.1
fatal: not a git repository (or any of the parent directories): .git
llm/llama.cpp/generate_linux.go:3: running "git": exit status 128
```
I've also done some searching, but I couldn't find a solution. Do you have any ideas?
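In case it helps: the `fatal: not a git repository` error comes from `go generate` invoking git (the generate step checks out the llama.cpp sources), which only works inside a cloned repository, not an unzipped download. A sketch of the clone-based route:

```shell
# Clone instead of downloading the zip so the .git metadata is present
git clone https://github.com/jmorganca/ollama.git
cd ollama
go generate ./...
go build .
```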
Best,
Orkut
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/796/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3835
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3835/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3835/comments
|
https://api.github.com/repos/ollama/ollama/issues/3835/events
|
https://github.com/ollama/ollama/pull/3835
| 2,257,668,112
|
PR_kwDOJ0Z1Ps5taWBX
| 3,835
|
Trim spaces and quotes from llm lib override
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-23T00:11:52
| 2024-04-23T02:06:57
| 2024-04-23T02:06:54
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3835",
"html_url": "https://github.com/ollama/ollama/pull/3835",
"diff_url": "https://github.com/ollama/ollama/pull/3835.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3835.patch",
"merged_at": "2024-04-23T02:06:54"
}
|
Fixes #3512
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3835/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3835/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3051
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3051/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3051/comments
|
https://api.github.com/repos/ollama/ollama/issues/3051/events
|
https://github.com/ollama/ollama/issues/3051
| 2,178,326,238
|
I_kwDOJ0Z1Ps6B1p7e
| 3,051
|
Error when disk with temp dirs (e.g. `/tmp`) is full
|
{
"login": "hanaidong",
"id": 139938824,
"node_id": "U_kgDOCFdMCA",
"avatar_url": "https://avatars.githubusercontent.com/u/139938824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hanaidong",
"html_url": "https://github.com/hanaidong",
"followers_url": "https://api.github.com/users/hanaidong/followers",
"following_url": "https://api.github.com/users/hanaidong/following{/other_user}",
"gists_url": "https://api.github.com/users/hanaidong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hanaidong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanaidong/subscriptions",
"organizations_url": "https://api.github.com/users/hanaidong/orgs",
"repos_url": "https://api.github.com/users/hanaidong/repos",
"events_url": "https://api.github.com/users/hanaidong/events{/privacy}",
"received_events_url": "https://api.github.com/users/hanaidong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-03-11T06:26:22
| 2024-03-20T15:28:05
| 2024-03-20T15:28:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I tested it, the service failed to start:
<img width="1158" alt="image" src="https://github.com/ollama/ollama/assets/139938824/57023fe6-cd1a-4a14-8bc8-3755c8b562d3">
This causes systemd to constantly restart the service, downloading many temporary files that are never deleted:
<img width="669" alt="image" src="https://github.com/ollama/ollama/assets/139938824/7d29f9b1-92cf-4b29-b3dc-d47e4f021c0f">
The key error:
```
Error: unable to initialize llm library copy payload llama.cpp/build/linux/x86_64/cuda_v11/lib/libcublasLt.so.11.gz: gzip: invalid checksum
```
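A workaround that has helped in similar reports is to check how full the temp filesystem is, then point the server's temp dir at a disk with free space before starting it (the `ollama-tmp` path below is just an example):

```shell
# Check free space on the filesystem backing the default temp dir
df -h "${TMPDIR:-/tmp}"

# Create a scratch dir on a disk with room and tell ollama to use it
export OLLAMA_TMPDIR="$HOME/ollama-tmp"
mkdir -p "$OLLAMA_TMPDIR"
```

`OLLAMA_TMPDIR` is the env var the server reports in its startup config line, so it can be set in the systemd unit as well.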
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3051/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8318
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8318/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8318/comments
|
https://api.github.com/repos/ollama/ollama/issues/8318/events
|
https://github.com/ollama/ollama/issues/8318
| 2,770,477,017
|
I_kwDOJ0Z1Ps6lIh_Z
| 8,318
|
context limit from user settings not actually applied
|
{
"login": "pmffromspace",
"id": 108571752,
"node_id": "U_kgDOBnisaA",
"avatar_url": "https://avatars.githubusercontent.com/u/108571752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pmffromspace",
"html_url": "https://github.com/pmffromspace",
"followers_url": "https://api.github.com/users/pmffromspace/followers",
"following_url": "https://api.github.com/users/pmffromspace/following{/other_user}",
"gists_url": "https://api.github.com/users/pmffromspace/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pmffromspace/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pmffromspace/subscriptions",
"organizations_url": "https://api.github.com/users/pmffromspace/orgs",
"repos_url": "https://api.github.com/users/pmffromspace/repos",
"events_url": "https://api.github.com/users/pmffromspace/events{/privacy}",
"received_events_url": "https://api.github.com/users/pmffromspace/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-06T11:42:16
| 2025-01-06T14:00:35
| 2025-01-06T14:00:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I think this issue is still persisting:
> "I noticed that modifying the Context Length in Settings - Advanced Parameters doesn't actually pass that parameter in the request. Only changing the Context Length in Chat Controls - Advanced Params actually sends the parameter. This seems to be a bug in OpenWebUI."
Original issue: https://github.com/ollama/ollama/issues/6026#issuecomment-2277915109
### OS
Docker
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.7
### Open Webui Version
0.5.4
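One way to tell an Open WebUI bug from an Ollama bug is to send `num_ctx` to the Ollama API directly and watch the server log for the resulting context size; if that works, the parameter is being dropped on the client side. A minimal call (server address and model name are assumptions):

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{ "role": "user", "content": "hello" }],
  "options": { "num_ctx": 8192 }
}'
```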
|
{
"login": "pmffromspace",
"id": 108571752,
"node_id": "U_kgDOBnisaA",
"avatar_url": "https://avatars.githubusercontent.com/u/108571752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pmffromspace",
"html_url": "https://github.com/pmffromspace",
"followers_url": "https://api.github.com/users/pmffromspace/followers",
"following_url": "https://api.github.com/users/pmffromspace/following{/other_user}",
"gists_url": "https://api.github.com/users/pmffromspace/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pmffromspace/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pmffromspace/subscriptions",
"organizations_url": "https://api.github.com/users/pmffromspace/orgs",
"repos_url": "https://api.github.com/users/pmffromspace/repos",
"events_url": "https://api.github.com/users/pmffromspace/events{/privacy}",
"received_events_url": "https://api.github.com/users/pmffromspace/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8318/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1538
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1538/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1538/comments
|
https://api.github.com/repos/ollama/ollama/issues/1538/events
|
https://github.com/ollama/ollama/issues/1538
| 2,042,842,121
|
I_kwDOJ0Z1Ps55w0wJ
| 1,538
|
How to import "MPTForCausalLM" models?
|
{
"login": "ansutung",
"id": 26163387,
"node_id": "MDQ6VXNlcjI2MTYzMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/26163387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ansutung",
"html_url": "https://github.com/ansutung",
"followers_url": "https://api.github.com/users/ansutung/followers",
"following_url": "https://api.github.com/users/ansutung/following{/other_user}",
"gists_url": "https://api.github.com/users/ansutung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ansutung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ansutung/subscriptions",
"organizations_url": "https://api.github.com/users/ansutung/orgs",
"repos_url": "https://api.github.com/users/ansutung/repos",
"events_url": "https://api.github.com/users/ansutung/events{/privacy}",
"received_events_url": "https://api.github.com/users/ansutung/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-12-15T03:47:12
| 2024-03-11T18:56:28
| 2024-03-11T18:53:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello everyone,
I'm not sure if I can use the model with the "MPTForCausalLM" architecture with Ollama. I tried to import it following the instructions here: https://github.com/jmorganca/ollama/blob/main/docs/import.md
The import process was successful, but when running the new model with `ollama run example "What is your favourite condiment?"`, I encountered an error.
I am using models from this source:
https://huggingface.co/vinai/PhoGPT-7B5
https://huggingface.co/vinai/PhoGPT-7B5-Instruct

Thanks
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1538/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1608
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1608/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1608/comments
|
https://api.github.com/repos/ollama/ollama/issues/1608/events
|
https://github.com/ollama/ollama/issues/1608
| 2,049,028,865
|
I_kwDOJ0Z1Ps56IbMB
| 1,608
|
Is it possible to use ollama to generate embeddings?
|
{
"login": "almosnow",
"id": 4924797,
"node_id": "MDQ6VXNlcjQ5MjQ3OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4924797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/almosnow",
"html_url": "https://github.com/almosnow",
"followers_url": "https://api.github.com/users/almosnow/followers",
"following_url": "https://api.github.com/users/almosnow/following{/other_user}",
"gists_url": "https://api.github.com/users/almosnow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/almosnow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/almosnow/subscriptions",
"organizations_url": "https://api.github.com/users/almosnow/orgs",
"repos_url": "https://api.github.com/users/almosnow/repos",
"events_url": "https://api.github.com/users/almosnow/events{/privacy}",
"received_events_url": "https://api.github.com/users/almosnow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-12-19T16:37:51
| 2023-12-19T20:00:16
| 2023-12-19T20:00:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Sorry about the noob-ish question, but I'm not familiar with how Ollama does things.
I have a bunch of text snippets that I'd like to generate embeddings for. Could Ollama (any model, I don't mind which at the moment) be used for this?
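For what it's worth, Ollama's REST API does include an embeddings endpoint; a minimal call looks like this (the model name is just an example, and any pulled model can be substituted):

```shell
curl http://localhost:11434/api/embeddings -d '{
  "model": "llama2",
  "prompt": "The sky is blue because of Rayleigh scattering"
}'
```

The response contains an `embedding` array of floats that can be stored or compared directly.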
|
{
"login": "almosnow",
"id": 4924797,
"node_id": "MDQ6VXNlcjQ5MjQ3OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4924797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/almosnow",
"html_url": "https://github.com/almosnow",
"followers_url": "https://api.github.com/users/almosnow/followers",
"following_url": "https://api.github.com/users/almosnow/following{/other_user}",
"gists_url": "https://api.github.com/users/almosnow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/almosnow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/almosnow/subscriptions",
"organizations_url": "https://api.github.com/users/almosnow/orgs",
"repos_url": "https://api.github.com/users/almosnow/repos",
"events_url": "https://api.github.com/users/almosnow/events{/privacy}",
"received_events_url": "https://api.github.com/users/almosnow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1608/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6669
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6669/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6669/comments
|
https://api.github.com/repos/ollama/ollama/issues/6669/events
|
https://github.com/ollama/ollama/issues/6669
| 2,509,653,076
|
I_kwDOJ0Z1Ps6VlkRU
| 6,669
|
Ubuntu GPU not used
|
{
"login": "Andrii-suncor",
"id": 145366805,
"node_id": "U_kgDOCKofFQ",
"avatar_url": "https://avatars.githubusercontent.com/u/145366805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Andrii-suncor",
"html_url": "https://github.com/Andrii-suncor",
"followers_url": "https://api.github.com/users/Andrii-suncor/followers",
"following_url": "https://api.github.com/users/Andrii-suncor/following{/other_user}",
"gists_url": "https://api.github.com/users/Andrii-suncor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Andrii-suncor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andrii-suncor/subscriptions",
"organizations_url": "https://api.github.com/users/Andrii-suncor/orgs",
"repos_url": "https://api.github.com/users/Andrii-suncor/repos",
"events_url": "https://api.github.com/users/Andrii-suncor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Andrii-suncor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-06T06:46:20
| 2024-09-06T07:52:30
| 2024-09-06T07:52:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The GPU is not used when running Ollama.
Start log
```
ollama start
2024/09/06 06:40:42 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-09-06T06:40:42.866Z level=INFO source=images.go:753 msg="total blobs: 5"
time=2024-09-06T06:40:42.867Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-06T06:40:42.867Z level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.9)"
time=2024-09-06T06:40:42.867Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3378081407/runners
time=2024-09-06T06:40:49.224Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11 cuda_v12 rocm_v60102 cpu cpu_avx cpu_avx2]"
time=2024-09-06T06:40:49.224Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-09-06T06:40:49.224Z level=WARN source=gpu.go:222 msg="CPU does not have minimum vector extensions, GPU inference disabled" required=avx detected="no vector extensions"
time=2024-09-06T06:40:49.224Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant="no vector extensions" compute="" driver=0.0 name="" total="40.2 GiB" available="39.2 GiB"
[GIN] 2024/09/06 - 06:41:19 | 200 | 22.99µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/09/06 - 06:41:19 | 200 | 13.003297ms | 127.0.0.1 | POST "/api/show"
time=2024-09-06T06:41:19.646Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[39.2 GiB]" memory.required.full="5.8 GiB" memory.required.partial="0 B" memory.required.kv="1.0 GiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-09-06T06:41:19.647Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama3378081407/runners/cpu/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 45485"
time=2024-09-06T06:41:19.647Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-09-06T06:41:19.647Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2024-09-06T06:41:19.648Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="1e6f655" tid="137084946550976" timestamp=1725604879
INFO [main] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="137084946550976" timestamp=1725604879 total_threads=10
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="9" port="45485" tid="137084946550976" timestamp=1725604879
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
llama_model_loader: - kv 5: general.size_label str = 8B
llama_model_loader: - kv 6: general.license str = llama3.1
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 32
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 4096
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 13: llama.attention.head_count u32 = 32
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: general.file_type u32 = 2
llama_model_loader: - kv 18: llama.vocab_size u32 = 128256
llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - type f32: 66 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-09-06T06:41:19.899Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.14 MiB
llm_load_tensors: CPU buffer size = 4437.81 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.02 MiB
llama_new_context_with_model: CPU compute buffer size = 560.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="137084946550976" timestamp=1725604882
time=2024-09-06T06:41:22.659Z level=INFO source=server.go:630 msg="llama runner started in 3.01 seconds"
[GIN] 2024/09/06 - 06:41:22 | 200 | 3.048128204s | 127.0.0.1 | POST "/api/chat"
```
nvidia-smi

nvtop

ollama ps

ollama -v

dmesg | grep -i nvrm

dmesg | grep -i nvidia

If additional information is needed, I can provide it later.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.9
|
{
"login": "Andrii-suncor",
"id": 145366805,
"node_id": "U_kgDOCKofFQ",
"avatar_url": "https://avatars.githubusercontent.com/u/145366805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Andrii-suncor",
"html_url": "https://github.com/Andrii-suncor",
"followers_url": "https://api.github.com/users/Andrii-suncor/followers",
"following_url": "https://api.github.com/users/Andrii-suncor/following{/other_user}",
"gists_url": "https://api.github.com/users/Andrii-suncor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Andrii-suncor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andrii-suncor/subscriptions",
"organizations_url": "https://api.github.com/users/Andrii-suncor/orgs",
"repos_url": "https://api.github.com/users/Andrii-suncor/repos",
"events_url": "https://api.github.com/users/Andrii-suncor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Andrii-suncor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6669/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/169
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/169/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/169/comments
|
https://api.github.com/repos/ollama/ollama/issues/169/events
|
https://github.com/ollama/ollama/issues/169
| 1,816,516,727
|
I_kwDOJ0Z1Ps5sRdh3
| 169
|
How to enter multiline text
|
{
"login": "abulka",
"id": 11467530,
"node_id": "MDQ6VXNlcjExNDY3NTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/11467530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abulka",
"html_url": "https://github.com/abulka",
"followers_url": "https://api.github.com/users/abulka/followers",
"following_url": "https://api.github.com/users/abulka/following{/other_user}",
"gists_url": "https://api.github.com/users/abulka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abulka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abulka/subscriptions",
"organizations_url": "https://api.github.com/users/abulka/orgs",
"repos_url": "https://api.github.com/users/abulka/repos",
"events_url": "https://api.github.com/users/abulka/events{/privacy}",
"received_events_url": "https://api.github.com/users/abulka/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 10
| 2023-07-22T00:50:05
| 2024-06-23T23:12:21
| 2023-08-02T18:51:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How to enter multiline text?
When I hit Enter, the input prompt terminates. In ChatGPT I can hit Shift+Enter to begin a new line, but not with ollama.
Even pasting multiline text works in ChatGPT but not with ollama.
A workaround seems to be to pipe text files in - see #161
Shouldn't there be a multiline mode or something? Like https://github.com/ggerganov/llama.cpp/issues/1382
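For reference, the piping workaround mentioned above looks roughly like this (the model name `llama2` is just an example):

```shell
# Workaround: put the multiline prompt in a file and pipe it in,
# instead of typing it interactively.
printf 'First line of the prompt.\nSecond line of the prompt.\n' > prompt.txt
ollama run llama2 < prompt.txt
```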
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/169/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/169/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5/comments
|
https://api.github.com/repos/ollama/ollama/issues/5/events
|
https://github.com/ollama/ollama/issues/5
| 1,777,853,979
|
I_kwDOJ0Z1Ps5p9-Yb
| 5
|
model selection
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5675428184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUkgpWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/app",
"name": "app",
"color": "000000",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 0
| 2023-06-27T22:44:57
| 2023-07-24T20:46:59
| 2023-07-24T20:46:59
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I want to change the model. the only way to do it now is to kill it and restart. is there another way?
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2016
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2016/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2016/comments
|
https://api.github.com/repos/ollama/ollama/issues/2016/events
|
https://github.com/ollama/ollama/pull/2016
| 2,084,275,128
|
PR_kwDOJ0Z1Ps5kNghC
| 2,016
|
add Open Interpreter to README
|
{
"login": "MikeBirdTech",
"id": 63524998,
"node_id": "MDQ6VXNlcjYzNTI0OTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/63524998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MikeBirdTech",
"html_url": "https://github.com/MikeBirdTech",
"followers_url": "https://api.github.com/users/MikeBirdTech/followers",
"following_url": "https://api.github.com/users/MikeBirdTech/following{/other_user}",
"gists_url": "https://api.github.com/users/MikeBirdTech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MikeBirdTech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MikeBirdTech/subscriptions",
"organizations_url": "https://api.github.com/users/MikeBirdTech/orgs",
"repos_url": "https://api.github.com/users/MikeBirdTech/repos",
"events_url": "https://api.github.com/users/MikeBirdTech/events{/privacy}",
"received_events_url": "https://api.github.com/users/MikeBirdTech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-16T15:30:38
| 2024-01-18T22:51:02
| 2024-01-18T21:59:39
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2016",
"html_url": "https://github.com/ollama/ollama/pull/2016",
"diff_url": "https://github.com/ollama/ollama/pull/2016.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2016.patch",
"merged_at": "2024-01-18T21:59:39"
}
|
Adding Open Interpreter to the list of Extensions & Plugins. Includes a link to the OI documentation explaining how to use Ollama to power OI.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2016/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8681
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8681/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8681/comments
|
https://api.github.com/repos/ollama/ollama/issues/8681/events
|
https://github.com/ollama/ollama/pull/8681
| 2,819,666,702
|
PR_kwDOJ0Z1Ps6JcEt2
| 8,681
|
Remove hard-coded GIN mode
|
{
"login": "yoonsio",
"id": 24367477,
"node_id": "MDQ6VXNlcjI0MzY3NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/24367477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoonsio",
"html_url": "https://github.com/yoonsio",
"followers_url": "https://api.github.com/users/yoonsio/followers",
"following_url": "https://api.github.com/users/yoonsio/following{/other_user}",
"gists_url": "https://api.github.com/users/yoonsio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoonsio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoonsio/subscriptions",
"organizations_url": "https://api.github.com/users/yoonsio/orgs",
"repos_url": "https://api.github.com/users/yoonsio/repos",
"events_url": "https://api.github.com/users/yoonsio/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoonsio/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-30T01:02:26
| 2025-01-30T01:07:02
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8681",
"html_url": "https://github.com/ollama/ollama/pull/8681",
"diff_url": "https://github.com/ollama/ollama/pull/8681.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8681.patch",
"merged_at": null
}
|
## Context
https://github.com/ollama/ollama/issues/8682: Gin mode is hard-coded to `gin.DebugMode` and the server displays this log on start up.
```
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
```
## Changes
This PR removes hard-coded `gin.DebugMode` from the source code, which allows users to set the desired `GIN_MODE` via environment variable without modifying the source code.
Gin v1.10.0 loads `GIN_MODE` from the [environment variable](https://github.com/gin-gonic/gin/blob/v1.10.0/mode.go#L16-L25) and defaults to debug mode when the variable is unset.
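Assuming the server picks up the standard Gin environment variable after this change, release mode could then be enabled at launch without touching the source (a sketch, not a tested invocation):

```shell
# GIN_MODE is read by Gin itself; no Ollama-specific flag is involved.
# Setting it to "release" suppresses the [GIN-debug] warning on startup.
GIN_MODE=release ollama serve
```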
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8681/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4243
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4243/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4243/comments
|
https://api.github.com/repos/ollama/ollama/issues/4243/events
|
https://github.com/ollama/ollama/issues/4243
| 2,284,421,390
|
I_kwDOJ0Z1Ps6IKYEO
| 4,243
|
moondream's abnormal reply
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-08T00:01:45
| 2024-09-10T07:42:11
| 2024-05-09T08:57:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
<img width="851" alt="截屏2024-05-08 08 00 26" src="https://github.com/ollama/ollama/assets/146583103/4442ec59-4131-43de-b03c-3154d6c13fa2">
Shown in the screenshot above.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.34
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4243/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1974
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1974/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1974/comments
|
https://api.github.com/repos/ollama/ollama/issues/1974/events
|
https://github.com/ollama/ollama/pull/1974
| 2,080,190,886
|
PR_kwDOJ0Z1Ps5j_0YA
| 1,974
|
add `gcc -lstdc++` flag for linux cpu
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-13T08:52:55
| 2024-01-13T08:53:00
| 2024-01-13T08:53:00
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1974",
"html_url": "https://github.com/ollama/ollama/pull/1974",
"diff_url": "https://github.com/ollama/ollama/pull/1974.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1974.patch",
"merged_at": "2024-01-13T08:53:00"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1974/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3537
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3537/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3537/comments
|
https://api.github.com/repos/ollama/ollama/issues/3537/events
|
https://github.com/ollama/ollama/issues/3537
| 2,230,718,203
|
I_kwDOJ0Z1Ps6E9g77
| 3,537
|
Error: no FROM line for the model was specified
|
{
"login": "hadoop2xu",
"id": 48076281,
"node_id": "MDQ6VXNlcjQ4MDc2Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/48076281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadoop2xu",
"html_url": "https://github.com/hadoop2xu",
"followers_url": "https://api.github.com/users/hadoop2xu/followers",
"following_url": "https://api.github.com/users/hadoop2xu/following{/other_user}",
"gists_url": "https://api.github.com/users/hadoop2xu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadoop2xu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadoop2xu/subscriptions",
"organizations_url": "https://api.github.com/users/hadoop2xu/orgs",
"repos_url": "https://api.github.com/users/hadoop2xu/repos",
"events_url": "https://api.github.com/users/hadoop2xu/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadoop2xu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-04-08T09:33:32
| 2024-04-08T10:27:14
| 2024-04-08T10:27:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The Modelfile is as follows:
FROM converted.bin
TEMPLATE "[INST] {{ .Prompt }} [/INST]"
Here, converted.bin was converted from a Hugging Face model.
Running `ollama create example -f Modelfile` fails with:
Error: no FROM line for the model was specified
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "hadoop2xu",
"id": 48076281,
"node_id": "MDQ6VXNlcjQ4MDc2Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/48076281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadoop2xu",
"html_url": "https://github.com/hadoop2xu",
"followers_url": "https://api.github.com/users/hadoop2xu/followers",
"following_url": "https://api.github.com/users/hadoop2xu/following{/other_user}",
"gists_url": "https://api.github.com/users/hadoop2xu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadoop2xu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadoop2xu/subscriptions",
"organizations_url": "https://api.github.com/users/hadoop2xu/orgs",
"repos_url": "https://api.github.com/users/hadoop2xu/repos",
"events_url": "https://api.github.com/users/hadoop2xu/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadoop2xu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3537/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/749
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/749/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/749/comments
|
https://api.github.com/repos/ollama/ollama/issues/749/events
|
https://github.com/ollama/ollama/issues/749
| 1,936,042,964
|
I_kwDOJ0Z1Ps5zZavU
| 749
|
unable to terminate session in TUI
|
{
"login": "praveenc",
"id": 1090396,
"node_id": "MDQ6VXNlcjEwOTAzOTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1090396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/praveenc",
"html_url": "https://github.com/praveenc",
"followers_url": "https://api.github.com/users/praveenc/followers",
"following_url": "https://api.github.com/users/praveenc/following{/other_user}",
"gists_url": "https://api.github.com/users/praveenc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/praveenc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/praveenc/subscriptions",
"organizations_url": "https://api.github.com/users/praveenc/orgs",
"repos_url": "https://api.github.com/users/praveenc/repos",
"events_url": "https://api.github.com/users/praveenc/events{/privacy}",
"received_events_url": "https://api.github.com/users/praveenc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2023-10-10T19:11:33
| 2023-10-31T14:25:18
| 2023-10-26T23:30:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I pasted text into the prompt that contained many newline characters. It seems the session treats each line as a separate input prompt and goes on to generate a response for each one. I've tried `Ctrl-c`, `Ctrl-z`, and `Ctrl-d`, but the prompt wouldn't terminate, essentially locking up the terminal session.
```shell
ollama run mistral
>>> Summarize this doc for me """Use Docker containers to build models
Makes extensive use of Docker containers for build and runtime tasks. Provides pre-built Docker images for its built-in algorithms and the supported deep learning frameworks used for training and inference. Using containers, you can train machine learning algorithms and deploy models quickly and reliably at any scale.
The topics in this section show how to deploy these containers for your own use cases. .
For information about how to bring your own containers for use , see Bring your own Docker image
```
I couldn't close the opening quotes (`"""`) because the model had already started generating text.
It would be nice to handle this case.
_p.s.: it seems the `Ctrl-C`/`Ctrl-D` signals were queued at the end..._
```shell
...
>>> ^C
Use Ctrl-D or /bye to exit.
>>> ^C
Use Ctrl-D or /bye to exit.
>>> ^C
Use Ctrl-D or /bye to exit.
>>> ^C
Use Ctrl-D or /bye to exit.
>>> ^C
Use Ctrl-D or /bye to exit.
>>> ^C
Use Ctrl-D or /bye to exit.
>>> ^C
Use Ctrl-D or /bye to exit.
>>> ^C
Use Ctrl-D or /bye to exit.
>>> ^D
```
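A minimal sketch of the buffering behavior being requested here: keep accumulating pasted lines while a `"""` block is still open, and only submit once it closes. This is illustrative Python (the actual REPL is written in Go), and `read_prompt` is a hypothetical helper, not Ollama code:

```python
def read_prompt(lines):
    """Accumulate input lines into one prompt while a triple-quote
    block is open; submit only when every block has been closed."""
    buffer, prompts = [], []
    open_quotes = False
    for line in lines:
        # Toggle the open/closed state once per triple-quote marker.
        for _ in range(line.count('"""')):
            open_quotes = not open_quotes
        buffer.append(line)
        if not open_quotes:
            prompts.append("\n".join(buffer))
            buffer = []
    return prompts

# The pasted document above would become a single prompt:
pasted = ['Summarize this """doc', 'line two of the doc', 'end of doc"""']
print(len(read_prompt(pasted)))  # 1
```

With this logic, the paste in the report would be captured as one prompt instead of a stream of line-by-line inputs.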
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/749/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7890
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7890/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7890/comments
|
https://api.github.com/repos/ollama/ollama/issues/7890/events
|
https://github.com/ollama/ollama/issues/7890
| 2,707,109,182
|
I_kwDOJ0Z1Ps6hWzU-
| 7,890
|
Tools in modelfile
|
{
"login": "Put-to",
"id": 86911099,
"node_id": "MDQ6VXNlcjg2OTExMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/86911099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Put-to",
"html_url": "https://github.com/Put-to",
"followers_url": "https://api.github.com/users/Put-to/followers",
"following_url": "https://api.github.com/users/Put-to/following{/other_user}",
"gists_url": "https://api.github.com/users/Put-to/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Put-to/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Put-to/subscriptions",
"organizations_url": "https://api.github.com/users/Put-to/orgs",
"repos_url": "https://api.github.com/users/Put-to/repos",
"events_url": "https://api.github.com/users/Put-to/events{/privacy}",
"received_events_url": "https://api.github.com/users/Put-to/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-30T10:02:21
| 2024-12-02T07:56:35
| 2024-12-02T07:56:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It is a hassle to provide tools in the API call every time. When creating a custom model, it should be possible to define all tools in the model file.
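Until tools can live in the Modelfile, they have to accompany every request. A hedged sketch of building such a request body for Ollama's `/api/chat` endpoint (the `tools` field follows the OpenAI-style function schema; `get_weather` is a made-up example tool, and no network call is made here):

```python
def chat_payload(model, messages, tools):
    """Build a /api/chat request body, attaching the tool definitions
    that today must be repeated on every call."""
    return {"model": model, "messages": messages, "tools": tools, "stream": False}

# Hypothetical tool definition in the OpenAI-style function schema.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = chat_payload(
    "llama3.1",
    [{"role": "user", "content": "Weather in Paris?"}],
    [weather_tool],
)
print(payload["tools"][0]["function"]["name"])  # get_weather
```

Baking this list into the Modelfile, as requested, would let clients drop the `tools` argument entirely.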
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7890/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1894
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1894/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1894/comments
|
https://api.github.com/repos/ollama/ollama/issues/1894/events
|
https://github.com/ollama/ollama/issues/1894
| 2,074,339,852
|
I_kwDOJ0Z1Ps57o-oM
| 1,894
|
Llama Guard models
|
{
"login": "kylemclaren",
"id": 3727384,
"node_id": "MDQ6VXNlcjM3MjczODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3727384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylemclaren",
"html_url": "https://github.com/kylemclaren",
"followers_url": "https://api.github.com/users/kylemclaren/followers",
"following_url": "https://api.github.com/users/kylemclaren/following{/other_user}",
"gists_url": "https://api.github.com/users/kylemclaren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylemclaren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylemclaren/subscriptions",
"organizations_url": "https://api.github.com/users/kylemclaren/orgs",
"repos_url": "https://api.github.com/users/kylemclaren/repos",
"events_url": "https://api.github.com/users/kylemclaren/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylemclaren/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-01-10T13:06:19
| 2024-11-20T20:08:33
| 2024-10-23T17:36:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/meta-llama/LlamaGuard-7b
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1894/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1894/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2747
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2747/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2747/comments
|
https://api.github.com/repos/ollama/ollama/issues/2747/events
|
https://github.com/ollama/ollama/issues/2747
| 2,152,869,809
|
I_kwDOJ0Z1Ps6AUi-x
| 2,747
|
SIGFPE: floating-point exception during model initialization
|
{
"login": "mitar",
"id": 585279,
"node_id": "MDQ6VXNlcjU4NTI3OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/585279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitar",
"html_url": "https://github.com/mitar",
"followers_url": "https://api.github.com/users/mitar/followers",
"following_url": "https://api.github.com/users/mitar/following{/other_user}",
"gists_url": "https://api.github.com/users/mitar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mitar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitar/subscriptions",
"organizations_url": "https://api.github.com/users/mitar/orgs",
"repos_url": "https://api.github.com/users/mitar/repos",
"events_url": "https://api.github.com/users/mitar/events{/privacy}",
"received_events_url": "https://api.github.com/users/mitar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-25T17:48:53
| 2024-05-10T01:15:18
| 2024-05-10T01:15:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I get `SIGFPE: floating-point exception` during model initialization:
```
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from .../.ollama/models/blobs/sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: CPU buffer size = 3647.87 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 4
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 4.00 MiB
llama_new_context_with_model: KV self size = 4.00 MiB, K (f32): 2.00 MiB, V (f32): 2.00 MiB
llama_new_context_with_model: CPU input buffer size = 0.00 MiB
SIGFPE: floating-point exception
PC=0x7f787963a686 m=9 sigcode=1
signal arrived during cgo execution
instruction bytes: 0x49 0xf7 0x7e 0x18 0x48 0x85 0xd2 0x75 0xa5 0x49 0x8b 0x45 0x20 0x48 0x99 0x49
goroutine 1 [syscall]:
runtime.cgocall(0x98b4e0, 0xc0004c51e0)
/usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc0004c51b8 sp=0xc0004c5180 pc=0x40ab6b
github.com/jmorganca/ollama/llm._Cfunc_dyn_llama_server_init({0x7f7868001380, 0x7f78795182f0, 0x7f787950f760, 0x7f7879511160, 0x7f7879520be0, 0x7f7879513a00, 0x7f7879514040, 0x7f787950f810, 0x7f7879519360, 0x7f7879519eb0, ...}, ...)
_cgo_gotypes.go:286 +0x45 fp=0xc0004c51e0 sp=0xc0004c51b8 pc=0x746525
github.com/jmorganca/ollama/llm.newDynExtServer.func7(0xac1b7a?, 0xc?)
.../ollama/llm/dyn_ext_server.go:153 +0xef fp=0xc0004c52d0 sp=0xc0004c51e0 pc=0x747a6f
github.com/jmorganca/ollama/llm.newDynExtServer({0xc0003af940, 0x3f}, {0xc000112ea0, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
.../ollama/llm/dyn_ext_server.go:153 +0xa65 fp=0xc0004c5570 sp=0xc0004c52d0 pc=0x747705
github.com/jmorganca/ollama/llm.newLlmServer({{_, _, _}, {_, _}, {_, _}}, {_, _}, {0xc000112ea0, ...}, ...)
.../ollama/llm/llm.go:158 +0x425 fp=0xc0004c5730 sp=0xc0004c5570 pc=0x743e65
github.com/jmorganca/ollama/llm.New({0xacba2e, 0x25}, {0xc000112ea0, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
.../ollama/llm/llm.go:123 +0x713 fp=0xc0004c59b0 sp=0xc0004c5730 pc=0x7437d3
```
I suspect the issue is `CPU input buffer size = 0.00 MiB`. I am not sure why the input buffer size is 0; presumably some code fails after that.
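As a sanity check on the log above, the printed KV-cache sizes can be reproduced from the logged metadata (n_ctx = 4, n_layer = 32, n_embd_k_gqa = 4096, f32 elements of 4 bytes each); the unusually tiny n_ctx is also consistent with an input buffer rounding down to 0.00 MiB:

```python
# Back-of-the-envelope check against the llama.cpp log lines above.
n_ctx, n_layer, n_embd_k_gqa = 4, 32, 4096
bytes_per_f32 = 4

k_bytes = n_ctx * n_layer * n_embd_k_gqa * bytes_per_f32
v_bytes = k_bytes  # symmetric here, since n_embd_v_gqa == n_embd_k_gqa

print(k_bytes / 2**20)              # 2.0 -> "K (f32): 2.00 MiB"
print((k_bytes + v_bytes) / 2**20)  # 4.0 -> "KV self size = 4.00 MiB"
```

An integer division by a zero-sized quantity in the native code would raise exactly the SIGFPE seen in the stack trace, though which division fires is not determined here.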
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2747/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2747/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/933
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/933/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/933/comments
|
https://api.github.com/repos/ollama/ollama/issues/933/events
|
https://github.com/ollama/ollama/issues/933
| 1,965,430,731
|
I_kwDOJ0Z1Ps51JhfL
| 933
|
Request: Include Embedding Models
|
{
"login": "MarcellM01",
"id": 9119122,
"node_id": "MDQ6VXNlcjkxMTkxMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9119122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarcellM01",
"html_url": "https://github.com/MarcellM01",
"followers_url": "https://api.github.com/users/MarcellM01/followers",
"following_url": "https://api.github.com/users/MarcellM01/following{/other_user}",
"gists_url": "https://api.github.com/users/MarcellM01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarcellM01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarcellM01/subscriptions",
"organizations_url": "https://api.github.com/users/MarcellM01/orgs",
"repos_url": "https://api.github.com/users/MarcellM01/repos",
"events_url": "https://api.github.com/users/MarcellM01/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarcellM01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-10-27T12:30:02
| 2023-12-04T23:09:27
| 2023-12-04T23:09:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be great to have some embedding models as part of the Ollama model collection (such as "all-MiniLM-L6-v2"). Mistral and the others (I have tried) are unfortunately slow to handle embedding tasks for QA retrieval applications.
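For context, the reason dedicated embedding models matter for QA retrieval is that ranking is just cosine similarity over their vectors. A self-contained sketch with toy 3-d vectors standing in for real 384-d all-MiniLM-L6-v2 embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity, the usual ranking metric over embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy vectors; a real pipeline would get these from an embedding model.
query = [1.0, 0.0, 0.0]
docs = {"doc_a": [0.9, 0.1, 0.0], "doc_b": [0.0, 1.0, 0.0]}
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # doc_a
```

A small, fast embedding model makes this scoring step cheap, which is why a general chat model like Mistral feels slow when pressed into this role.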
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/933/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/933/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5361
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5361/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5361/comments
|
https://api.github.com/repos/ollama/ollama/issues/5361/events
|
https://github.com/ollama/ollama/issues/5361
| 2,380,916,182
|
I_kwDOJ0Z1Ps6N6eXW
| 5,361
|
Ollama running very slow on Windows
|
{
"login": "AbhisheakSaraswat",
"id": 60028984,
"node_id": "MDQ6VXNlcjYwMDI4OTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/60028984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AbhisheakSaraswat",
"html_url": "https://github.com/AbhisheakSaraswat",
"followers_url": "https://api.github.com/users/AbhisheakSaraswat/followers",
"following_url": "https://api.github.com/users/AbhisheakSaraswat/following{/other_user}",
"gists_url": "https://api.github.com/users/AbhisheakSaraswat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AbhisheakSaraswat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AbhisheakSaraswat/subscriptions",
"organizations_url": "https://api.github.com/users/AbhisheakSaraswat/orgs",
"repos_url": "https://api.github.com/users/AbhisheakSaraswat/repos",
"events_url": "https://api.github.com/users/AbhisheakSaraswat/events{/privacy}",
"received_events_url": "https://api.github.com/users/AbhisheakSaraswat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 20
| 2024-06-28T17:59:20
| 2024-11-24T08:50:31
| 2024-07-02T21:06:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have pulled a couple of LLMs via Ollama. When I run any LLM, the response is very slow – so much so that I can type faster than the responses I am getting.
My system specifications are: 13th Gen Intel(R) Core(TM) i5-1345U, 1600 MHz, 10 cores, and 12 logical processors.
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.47
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5361/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7340
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7340/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7340/comments
|
https://api.github.com/repos/ollama/ollama/issues/7340/events
|
https://github.com/ollama/ollama/issues/7340
| 2,610,820,338
|
I_kwDOJ0Z1Ps6bnfTy
| 7,340
|
Is it possible to use the API to pass my documents to the model and have it understand them in depth?
|
{
"login": "robotom",
"id": 45123215,
"node_id": "MDQ6VXNlcjQ1MTIzMjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/45123215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robotom",
"html_url": "https://github.com/robotom",
"followers_url": "https://api.github.com/users/robotom/followers",
"following_url": "https://api.github.com/users/robotom/following{/other_user}",
"gists_url": "https://api.github.com/users/robotom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robotom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robotom/subscriptions",
"organizations_url": "https://api.github.com/users/robotom/orgs",
"repos_url": "https://api.github.com/users/robotom/repos",
"events_url": "https://api.github.com/users/robotom/events{/privacy}",
"received_events_url": "https://api.github.com/users/robotom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-10-24T08:14:28
| 2024-10-24T23:01:58
| 2024-10-24T18:08:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have a basic front end that talks via the Ollama API.
Should I just pass the entire document (300+ pages) as a prompt every time? Or is there a better way to do this?
I want it to know the documents extremely well or as well as possible.
Is this achievable?
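The usual alternative to resending all 300+ pages is retrieval (RAG): split the document into chunks once, embed each chunk, and send only the best-matching chunks as context with each prompt. A sketch of the first step, with `chunk` as a hypothetical helper:

```python
def chunk(text, size=500, overlap=50):
    """Split a long document into overlapping character windows;
    each chunk is later embedded, and only the top matches for a
    query are sent to the model as context."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 1200  # stand-in for a real 300-page document
parts = chunk(doc)
print(len(parts))  # 3
```

This keeps each prompt within the context window while still letting the model "know" the whole document through retrieval.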
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7340/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3753
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3753/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3753/comments
|
https://api.github.com/repos/ollama/ollama/issues/3753/events
|
https://github.com/ollama/ollama/issues/3753
| 2,252,802,709
|
I_kwDOJ0Z1Ps6GRwqV
| 3,753
|
Is multi-GPU inference faster than a single GPU?
|
{
"login": "papandadj",
"id": 25424898,
"node_id": "MDQ6VXNlcjI1NDI0ODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/25424898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/papandadj",
"html_url": "https://github.com/papandadj",
"followers_url": "https://api.github.com/users/papandadj/followers",
"following_url": "https://api.github.com/users/papandadj/following{/other_user}",
"gists_url": "https://api.github.com/users/papandadj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/papandadj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/papandadj/subscriptions",
"organizations_url": "https://api.github.com/users/papandadj/orgs",
"repos_url": "https://api.github.com/users/papandadj/repos",
"events_url": "https://api.github.com/users/papandadj/events{/privacy}",
"received_events_url": "https://api.github.com/users/papandadj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-04-19T12:01:07
| 2024-04-26T08:13:00
| 2024-04-26T08:13:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I don't know much about inference. Suppose I have 4 GPUs with 24 GB each, and my model also needs 24 GB to run. Should I choose multi-GPU or single-GPU in this case? Is it faster to use 24 GB on a single card, or 6 GB on each of the four cards?
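For context, one way to compare the two setups empirically is to pin the server to a subset of GPUs and time the same prompt under each configuration (a sketch, assuming an NVIDIA setup where the server honors `CUDA_VISIBLE_DEVICES`; the model name is just an example):

```shell
# Restrict the Ollama server to a single GPU, then benchmark:
CUDA_VISIBLE_DEVICES=0 ollama serve

# In another terminal, time a generation:
time ollama run llama3 "Write a haiku about GPUs."

# Restart the server with all four GPUs visible and repeat the timing;
# layers will be split across the visible devices if one card is not enough:
CUDA_VISIBLE_DEVICES=0,1,2,3 ollama serve
```

In general, if the model fits entirely on one card, a single GPU avoids the inter-GPU transfer overhead of splitting layers, so it is usually at least as fast.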
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "papandadj",
"id": 25424898,
"node_id": "MDQ6VXNlcjI1NDI0ODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/25424898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/papandadj",
"html_url": "https://github.com/papandadj",
"followers_url": "https://api.github.com/users/papandadj/followers",
"following_url": "https://api.github.com/users/papandadj/following{/other_user}",
"gists_url": "https://api.github.com/users/papandadj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/papandadj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/papandadj/subscriptions",
"organizations_url": "https://api.github.com/users/papandadj/orgs",
"repos_url": "https://api.github.com/users/papandadj/repos",
"events_url": "https://api.github.com/users/papandadj/events{/privacy}",
"received_events_url": "https://api.github.com/users/papandadj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3753/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8403
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8403/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8403/comments
|
https://api.github.com/repos/ollama/ollama/issues/8403/events
|
https://github.com/ollama/ollama/issues/8403
| 2,784,132,411
|
I_kwDOJ0Z1Ps6l8n07
| 8,403
|
Sky-T1-32B-Preview would be a great model to add
|
{
"login": "smach",
"id": 3394160,
"node_id": "MDQ6VXNlcjMzOTQxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3394160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smach",
"html_url": "https://github.com/smach",
"followers_url": "https://api.github.com/users/smach/followers",
"following_url": "https://api.github.com/users/smach/following{/other_user}",
"gists_url": "https://api.github.com/users/smach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smach/subscriptions",
"organizations_url": "https://api.github.com/users/smach/orgs",
"repos_url": "https://api.github.com/users/smach/repos",
"events_url": "https://api.github.com/users/smach/events{/privacy}",
"received_events_url": "https://api.github.com/users/smach/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-13T14:27:33
| 2025-01-13T21:30:04
| 2025-01-13T21:30:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Sky-T1-32B-Preview from UC Berkeley would be a great model to add! They say it is comparable to o1-preview for reasoning and coding, and it cost just $450 to train. That's remarkably small a budget for that quality!
Hugging Face: [https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview)
Blog post explaining model: [https://novasky-ai.github.io/posts/sky-t1/](https://novasky-ai.github.io/posts/sky-t1/)
Thank you for this great project.
|
{
"login": "smach",
"id": 3394160,
"node_id": "MDQ6VXNlcjMzOTQxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3394160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smach",
"html_url": "https://github.com/smach",
"followers_url": "https://api.github.com/users/smach/followers",
"following_url": "https://api.github.com/users/smach/following{/other_user}",
"gists_url": "https://api.github.com/users/smach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smach/subscriptions",
"organizations_url": "https://api.github.com/users/smach/orgs",
"repos_url": "https://api.github.com/users/smach/repos",
"events_url": "https://api.github.com/users/smach/events{/privacy}",
"received_events_url": "https://api.github.com/users/smach/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8403/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7679
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7679/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7679/comments
|
https://api.github.com/repos/ollama/ollama/issues/7679/events
|
https://github.com/ollama/ollama/issues/7679
| 2,660,920,543
|
I_kwDOJ0Z1Ps6emmzf
| 7,679
|
The fine-tuned codegemma model exhibits abnormal behavior
|
{
"login": "TheSongg",
"id": 145535169,
"node_id": "U_kgDOCKywwQ",
"avatar_url": "https://avatars.githubusercontent.com/u/145535169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheSongg",
"html_url": "https://github.com/TheSongg",
"followers_url": "https://api.github.com/users/TheSongg/followers",
"following_url": "https://api.github.com/users/TheSongg/following{/other_user}",
"gists_url": "https://api.github.com/users/TheSongg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheSongg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheSongg/subscriptions",
"organizations_url": "https://api.github.com/users/TheSongg/orgs",
"repos_url": "https://api.github.com/users/TheSongg/repos",
"events_url": "https://api.github.com/users/TheSongg/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheSongg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
open
| false
| null |
[] | null | 3
| 2024-11-15T06:22:49
| 2024-11-21T07:24:03
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I downloaded the codegemma and codellama models from Hugging Face and fine-tuned them using LLaMA-Factory. After importing the fine-tuned models into Ollama, CodeLlama works normally, but the CodeGemma model seems not to have learned the knowledge from the fine-tuning dataset. Importing the same fine-tuned CodeGemma model back into LLaMA-Factory works normally. I have modified the Modelfile several times when creating the CodeGemma model, but it has not helped. What could the reason be, and how can I resolve it? Thank you.
- Ollama: 0.4.1
- LLaMA-Factory: 0.8.3
- codegemma: https://huggingface.co/google/codegemma-7b
- codellama: https://huggingface.co/codellama/CodeLlama-7b-hf
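For reference, a minimal Modelfile for importing a Gemma-family GGUF might look like this (a sketch, not my exact setup — the file name is a placeholder, and the template follows the Gemma chat turn format; a mismatched template is a common cause of a fine-tune appearing to "forget" its training):

```
FROM ./codegemma-7b-finetuned.gguf
TEMPLATE """<start_of_turn>user
{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}<end_of_turn>
"""
PARAMETER stop "<start_of_turn>"
PARAMETER stop "<end_of_turn>"
```

The model is then created with `ollama create codegemma-ft -f Modelfile`.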
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7679/timeline
| null | null | false
|