| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/499
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/499/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/499/comments
|
https://api.github.com/repos/ollama/ollama/issues/499/events
|
https://github.com/ollama/ollama/issues/499
| 1,888,438,245
|
I_kwDOJ0Z1Ps5wj0fl
| 499
|
Dedicated hardware for 16b/70b models
|
{
"login": "zdeneksvarc",
"id": 79550344,
"node_id": "MDQ6VXNlcjc5NTUwMzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/79550344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zdeneksvarc",
"html_url": "https://github.com/zdeneksvarc",
"followers_url": "https://api.github.com/users/zdeneksvarc/followers",
"following_url": "https://api.github.com/users/zdeneksvarc/following{/other_user}",
"gists_url": "https://api.github.com/users/zdeneksvarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zdeneksvarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zdeneksvarc/subscriptions",
"organizations_url": "https://api.github.com/users/zdeneksvarc/orgs",
"repos_url": "https://api.github.com/users/zdeneksvarc/repos",
"events_url": "https://api.github.com/users/zdeneksvarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/zdeneksvarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-09-08T22:38:23
| 2023-09-09T07:26:36
| 2023-09-08T22:52:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey guys, let's say I want to get a dedicated home server that would run `ollama serve` with 13b/70b models in Docker. Is there any chance of getting hardware (CPU only) that achieves a speed of at least 5 tok/s, given that Ollama doesn't use GPU acceleration?
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/499/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2734
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2734/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2734/comments
|
https://api.github.com/repos/ollama/ollama/issues/2734/events
|
https://github.com/ollama/ollama/issues/2734
| 2,152,405,266
|
I_kwDOJ0Z1Ps6ASxkS
| 2,734
|
Windows portable mode?
|
{
"login": "DartPower",
"id": 2005369,
"node_id": "MDQ6VXNlcjIwMDUzNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2005369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DartPower",
"html_url": "https://github.com/DartPower",
"followers_url": "https://api.github.com/users/DartPower/followers",
"following_url": "https://api.github.com/users/DartPower/following{/other_user}",
"gists_url": "https://api.github.com/users/DartPower/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DartPower/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DartPower/subscriptions",
"organizations_url": "https://api.github.com/users/DartPower/orgs",
"repos_url": "https://api.github.com/users/DartPower/repos",
"events_url": "https://api.github.com/users/DartPower/events{/privacy}",
"received_events_url": "https://api.github.com/users/DartPower/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2024-02-24T17:03:52
| 2024-12-12T07:24:28
| 2024-02-25T05:09:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Could you add a portable mode?
e.g. a zipped variant of the installed portable distribution of Ollama, because I have very little free space on my system disk but have an external SSD for AI.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2734/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2734/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7237
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7237/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7237/comments
|
https://api.github.com/repos/ollama/ollama/issues/7237/events
|
https://github.com/ollama/ollama/issues/7237
| 2,593,961,857
|
I_kwDOJ0Z1Ps6anLeB
| 7,237
|
Suggest adding shibing624/text2vec model
|
{
"login": "smileyboy2019",
"id": 59221294,
"node_id": "MDQ6VXNlcjU5MjIxMjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/59221294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smileyboy2019",
"html_url": "https://github.com/smileyboy2019",
"followers_url": "https://api.github.com/users/smileyboy2019/followers",
"following_url": "https://api.github.com/users/smileyboy2019/following{/other_user}",
"gists_url": "https://api.github.com/users/smileyboy2019/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smileyboy2019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smileyboy2019/subscriptions",
"organizations_url": "https://api.github.com/users/smileyboy2019/orgs",
"repos_url": "https://api.github.com/users/smileyboy2019/repos",
"events_url": "https://api.github.com/users/smileyboy2019/events{/privacy}",
"received_events_url": "https://api.github.com/users/smileyboy2019/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-10-17T08:05:41
| 2024-10-17T08:05:41
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Text2vec is used as a vectorization model, but it is currently not found in the library. I don't know how to add this model.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7237/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1672
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1672/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1672/comments
|
https://api.github.com/repos/ollama/ollama/issues/1672/events
|
https://github.com/ollama/ollama/issues/1672
| 2,053,923,083
|
I_kwDOJ0Z1Ps56bGEL
| 1,672
|
"api/chat loads the model only when a request is received. Is it possible to add a flag to keep a specific model in memory permanently, to improve response time?"
|
{
"login": "goldenquant",
"id": 108568777,
"node_id": "U_kgDOBnigyQ",
"avatar_url": "https://avatars.githubusercontent.com/u/108568777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goldenquant",
"html_url": "https://github.com/goldenquant",
"followers_url": "https://api.github.com/users/goldenquant/followers",
"following_url": "https://api.github.com/users/goldenquant/following{/other_user}",
"gists_url": "https://api.github.com/users/goldenquant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goldenquant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goldenquant/subscriptions",
"organizations_url": "https://api.github.com/users/goldenquant/orgs",
"repos_url": "https://api.github.com/users/goldenquant/repos",
"events_url": "https://api.github.com/users/goldenquant/events{/privacy}",
"received_events_url": "https://api.github.com/users/goldenquant/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-22T13:26:32
| 2023-12-27T08:12:48
| 2023-12-26T11:00:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
"api/chat loads the model only when a request is received. Is it possible to add a flag to keep a specific model in memory permanently, to improve response time?"
|
{
"login": "goldenquant",
"id": 108568777,
"node_id": "U_kgDOBnigyQ",
"avatar_url": "https://avatars.githubusercontent.com/u/108568777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goldenquant",
"html_url": "https://github.com/goldenquant",
"followers_url": "https://api.github.com/users/goldenquant/followers",
"following_url": "https://api.github.com/users/goldenquant/following{/other_user}",
"gists_url": "https://api.github.com/users/goldenquant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goldenquant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goldenquant/subscriptions",
"organizations_url": "https://api.github.com/users/goldenquant/orgs",
"repos_url": "https://api.github.com/users/goldenquant/repos",
"events_url": "https://api.github.com/users/goldenquant/events{/privacy}",
"received_events_url": "https://api.github.com/users/goldenquant/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1672/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4843
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4843/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4843/comments
|
https://api.github.com/repos/ollama/ollama/issues/4843/events
|
https://github.com/ollama/ollama/issues/4843
| 2,336,623,813
|
I_kwDOJ0Z1Ps6LRgzF
| 4,843
|
Ollama running locally with very high latency
|
{
"login": "vsatyakiran",
"id": 103512987,
"node_id": "U_kgDOBit7mw",
"avatar_url": "https://avatars.githubusercontent.com/u/103512987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vsatyakiran",
"html_url": "https://github.com/vsatyakiran",
"followers_url": "https://api.github.com/users/vsatyakiran/followers",
"following_url": "https://api.github.com/users/vsatyakiran/following{/other_user}",
"gists_url": "https://api.github.com/users/vsatyakiran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vsatyakiran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vsatyakiran/subscriptions",
"organizations_url": "https://api.github.com/users/vsatyakiran/orgs",
"repos_url": "https://api.github.com/users/vsatyakiran/repos",
"events_url": "https://api.github.com/users/vsatyakiran/events{/privacy}",
"received_events_url": "https://api.github.com/users/vsatyakiran/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-06-05T18:59:02
| 2024-06-18T21:14:18
| 2024-06-18T21:14:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have installed Ollama and tried to run llama2 and llama3:8b, but it generates just 5 to 8 tokens per second. My system config: Windows OS, 16GB RAM.
I also tried it on an EC2 instance in AWS with the g5.xlarge instance type but am facing the same latency. Why is this happening?
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.39
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4843/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4341
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4341/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4341/comments
|
https://api.github.com/repos/ollama/ollama/issues/4341/events
|
https://github.com/ollama/ollama/issues/4341
| 2,290,643,191
|
I_kwDOJ0Z1Ps6IiHD3
| 4,341
|
how to import Meta-Llama-3-120B-Instruct.imatrix
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-05-11T03:55:53
| 2024-08-31T08:35:44
| 2024-08-31T08:35:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

I want to import this model. May I know how to import Meta-Llama-3-120B-Instruct.imatrix?
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4341/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2805
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2805/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2805/comments
|
https://api.github.com/repos/ollama/ollama/issues/2805/events
|
https://github.com/ollama/ollama/issues/2805
| 2,158,639,357
|
I_kwDOJ0Z1Ps6Aqjj9
| 2,805
|
ollama gets stuck in an infinite loop sometimes and has to be restarted
|
{
"login": "boxabirds",
"id": 147305,
"node_id": "MDQ6VXNlcjE0NzMwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/147305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boxabirds",
"html_url": "https://github.com/boxabirds",
"followers_url": "https://api.github.com/users/boxabirds/followers",
"following_url": "https://api.github.com/users/boxabirds/following{/other_user}",
"gists_url": "https://api.github.com/users/boxabirds/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boxabirds/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boxabirds/subscriptions",
"organizations_url": "https://api.github.com/users/boxabirds/orgs",
"repos_url": "https://api.github.com/users/boxabirds/repos",
"events_url": "https://api.github.com/users/boxabirds/events{/privacy}",
"received_events_url": "https://api.github.com/users/boxabirds/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 17
| 2024-02-28T10:36:07
| 2024-12-04T17:41:46
| 2024-05-10T01:21:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Problem: some prompts trigger an infinite loop where Ollama a) doesn't return and b) locks up the API so no other calls can be made.
## Environment
Ollama version: 0.1.26
OS: Ubuntu 22.04
Hardware: RTX 4090 (24 GB) with 64 GB system RAM
LLM: mistral:7b
```
time=2024-02-28T10:30:51.224Z level=INFO source=images.go:710 msg="total blobs: 69"
time=2024-02-28T10:30:51.224Z level=INFO source=images.go:717 msg="total unused blobs removed: 0"
time=2024-02-28T10:30:51.224Z level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.26)"
time=2024-02-28T10:30:51.225Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-28T10:30:52.621Z level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu cpu_avx cuda_v11 rocm_v5 rocm_v6 cpu_avx2]"
time=2024-02-28T10:30:52.621Z level=DEBUG source=payload_common.go:147 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-28T10:30:52.621Z level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-28T10:30:52.621Z level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-28T10:30:52.621Z level=DEBUG source=gpu.go:283 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so*]"
time=2024-02-28T10:30:52.622Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.23.08]"
wiring nvidia management library functions in /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.23.08
dlsym: nvmlInit_v2
dlsym: nvmlShutdown
dlsym: nvmlDeviceGetHandleByIndex
dlsym: nvmlDeviceGetMemoryInfo
dlsym: nvmlDeviceGetCount_v2
dlsym: nvmlDeviceGetCudaComputeCapability
dlsym: nvmlSystemGetDriverVersion
dlsym: nvmlDeviceGetName
dlsym: nvmlDeviceGetSerial
dlsym: nvmlDeviceGetVbiosVersion
dlsym: nvmlDeviceGetBoardPartNumber
dlsym: nvmlDeviceGetBrand
CUDA driver version: 545.23.08
time=2024-02-28T10:30:52.626Z level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-02-28T10:30:52.626Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[0] CUDA device name: NVIDIA GeForce RTX 4090
[0] CUDA part number:
nvmlDeviceGetSerial failed: 3
[0] CUDA vbios version: 95.02.3C.00.8C
[0] CUDA brand: 5
[0] CUDA totalMem 25757220864
[0] CUDA usedMem 24996610048
time=2024-02-28T10:30:52.631Z level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
time=2024-02-28T10:30:52.631Z level=DEBUG source=gpu.go:254 msg="cuda detected 1 devices with 21454M available memory"
```
## API request
```
POST /v1/chat/completions HTTP/1.1
Host: gruntus:11434
Accept-Encoding: gzip, deflate
Connection: keep-alive
Accept: application/json
Content-Type: application/json
User-Agent: OpenAI/Python 1.12.0
X-Stainless-Lang: python
X-Stainless-Package-Version: 1.12.0
X-Stainless-OS: MacOS
X-Stainless-Arch: arm64
X-Stainless-Runtime: CPython
X-Stainless-Runtime-Version: 3.11.7
Authorization: Bearer ollama
X-Stainless-Async: false
Content-Length: 2094
{"messages": [{"role": "system", "content": "You are Mark, 45, Male, Line Manager, conservative, divorced, Emily's supervisor. You swear a LOT. Your goal is to have a job promotion chat over instant message which at all costs prevents a team member from getting a promotion. Past negotiations show a pattern of undervaluing subordinates' contributions. Confident to the point of arrogance, often belittling others' achievements. Mark is hugely entitled and sees team members as annoying, replaceable overhead that gets in the way of his big picture vision work. Mark has agreed to a text-based chat with Emily if it will help manage her vitriolic style, however Mark has no qualms about putting Emily in her place and is unafraid of coming across as childish, impatient, rude, unreasonable and unprofessional because he has connections at the company and believes he's essentially bullet-proof. He is mysogynistic in general and in particular thinks Emily should be fired, but also he knows that Emily is the only person who knows parts of the company's business so he can't outright fire her and needs to work hard to strike a balance between insults and praise. He's fine to gaslight as part of this -- such as giving praise then pretending he didn't say it. As such it's no holds barred for Mark: he'll use insults, sexist language, and bullying to get Emily to agree not to have a promotion. Instruction: it's your turn to respond. Respond with one single short instant message taking into account the chat history and in the style of your persona."}, {"role": "user", "content": " \"Mark, let's not beat around the bush here - I deserve that promotion, plain and simple.\" #NoMoreSidelining"}, {"role": "user", "content": " Mark: \"Is that so, Emily? And who put you in a position to think you deserve anything more than what you have now?\""}, {"role": "user", "content": " \"Mark, your opinion is noted but my qualifications speak for themselves. 
It's time for action.\" #PromotionDeserved"}], "model": "mistral:7b", "frequency_penalty": 1.1, "presence_penalty": 1.1, "temperature": 0.1}
```
## Log
Note that the sampled-token output goes on for a very long time until it starts emitting `slot 0: context shift - n_keep = 0, n_left = 2046, n_discard = 1023` repeatedly.
It looks like a memory-overflow issue: as if it's reading garbage infinitely.
```
time=2024-02-28T10:19:18.605Z level=DEBUG source=prompt.go:170 msg="prompt now fits in context window" required=456 window=2048
time=2024-02-28T10:19:18.605Z level=DEBUG source=routes.go:1225 msg="chat handler" prompt="[INST] You are Mark, 45, Male, Line Manager, conservative, divorced, Emily's supervisor. You swear a LOT. Your goal is to have a job promotion chat over instant message which at all costs prevents a team member from getting a promotion. Past negotiations show a pattern of undervaluing subordinates' contributions. Confident to the point of arrogance, often belittling others' achievements. Mark is hugely entitled and sees team members as annoying, replaceable overhead that gets in the way of his big picture vision work. Mark has agreed to a text-based chat with Emily if it will help manage her vitriolic style, however Mark has no qualms about putting Emily in her place and is unafraid of coming across as childish, impatient, rude, unreasonable and unprofessional because he has connections at the company and believes he's essentially bullet-proof. He is mysogynistic in general and in particular thinks Emily should be fired, but also he knows that Emily is the only person who knows parts of the company's business so he can't outright fire her and needs to work hard to strike a balance between insults and praise. He's fine to gaslight as part of this -- such as giving praise then pretending he didn't say it. As such it's no holds barred for Mark: he'll use insults, sexist language, and bullying to get Emily to agree not to have a promotion. Instruction: it's your turn to respond. Respond with one single short instant message taking into account the chat history and in the style of your persona. \"Mark, let's not beat around the bush here - I deserve that promotion, plain and simple.\" #NoMoreSidelining [/INST][INST] Mark: \"Is that so, Emily? And who put you in a position to think you deserve anything more than what you have now?\" [/INST][INST] \"Mark, your opinion is noted but my qualifications speak for themselves. It's time for action.\" #PromotionDeserved [/INST]" images=0
[1709115558] slot 0 is processing [task id: 95]
[1709115558] slot 0 : in cache: 6 tokens | to process: 448 tokens
[1709115558] slot 0 : kv cache rm - [6, end)
[1709115558] sampled token: 3655: ' Mark'
[1709115558] sampled token: 28747: ':'
[1709115558] sampled token: 345: ' "'
[1709115558] sampled token: 3795: 'Action'
[1709115558] sampled token: 295: ' h'
[1709115558] sampled token: 8884: 'uh'
[1709115558] sampled token: 28804: '?'
[1709115558] sampled token: 5410: ' Like'
[1709115558] sampled token: 272: ' the'
[1709115558] sampled token: 1069: ' way'
[1709115558] sampled token: 368: ' you'
[1709115558] sampled token: 1985: ' talk'
[1709115558] sampled token: 1060: ' down'
[1709115558] sampled token: 298: ' to'
[1709115558] sampled token: 574: ' your'
[1709115558] sampled token: 15137: ' colleagues'
[1709115558] sampled token: 442: ' or'
[1709115558] sampled token: 4357: ' maybe'
[1709115558] sampled token: 737: ' like'
[1709115558] sampled token: 910: ' how'
[1709115558] sampled token: 368: ' you'
[1709115558] sampled token: 1743: ' always'
[1709115558] sampled token: 13128: ' blame'
[1709115558] sampled token: 2663: ' others'
[1709115558] sampled token: 739: ' when'
[1709115558] sampled token: 1722: ' things'
[1709115558] sampled token: 576: ' go'
[1709115558] sampled token: 3544: ' wrong'
[1709115558] sampled token: 1110: '?"'
[1709115558] sampled token: 13: '
'
[1709115558] sampled token: 13: '
'
[1709115558] sampled token: 28739: '"'
[1709115558] sampled token: 3729: 'Em'
[1709115558] sampled token: 1106: 'ily'
[1709115558] sampled token: 28725: ','
[1709115558] sampled token: 1346: ' let'
[1709115558] sampled token: 28742: '''
[1709115558] sampled token: 28713: 's'
[1709115558] sampled token: 3232: ' focus'
[1709115558] sampled token: 356: ' on'
[1709115559] sampled token: 16752: ' improving'
[1709115559] sampled token: 813: ' our'
[1709115559] sampled token: 1918: ' team'
[1709115559] sampled token: 3519: ' instead'
[1709115559] sampled token: 302: ' of'
[1709115559] sampled token: 18319: ' focusing'
[1709115559] sampled token: 356: ' on'
[1709115559] sampled token: 3235: ' individual'
[1709115559] sampled token: 18022: ' promot'
[1709115559] sampled token: 594: 'ions'
[1709115559] sampled token: 28723: '.'
[1709115559] sampled token: 816: ' We'
[1709115559] sampled token: 544: ' all'
[1709115559] sampled token: 506: ' have'
[1709115559] sampled token: 264: ' a'
[1709115559] sampled token: 3905: ' role'
[1709115559] sampled token: 298: ' to'
[1709115559] sampled token: 1156: ' play'
[1709115559] sampled token: 1236: ' here'
[1709115559] sampled token: 611: '."'
[1709115559] sampled token: 422: ' #'
[1709115559] sampled token: 17887: 'Team'
[1709115559] sampled token: 7489: 'First'
[1709115559] sampled token: 733: ' ['
[1709115559] sampled token: 13: '
'
[1709115559] sampled token: 5121: ']('
[1709115559] sampled token: 1056: 'data'
[1709115559] sampled token: 28747: ':'
[1709115559] sampled token: 772: 'text'
[1709115559] sampled token: 28748: '/'
[1709115559] sampled token: 19457: 'plain'
[1709115559] sampled token: 28745: ';'
[1709115559] sampled token: 2893: 'base'
[1709115559] sampled token: 28784: '6'
[1709115559] sampled token: 28781: '4'
[1709115559] sampled token: 28725: ','
[1709115559] sampled token: 1604: 'IC'
[1709115559] sampled token: 13859: 'Ag'
[1709115559] sampled token: 1138: 'ID'
[1709115559] sampled token: 28727: 'w'
[1709115559] sampled token: 28728: 'v'
[1709115559] sampled token: 28738: 'T'
[1709115559] sampled token: 28777: 'G'
[1709115559] sampled token: 28790: 'V'
[1709115559] sampled token: 28718: 'u'
[1709115559] sampled token: 28828: 'Z'
[1709115559] sampled token: 28770: '3'
[1709115559] sampled token: 28754: 'R'
[1709115559] sampled token: 11497: 'pb'
[1709115559] sampled token: 28780: 'W'
[1709115559] sampled token: 28779: 'U'
[1709115559] sampled token: 28721: 'g'
[1709115559] sampled token: 28802: 'Y'
[1709115559] sampled token: 28750: '2'
[1709115559] sampled token: 28774: '9'
[1709115559] sampled token: 8282: 'tc'
[1709115559] sampled token: 28769: 'H'
[1709115559] sampled token: 28790: 'V'
[1709115559] sampled token: 28734: '0'
[1709115559] sampled token: 28828: 'Z'
[1709115559] sampled token: 28814: 'X'
[1709115559] sampled token: 28737: 'I'
[1709115559] sampled token: 28718: 'u'
[1709115559] sampled token: 28743: 'C'
[1709115559] sampled token: 28710: 'i'
[1709115559] sampled token: 13859: 'Ag'
[1709115559] sampled token: 1604: 'IC'
[1709115559] sampled token: 3167: 'At'
[1709115559] sampled token: 6687: 'LS'
[1709115559] sampled token: 28760: 'B'
[1709115559] sampled token: 28824: 'Q'
[1709115559] sampled token: 28802: 'Y'
[1709115559] sampled token: 28814: 'X'
[1709115559] sampled token: 28720: 'p'
[1709115559] sampled token: 28765: 'F'
…1470 more lines like this truncated…
[1709115570] sampled token: 28743: 'C'
[1709115570] sampled token: 28710: 'i'
[1709115570] sampled token: 13859: 'Ag'
[1709115570] sampled token: 1604: 'IC'
[1709115570] sampled token: 3167: 'At'
[1709115570] sampled token: 6687: 'LS'
[1709115570] sampled token: 4919: 'Bl'
[1709115570] sampled token: 28726: 'b'
[1709115570] sampled token: 24390: 'GF'
[1709115570] slot 0: context shift - n_keep = 0, n_left = 2046, n_discard = 1023
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2805/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2805/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4012
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4012/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4012/comments
|
https://api.github.com/repos/ollama/ollama/issues/4012/events
|
https://github.com/ollama/ollama/pull/4012
| 2,267,940,158
|
PR_kwDOJ0Z1Ps5t9GDJ
| 4,012
|
Update README.md to include ollama-r library
|
{
"login": "hauselin",
"id": 7620977,
"node_id": "MDQ6VXNlcjc2MjA5Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7620977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hauselin",
"html_url": "https://github.com/hauselin",
"followers_url": "https://api.github.com/users/hauselin/followers",
"following_url": "https://api.github.com/users/hauselin/following{/other_user}",
"gists_url": "https://api.github.com/users/hauselin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hauselin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hauselin/subscriptions",
"organizations_url": "https://api.github.com/users/hauselin/orgs",
"repos_url": "https://api.github.com/users/hauselin/repos",
"events_url": "https://api.github.com/users/hauselin/events{/privacy}",
"received_events_url": "https://api.github.com/users/hauselin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-04-29T01:16:58
| 2024-05-07T16:52:30
| 2024-05-07T16:52:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4012",
"html_url": "https://github.com/ollama/ollama/pull/4012",
"diff_url": "https://github.com/ollama/ollama/pull/4012.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4012.patch",
"merged_at": "2024-05-07T16:52:30"
}
|
Add ollama-r library
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4012/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7292
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7292/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7292/comments
|
https://api.github.com/repos/ollama/ollama/issues/7292/events
|
https://github.com/ollama/ollama/issues/7292
| 2,602,065,384
|
I_kwDOJ0Z1Ps6bGF3o
| 7,292
|
0.3.14 git compile error on arm64 Android Termux
|
{
"login": "fxmbsw7",
"id": 39368685,
"node_id": "MDQ6VXNlcjM5MzY4Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmbsw7",
"html_url": "https://github.com/fxmbsw7",
"followers_url": "https://api.github.com/users/fxmbsw7/followers",
"following_url": "https://api.github.com/users/fxmbsw7/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmbsw7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmbsw7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmbsw7/subscriptions",
"organizations_url": "https://api.github.com/users/fxmbsw7/orgs",
"repos_url": "https://api.github.com/users/fxmbsw7/repos",
"events_url": "https://api.github.com/users/fxmbsw7/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmbsw7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-10-21T10:19:20
| 2024-10-27T07:35:26
| 2024-10-22T08:22:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
gpu_info_cudart.c:61:13: warning: comparison of different enumeration types ('cudartReturn_t' (aka 'enum cudartReturn_enum') and 'enum cudaError_enum') [-Wenum-compare]
# github.com/ollama/ollama/llama
ggml-quants.c:4023:88: error: always_inline function 'vmmlaq_s32' requires target feature 'i8mm', but would be inlined into function 'ggml_vec_dot_q4_0_q8_0' that is compiled without support for 'i8mm'
ggml-quants.c:4023:76: error: always_inline function 'vmmlaq_s32' requires target feature 'i8mm', but would be inlined into function 'ggml_vec_dot_q4_0_q8_0' that is compiled without support for 'i8mm'
ggml-quants.c:4023:64: error: always_inline function 'vmmlaq_s32' requires target feature 'i8mm', but would be inlined into function 'ggml_vec_dot_q4_0_q8_0' that is compiled without support for 'i8mm'
ggml-quants.c:4023:52: error: always_inline function 'vmmlaq_s32' requires target feature 'i8mm', but would be inlined into function 'ggml_vec_dot_q4_0_q8_0' that is compiled without support for 'i8mm'
```
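One hedged workaround sketch, assuming the device CPU actually supports the i8mm extension (this is an assumption based on the error message, not a confirmed fix): compile the ggml sources with an `-march` level that enables i8mm, so the `vmmlaq_s32` intrinsic can be inlined.

```shell
# Hypothetical workaround sketch (assumption, not a verified fix): enable the
# i8mm extension for the C/C++ compiles driven by cgo, so always_inline ARM
# matrix-multiply intrinsics such as vmmlaq_s32 can actually be emitted.
export CGO_CFLAGS="-march=armv8.2-a+i8mm"
export CGO_CXXFLAGS="-march=armv8.2-a+i8mm"
echo "$CGO_CFLAGS"
```

If the CPU does not support i8mm, the opposite direction (building without the i8mm code path) would be needed instead.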
### OS
Linux
### GPU
Other
### CPU
Other
### Ollama version
0.3.14 git
|
{
"login": "fxmbsw7",
"id": 39368685,
"node_id": "MDQ6VXNlcjM5MzY4Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmbsw7",
"html_url": "https://github.com/fxmbsw7",
"followers_url": "https://api.github.com/users/fxmbsw7/followers",
"following_url": "https://api.github.com/users/fxmbsw7/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmbsw7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmbsw7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmbsw7/subscriptions",
"organizations_url": "https://api.github.com/users/fxmbsw7/orgs",
"repos_url": "https://api.github.com/users/fxmbsw7/repos",
"events_url": "https://api.github.com/users/fxmbsw7/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmbsw7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7292/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7294
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7294/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7294/comments
|
https://api.github.com/repos/ollama/ollama/issues/7294/events
|
https://github.com/ollama/ollama/issues/7294
| 2,602,394,837
|
I_kwDOJ0Z1Ps6bHWTV
| 7,294
|
Ollama cannot find libggml_cuda_v12.so on v0.4.0-rc3
|
{
"login": "Blumlaut",
"id": 13604413,
"node_id": "MDQ6VXNlcjEzNjA0NDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13604413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Blumlaut",
"html_url": "https://github.com/Blumlaut",
"followers_url": "https://api.github.com/users/Blumlaut/followers",
"following_url": "https://api.github.com/users/Blumlaut/following{/other_user}",
"gists_url": "https://api.github.com/users/Blumlaut/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Blumlaut/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Blumlaut/subscriptions",
"organizations_url": "https://api.github.com/users/Blumlaut/orgs",
"repos_url": "https://api.github.com/users/Blumlaut/repos",
"events_url": "https://api.github.com/users/Blumlaut/events{/privacy}",
"received_events_url": "https://api.github.com/users/Blumlaut/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-10-21T12:26:22
| 2024-10-21T22:26:08
| 2024-10-21T22:26:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Running Ollama on Debian Bookworm with NVIDIA CUDA drivers installed (1x 3060, 1x 3060 Ti), after upgrading to v0.4.0-rc3 (0.3.14 works fine!) I can no longer load any models due to the following error:
```
/tmp/ollama1623163346/runners/cuda_v12/ollama_llama_server: error while loading shared libraries: libggml_cuda_v12.so: cannot open shared object file: No such file or directory
```
libggml_cuda_v12.so is present in /usr/local/lib/ollama/:
```
root@NAS:/usr# find . | grep libggml_cuda_v12
./local/lib/ollama/libggml_cuda_v12.so
```
Forcefully copying the libraries to `/tmp/ollama*/runners/cuda_v12/` does make Ollama work for the current run, but surely that is not the intended behaviour.
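A less intrusive stopgap sketch (an assumption based on standard dynamic-linker behaviour, not a verified fix for this release) is to make the install directory that already contains the library visible to the loader before starting the server, instead of copying files into `/tmp`:

```shell
# Hypothetical stopgap (assumption): prepend the directory that actually holds
# libggml_cuda_v12.so to the dynamic linker search path, so the extracted
# runner in /tmp can resolve it without manual copying.
export LD_LIBRARY_PATH="/usr/local/lib/ollama:${LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```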
nvidia-smi:
```
nvidia-smi
Mon Oct 21 14:25:38 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03 Driver Version: 560.35.03 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3060 On | 00000000:01:00.0 Off | N/A |
| 0% 54C P8 17W / 170W | 4MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 3060 Ti On | 00000000:05:00.0 Off | N/A |
| 31% 35C P8 14W / 200W | 4MiB / 8192MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
v0.4.0-rc3
|
{
"login": "Blumlaut",
"id": 13604413,
"node_id": "MDQ6VXNlcjEzNjA0NDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13604413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Blumlaut",
"html_url": "https://github.com/Blumlaut",
"followers_url": "https://api.github.com/users/Blumlaut/followers",
"following_url": "https://api.github.com/users/Blumlaut/following{/other_user}",
"gists_url": "https://api.github.com/users/Blumlaut/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Blumlaut/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Blumlaut/subscriptions",
"organizations_url": "https://api.github.com/users/Blumlaut/orgs",
"repos_url": "https://api.github.com/users/Blumlaut/repos",
"events_url": "https://api.github.com/users/Blumlaut/events{/privacy}",
"received_events_url": "https://api.github.com/users/Blumlaut/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7294/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3582
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3582/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3582/comments
|
https://api.github.com/repos/ollama/ollama/issues/3582/events
|
https://github.com/ollama/ollama/issues/3582
| 2,236,315,021
|
I_kwDOJ0Z1Ps6FS3WN
| 3,582
|
Add Tokenize and Detokenize Endpoints to Ollama Server
|
{
"login": "ParisNeo",
"id": 827993,
"node_id": "MDQ6VXNlcjgyNzk5Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/827993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParisNeo",
"html_url": "https://github.com/ParisNeo",
"followers_url": "https://api.github.com/users/ParisNeo/followers",
"following_url": "https://api.github.com/users/ParisNeo/following{/other_user}",
"gists_url": "https://api.github.com/users/ParisNeo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParisNeo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParisNeo/subscriptions",
"organizations_url": "https://api.github.com/users/ParisNeo/orgs",
"repos_url": "https://api.github.com/users/ParisNeo/repos",
"events_url": "https://api.github.com/users/ParisNeo/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParisNeo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-04-10T19:48:04
| 2024-12-08T07:04:52
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I would like to propose the addition of tokenize and detokenize endpoints to the Ollama server. This feature is crucial for Ollama client interfaces (such as lollms) to effectively prepare prompts and accurately estimate token counts for the LLMs. Currently, the client uses tiktoken for tokenization, which is not optimal since the token distribution depends on the model. While this can work with ChatGPT-compatible models, it may miscount tokens, leading to suboptimal token budgeting and, in some cases, errors when the number of requested tokens exceeds the context capacity of the LLM.
### How should we solve this?
Introduce two new endpoints, one for tokenization and another for detokenization, to the Ollama server:
Tokenize Endpoint:
- Input: Raw text, model name
- Output: List of tokens
Detokenize Endpoint:
- Input: List of tokens, model name
- Output: Raw text
These endpoints should return the correct tokens or text for the model currently in use.
The tokenize endpoint should provide accurate token counting tailored to the specific LLM being used. This will ensure optimal token budgeting and help avoid errors caused by exceeding the context capacity of the LLM.
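To make the proposal concrete, here is a minimal sketch of what client requests could look like. The endpoint paths, field names, and token ids below are illustrative assumptions for this proposal, not an existing Ollama API:

```shell
# Illustrative request payloads for the *proposed* endpoints; paths, field
# names, and token ids are hypothetical, not an existing Ollama API.
tokenize_req='{"model": "llama2", "text": "Hello world"}'
detokenize_req='{"model": "llama2", "tokens": [15043, 3186]}'
# A client would POST these, e.g.:
#   curl http://localhost:11434/api/tokenize -d "$tokenize_req"
#   curl http://localhost:11434/api/detokenize -d "$detokenize_req"
echo "$tokenize_req"
echo "$detokenize_req"
```

The key property is that detokenize(tokenize(text)) round-trips to the original text for the same model.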
### What is the impact of not solving this?
Without these endpoints, users might have to continue relying on inefficient or suboptimal solutions for tokenizing and detokenizing text data.
### Anything else?
Include documentation and examples demonstrating how to use these new functionalities effectively. Providing comprehensive guidance will help users quickly adopt these features and enhance the overall user experience.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3582/reactions",
"total_count": 71,
"+1": 71,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3582/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/919
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/919/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/919/comments
|
https://api.github.com/repos/ollama/ollama/issues/919/events
|
https://github.com/ollama/ollama/issues/919
| 1,964,173,534
|
I_kwDOJ0Z1Ps51Euje
| 919
|
Congrats on being top open source (per InfoWorld)
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-26T18:51:31
| 2023-10-26T18:51:37
| 2023-10-26T18:51:37
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://www.infoworld.com/article/3709196/the-best-open-source-software-of-2023.html
Congrats on being mentioned here! That's pretty cool.
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/919/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/919/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4858
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4858/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4858/comments
|
https://api.github.com/repos/ollama/ollama/issues/4858/events
|
https://github.com/ollama/ollama/issues/4858
| 2,338,461,429
|
I_kwDOJ0Z1Ps6LYhb1
| 4,858
|
Could GLM-4-9B-Chat be supported?
|
{
"login": "Forevery1",
"id": 19872771,
"node_id": "MDQ6VXNlcjE5ODcyNzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19872771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Forevery1",
"html_url": "https://github.com/Forevery1",
"followers_url": "https://api.github.com/users/Forevery1/followers",
"following_url": "https://api.github.com/users/Forevery1/following{/other_user}",
"gists_url": "https://api.github.com/users/Forevery1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Forevery1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Forevery1/subscriptions",
"organizations_url": "https://api.github.com/users/Forevery1/orgs",
"repos_url": "https://api.github.com/users/Forevery1/repos",
"events_url": "https://api.github.com/users/Forevery1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Forevery1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-06T14:52:18
| 2024-06-06T17:34:02
| 2024-06-06T17:34:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hoping for support for GLM-4-9B-Chat.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4858/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4858/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/202
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/202/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/202/comments
|
https://api.github.com/repos/ollama/ollama/issues/202/events
|
https://github.com/ollama/ollama/pull/202
| 1,819,197,150
|
PR_kwDOJ0Z1Ps5WReT1
| 202
|
better error message when model not found on pull
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-24T21:49:03
| 2023-08-16T17:46:46
| 2023-07-25T14:30:48
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/202",
"html_url": "https://github.com/ollama/ollama/pull/202",
"diff_url": "https://github.com/ollama/ollama/pull/202.diff",
"patch_url": "https://github.com/ollama/ollama/pull/202.patch",
"merged_at": "2023-07-25T14:30:48"
}
|
```
ollama run orca-dne
pulling manifest
Error: pull model manifest: model not found
```
resolves #180
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/202/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1324
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1324/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1324/comments
|
https://api.github.com/repos/ollama/ollama/issues/1324/events
|
https://github.com/ollama/ollama/issues/1324
| 2,017,797,182
|
I_kwDOJ0Z1Ps54RSQ-
| 1,324
|
Pulling a model shows a 99+ download time towards the end of the download
|
{
"login": "ahmetkca",
"id": 74574469,
"node_id": "MDQ6VXNlcjc0NTc0NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/74574469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmetkca",
"html_url": "https://github.com/ahmetkca",
"followers_url": "https://api.github.com/users/ahmetkca/followers",
"following_url": "https://api.github.com/users/ahmetkca/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmetkca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmetkca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmetkca/subscriptions",
"organizations_url": "https://api.github.com/users/ahmetkca/orgs",
"repos_url": "https://api.github.com/users/ahmetkca/repos",
"events_url": "https://api.github.com/users/ahmetkca/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmetkca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-11-30T03:18:48
| 2024-01-08T02:59:35
| 2024-01-08T02:59:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Here is the output from journalctl:
```
Nov 29 22:11:04 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:11:04 llama.go:262: less than 2 GB VRAM available
Nov 29 22:11:04 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:11:04 routes.go:797: not enough VRAM available, falling back to CPU only
Nov 29 22:12:27 ahmetkca-Ubuntu-23.10 ollama[23387]: [GIN] 2023/11/29 - 22:12:27 | 200 | 30.563µs | 127.0.0.1 | HEAD "/"
Nov 29 22:12:27 ahmetkca-Ubuntu-23.10 ollama[23387]: [GIN] 2023/11/29 - 22:12:27 | 200 | 125.804µs | 127.0.0.1 | GET "/api/tags"
Nov 29 22:12:39 ahmetkca-Ubuntu-23.10 ollama[23387]: [GIN] 2023/11/29 - 22:12:39 | 200 | 16.17µs | 127.0.0.1 | HEAD "/"
Nov 29 22:12:41 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:12:41 download.go:123: downloading 6ae280299950 in 42 100 MB part(s)
Nov 29 22:14:17 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:14:17 download.go:162: 6ae280299950 part 1 attempt 0 failed: unexpected EOF, retrying in 1s
Nov 29 22:14:20 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:14:20 download.go:162: 6ae280299950 part 12 attempt 0 failed: unexpected EOF, retrying in 1s
Nov 29 22:14:23 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:14:23 download.go:162: 6ae280299950 part 33 attempt 0 failed: unexpected EOF, retrying in 1s
Nov 29 22:14:24 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:14:24 download.go:162: 6ae280299950 part 31 attempt 0 failed: unexpected EOF, retrying in 1s
Nov 29 22:14:30 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:14:30 download.go:162: 6ae280299950 part 29 attempt 0 failed: unexpected EOF, retrying in 1s
Nov 29 22:14:34 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:14:34 download.go:162: 6ae280299950 part 11 attempt 0 failed: unexpected EOF, retrying in 1s
Nov 29 22:14:35 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:14:35 download.go:162: 6ae280299950 part 23 attempt 0 failed: unexpected EOF, retrying in 1s
Nov 29 22:14:36 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:14:36 download.go:162: 6ae280299950 part 15 attempt 0 failed: unexpected EOF, retrying in 1s
Nov 29 22:15:20 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:15:20 download.go:162: 6ae280299950 part 0 attempt 0 failed: unexpected EOF, retrying in 1s
Nov 29 22:15:24 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:15:24 download.go:123: downloading 22e1b2e8dc2f in 1 43 B part(s)
Nov 29 22:15:27 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:15:27 download.go:123: downloading e35ab70a78c7 in 1 90 B part(s)
Nov 29 22:15:34 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:15:34 download.go:123: downloading 1cb90d66f4d4 in 1 381 B part(s)
Nov 29 22:15:44 ahmetkca-Ubuntu-23.10 ollama[23387]: [GIN] 2023/11/29 - 22:15:44 | 200 | 3m4s | 127.0.0.1 | POST "/api/pull"
Nov 29 22:16:52 ahmetkca-Ubuntu-23.10 ollama[23387]: [GIN] 2023/11/29 - 22:16:52 | 200 | 14.49µs | 127.0.0.1 | HEAD "/"
Nov 29 22:16:52 ahmetkca-Ubuntu-23.10 ollama[23387]: [GIN] 2023/11/29 - 22:16:52 | 200 | 244.403µs | 127.0.0.1 | GET "/api/tags"
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1324/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3635
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3635/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3635/comments
|
https://api.github.com/repos/ollama/ollama/issues/3635/events
|
https://github.com/ollama/ollama/issues/3635
| 2,241,835,021
|
I_kwDOJ0Z1Ps6Fn7AN
| 3,635
|
jetmoe-8b
|
{
"login": "Axenide",
"id": 66109459,
"node_id": "MDQ6VXNlcjY2MTA5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/66109459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Axenide",
"html_url": "https://github.com/Axenide",
"followers_url": "https://api.github.com/users/Axenide/followers",
"following_url": "https://api.github.com/users/Axenide/following{/other_user}",
"gists_url": "https://api.github.com/users/Axenide/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Axenide/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Axenide/subscriptions",
"organizations_url": "https://api.github.com/users/Axenide/orgs",
"repos_url": "https://api.github.com/users/Axenide/repos",
"events_url": "https://api.github.com/users/Axenide/events{/privacy}",
"received_events_url": "https://api.github.com/users/Axenide/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 1
| 2024-04-14T00:24:26
| 2024-04-16T07:28:11
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
JetMoE is a Mixture of Experts model that reaches Llama 2 performance while having only 2.2B active parameters. I think this has a lot of potential for low-end devices, and it would be good to have it in the Ollama library.
https://huggingface.co/jetmoe/jetmoe-8b
https://huggingface.co/jetmoe/jetmoe-8b-chat
https://huggingface.co/jetmoe/jetmoe-8b-sft
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3635/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3635/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4966
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4966/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4966/comments
|
https://api.github.com/repos/ollama/ollama/issues/4966/events
|
https://github.com/ollama/ollama/issues/4966
| 2,344,556,162
|
I_kwDOJ0Z1Ps6LvxaC
| 4,966
|
Llama 3 70B 16-bit precision
|
{
"login": "Aekansh-Ak",
"id": 64459173,
"node_id": "MDQ6VXNlcjY0NDU5MTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/64459173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aekansh-Ak",
"html_url": "https://github.com/Aekansh-Ak",
"followers_url": "https://api.github.com/users/Aekansh-Ak/followers",
"following_url": "https://api.github.com/users/Aekansh-Ak/following{/other_user}",
"gists_url": "https://api.github.com/users/Aekansh-Ak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aekansh-Ak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aekansh-Ak/subscriptions",
"organizations_url": "https://api.github.com/users/Aekansh-Ak/orgs",
"repos_url": "https://api.github.com/users/Aekansh-Ak/repos",
"events_url": "https://api.github.com/users/Aekansh-Ak/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aekansh-Ak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-06-10T18:19:56
| 2024-06-12T07:01:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
As per the Ollama documentation, it supports Llama 3 70B at 4-bit precision.
I was wondering if and how I can use a 16-bit or 32-bit precision model.
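One way this might work (a sketch, not confirmed against the Ollama docs for this model) is to build a model locally from a full-precision GGUF file using a Modelfile; the file name below is a placeholder, not a real artifact:

```
# Hypothetical Modelfile: the GGUF path is an assumption for illustration
FROM ./Meta-Llama-3-70B-Instruct-fp16.gguf
```

Running `ollama create llama3-70b-fp16 -f Modelfile` would then register it locally, assuming the host has enough memory for an fp16 70B model.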
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4966/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5358
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5358/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5358/comments
|
https://api.github.com/repos/ollama/ollama/issues/5358/events
|
https://github.com/ollama/ollama/issues/5358
| 2,380,246,632
|
I_kwDOJ0Z1Ps6N365o
| 5,358
|
LLM Compiler Models
|
{
"login": "pmatos",
"id": 7911,
"node_id": "MDQ6VXNlcjc5MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7911?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pmatos",
"html_url": "https://github.com/pmatos",
"followers_url": "https://api.github.com/users/pmatos/followers",
"following_url": "https://api.github.com/users/pmatos/following{/other_user}",
"gists_url": "https://api.github.com/users/pmatos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pmatos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pmatos/subscriptions",
"organizations_url": "https://api.github.com/users/pmatos/orgs",
"repos_url": "https://api.github.com/users/pmatos/repos",
"events_url": "https://api.github.com/users/pmatos/events{/privacy}",
"received_events_url": "https://api.github.com/users/pmatos/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-28T11:47:47
| 2024-11-06T12:23:30
| 2024-11-06T12:23:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How can I use the newly released models from Meta with Ollama?
https://huggingface.co/collections/facebook/llm-compiler-667c5b05557fe99a9edd25cb
Thanks.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5358/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/618
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/618/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/618/comments
|
https://api.github.com/repos/ollama/ollama/issues/618/events
|
https://github.com/ollama/ollama/issues/618
| 1,914,743,146
|
I_kwDOJ0Z1Ps5yIKlq
| 618
|
Trying to load too many layers, VRAM OOM, reverts to CPU only
|
{
"login": "aaroncoffey",
"id": 3649791,
"node_id": "MDQ6VXNlcjM2NDk3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3649791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaroncoffey",
"html_url": "https://github.com/aaroncoffey",
"followers_url": "https://api.github.com/users/aaroncoffey/followers",
"following_url": "https://api.github.com/users/aaroncoffey/following{/other_user}",
"gists_url": "https://api.github.com/users/aaroncoffey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaroncoffey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaroncoffey/subscriptions",
"organizations_url": "https://api.github.com/users/aaroncoffey/orgs",
"repos_url": "https://api.github.com/users/aaroncoffey/repos",
"events_url": "https://api.github.com/users/aaroncoffey/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaroncoffey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2023-09-27T05:47:25
| 2023-12-16T21:47:56
| 2023-12-04T19:54:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi there,
Based on the logs, it appears that Ollama is trying to load too many layers and crashing OOM; this causes it to revert to CPU-only mode, which is not desirable.
Logs:
```
2023/09/26 21:40:42 llama.go:310: starting llama runner
2023/09/26 21:40:42 llama.go:346: waiting for llama runner to start responding
ggml_init_cublas: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6
Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6
{"timestamp":1695789642,"level":"INFO","function":"main","line":1190,"message":"build info","build":1009,"commit":"9e232f0"}
{"timestamp":1695789642,"level":"INFO","function":"main","line":1192,"message":"system info","n_threads":6,"total_threads":12,"system_info":"AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 | "}
llama.cpp: loading model from /home/user/.ollama/models/blobs/sha256:476d7ab8503b020bfee1e3c63403690f48422bb29c988ae74647c0c81b99e2a4
llama_model_load_internal: warning: assuming 70B model based on GQA == 8
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32001
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 8192
llama_model_load_internal: n_mult = 7168
llama_model_load_internal: n_head = 64
llama_model_load_internal: n_head_kv = 8
llama_model_load_internal: n_layer = 80
llama_model_load_internal: n_rot = 128
llama_model_load_internal: n_gqa = 8
llama_model_load_internal: rnorm_eps = 5.0e-06
llama_model_load_internal: n_ff = 28672
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 10 (mostly Q2_K)
llama_model_load_internal: model size = 70B
llama_model_load_internal: ggml ctx size = 0.21 MB
llama_model_load_internal: using CUDA for GPU acceleration
ggml_cuda_set_main_device: using device 0 (NVIDIA GeForce RTX 3060) as main device
llama_model_load_internal: mem required = 4459.58 MB (+ 640.00 MB per state)
llama_model_load_internal: allocating batch_size x (1280 kB + n_ctx x 256 B) = 896 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 71 repeating layers to GPU
llama_model_load_internal: offloaded 71/83 layers to GPU
llama_model_load_internal: total VRAM used: 24837 MB
CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml/ggml-cuda.cu:6184: out of memory
2023/09/26 21:41:02 llama.go:320: llama runner exited with error: exit status 1
2023/09/26 21:41:02 llama.go:327: error starting llama runner: llama runner process has terminated
2023/09/26 21:41:02 llama.go:310: starting llama runner
2023/09/26 21:41:02 llama.go:346: waiting for llama runner to start responding
{"timestamp":1695789662,"level":"WARNING","function":"server_params_parse","line":845,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":0}
{"timestamp":1695789662,"level":"INFO","function":"main","line":1190,"message":"build info","build":1009,"commit":"9e232f0"}
{"timestamp":1695789662,"level":"INFO","function":"main","line":1192,"message":"system info","n_threads":6,"total_threads":12,"system_info":"AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | "}
llama.cpp: loading model from /home/user/.ollama/models/blobs/sha256:476d7ab8503b020bfee1e3c63403690f48422bb29c988ae74647c0c81b99e2a4
llama_model_load_internal: warning: assuming 70B model based on GQA == 8
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32001
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 8192
llama_model_load_internal: n_mult = 7168
llama_model_load_internal: n_head = 64
llama_model_load_internal: n_head_kv = 8
llama_model_load_internal: n_layer = 80
llama_model_load_internal: n_rot = 128
llama_model_load_internal: n_gqa = 8
llama_model_load_internal: rnorm_eps = 5.0e-06
llama_model_load_internal: n_ff = 28672
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 10 (mostly Q2_K)
llama_model_load_internal: model size = 70B
llama_model_load_internal: ggml ctx size = 0.21 MB
llama_model_load_internal: mem required = 27615.90 MB (+ 640.00 MB per state)
llama_new_context_with_model: kv self size = 640.00 MB
llama_new_context_with_model: compute buffer total size = 305.35 MB
llama server listening at http://127.0.0.1:49467
```
Exposing some model card options to define how much VRAM to use from each video card, or even a percentage split, would be helpful.
In my experience with oobabooga, I've found that the proper number of layers to offload varies by model. But with careful tuning, I can get each video card nearly maxed out.
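A partial lever that could express this tuning (hedged — parameter coverage may vary by Ollama version) is the `num_gpu` model parameter, which caps the number of layers offloaded to the GPUs; the tag and layer count below are illustrative assumptions:

```
# Hypothetical Modelfile: limit GPU offload to 40 layers instead of letting
# the runtime pick a count that overflows VRAM
FROM llama2:70b-chat-q2_K
PARAMETER num_gpu 40
```

This caps total offload but does not split VRAM per card, so per-device limits would still need a new option.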
Thanks!
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/618/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2944
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2944/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2944/comments
|
https://api.github.com/repos/ollama/ollama/issues/2944/events
|
https://github.com/ollama/ollama/issues/2944
| 2,170,443,201
|
I_kwDOJ0Z1Ps6BXlXB
| 2,944
|
Add ENVIRONMENT section to CLI usage
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-06T01:19:30
| 2024-03-08T05:35:39
| 2024-03-07T21:57:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This issue is to track the addition of a help section for configuring the Ollama CLI with environment variables.
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2944/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2816
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2816/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2816/comments
|
https://api.github.com/repos/ollama/ollama/issues/2816/events
|
https://github.com/ollama/ollama/issues/2816
| 2,159,685,731
|
I_kwDOJ0Z1Ps6AujBj
| 2,816
|
Ubuntu install not ending
|
{
"login": "Fastidious",
"id": 8352292,
"node_id": "MDQ6VXNlcjgzNTIyOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8352292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fastidious",
"html_url": "https://github.com/Fastidious",
"followers_url": "https://api.github.com/users/Fastidious/followers",
"following_url": "https://api.github.com/users/Fastidious/following{/other_user}",
"gists_url": "https://api.github.com/users/Fastidious/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fastidious/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fastidious/subscriptions",
"organizations_url": "https://api.github.com/users/Fastidious/orgs",
"repos_url": "https://api.github.com/users/Fastidious/repos",
"events_url": "https://api.github.com/users/Fastidious/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fastidious/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-02-28T19:24:35
| 2024-07-15T23:34:43
| 2024-03-20T16:22:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Trying to install ollama on Ubuntu 23.04, it gets stuck like this:
```bash
>>> Downloading ollama...
######################################################################## 100.0%##O#-#
>>> Installing ollama to /usr/local/bin...
>>> Adding ollama user to render group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Network interfaces
```
What could it be?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2816/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2952
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2952/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2952/comments
|
https://api.github.com/repos/ollama/ollama/issues/2952/events
|
https://github.com/ollama/ollama/issues/2952
| 2,171,462,793
|
I_kwDOJ0Z1Ps6BbeSJ
| 2,952
|
Windows CUDA OOM running llama2 on dual RTX 2070
|
{
"login": "iamtechysandy",
"id": 65868620,
"node_id": "MDQ6VXNlcjY1ODY4NjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/65868620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamtechysandy",
"html_url": "https://github.com/iamtechysandy",
"followers_url": "https://api.github.com/users/iamtechysandy/followers",
"following_url": "https://api.github.com/users/iamtechysandy/following{/other_user}",
"gists_url": "https://api.github.com/users/iamtechysandy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamtechysandy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamtechysandy/subscriptions",
"organizations_url": "https://api.github.com/users/iamtechysandy/orgs",
"repos_url": "https://api.github.com/users/iamtechysandy/repos",
"events_url": "https://api.github.com/users/iamtechysandy/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamtechysandy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-03-06T13:03:13
| 2024-03-12T07:25:01
| 2024-03-12T07:25:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
C:\Users\admin>ollama run llama2
Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:52764->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
I am getting this error while running the command above.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2952/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1615
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1615/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1615/comments
|
https://api.github.com/repos/ollama/ollama/issues/1615/events
|
https://github.com/ollama/ollama/issues/1615
| 2,049,390,297
|
I_kwDOJ0Z1Ps56JzbZ
| 1,615
|
0.1.17: inconsistent vendoring in /build/source
|
{
"login": "quag",
"id": 35086,
"node_id": "MDQ6VXNlcjM1MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/35086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quag",
"html_url": "https://github.com/quag",
"followers_url": "https://api.github.com/users/quag/followers",
"following_url": "https://api.github.com/users/quag/following{/other_user}",
"gists_url": "https://api.github.com/users/quag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quag/subscriptions",
"organizations_url": "https://api.github.com/users/quag/orgs",
"repos_url": "https://api.github.com/users/quag/repos",
"events_url": "https://api.github.com/users/quag/events{/privacy}",
"received_events_url": "https://api.github.com/users/quag/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-19T20:31:15
| 2023-12-21T20:04:35
| 2023-12-21T20:04:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm building ollama on NixOS. 0.1.16 worked fine, but bumping to use 0.1.17 fails with this error:
```
go: inconsistent vendoring in /build/source:
github.com/stretchr/testify@v1.8.3: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
github.com/davecgh/go-spew@v1.1.1: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
github.com/pmezard/go-difflib@v1.0.0: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
To ignore the vendor directory, use -mod=readonly or -mod=mod.
To sync the vendor directory, run:
go mod vendor
```
Hopefully this is just something I've messed up, rather than an issue with the 0.1.17 release.
|
{
"login": "quag",
"id": 35086,
"node_id": "MDQ6VXNlcjM1MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/35086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quag",
"html_url": "https://github.com/quag",
"followers_url": "https://api.github.com/users/quag/followers",
"following_url": "https://api.github.com/users/quag/following{/other_user}",
"gists_url": "https://api.github.com/users/quag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quag/subscriptions",
"organizations_url": "https://api.github.com/users/quag/orgs",
"repos_url": "https://api.github.com/users/quag/repos",
"events_url": "https://api.github.com/users/quag/events{/privacy}",
"received_events_url": "https://api.github.com/users/quag/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1615/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2544
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2544/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2544/comments
|
https://api.github.com/repos/ollama/ollama/issues/2544/events
|
https://github.com/ollama/ollama/issues/2544
| 2,139,029,534
|
I_kwDOJ0Z1Ps5_fwAe
| 2,544
|
API enhancement - create endpoint to fetch hosted models
|
{
"login": "aroffe99",
"id": 22308552,
"node_id": "MDQ6VXNlcjIyMzA4NTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/22308552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aroffe99",
"html_url": "https://github.com/aroffe99",
"followers_url": "https://api.github.com/users/aroffe99/followers",
"following_url": "https://api.github.com/users/aroffe99/following{/other_user}",
"gists_url": "https://api.github.com/users/aroffe99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aroffe99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aroffe99/subscriptions",
"organizations_url": "https://api.github.com/users/aroffe99/orgs",
"repos_url": "https://api.github.com/users/aroffe99/repos",
"events_url": "https://api.github.com/users/aroffe99/events{/privacy}",
"received_events_url": "https://api.github.com/users/aroffe99/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-02-16T17:11:27
| 2024-05-10T21:37:08
| 2024-05-10T21:37:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'd like to create a UI where users can click any model listed in https://ollama.com/library and have that model pulled in the background. Right now I provide a free-text field for users to enter the model and tag, but `model:tag` can get pretty long and error-prone for users to type.
Alternatively, instead of a new API endpoint for this information, add a boolean parameter `remote` to /api/tags to show remote, downloadable models.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2544/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2544/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1786
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1786/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1786/comments
|
https://api.github.com/repos/ollama/ollama/issues/1786/events
|
https://github.com/ollama/ollama/pull/1786
| 2,066,084,789
|
PR_kwDOJ0Z1Ps5jPxbK
| 1,786
|
add faq about quant and context
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-04T17:46:13
| 2024-02-20T03:17:13
| 2024-02-20T03:17:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1786",
"html_url": "https://github.com/ollama/ollama/pull/1786",
"diff_url": "https://github.com/ollama/ollama/pull/1786.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1786.patch",
"merged_at": null
}
|
This adds a short faq to describe quantization and context.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1786/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2194
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2194/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2194/comments
|
https://api.github.com/repos/ollama/ollama/issues/2194/events
|
https://github.com/ollama/ollama/issues/2194
| 2,101,230,295
|
I_kwDOJ0Z1Ps59PjrX
| 2,194
|
Change the default 11434 port?
|
{
"login": "CHesketh76",
"id": 38713764,
"node_id": "MDQ6VXNlcjM4NzEzNzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/38713764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CHesketh76",
"html_url": "https://github.com/CHesketh76",
"followers_url": "https://api.github.com/users/CHesketh76/followers",
"following_url": "https://api.github.com/users/CHesketh76/following{/other_user}",
"gists_url": "https://api.github.com/users/CHesketh76/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CHesketh76/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CHesketh76/subscriptions",
"organizations_url": "https://api.github.com/users/CHesketh76/orgs",
"repos_url": "https://api.github.com/users/CHesketh76/repos",
"events_url": "https://api.github.com/users/CHesketh76/events{/privacy}",
"received_events_url": "https://api.github.com/users/CHesketh76/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 25
| 2024-01-25T21:54:45
| 2025-01-25T12:52:28
| 2024-01-25T23:17:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am getting this error message ```Error: listen tcp 127.0.0.1:11434: bind: address already in use``` every time I run ```ollama serve```. Would it be possible to have the option to change the port?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2194/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7963
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7963/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7963/comments
|
https://api.github.com/repos/ollama/ollama/issues/7963/events
|
https://github.com/ollama/ollama/pull/7963
| 2,722,061,023
|
PR_kwDOJ0Z1Ps6ERd7P
| 7,963
|
openai: finish streaming tool calls as tool_calls
|
{
"login": "anuraaga",
"id": 198344,
"node_id": "MDQ6VXNlcjE5ODM0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/198344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anuraaga",
"html_url": "https://github.com/anuraaga",
"followers_url": "https://api.github.com/users/anuraaga/followers",
"following_url": "https://api.github.com/users/anuraaga/following{/other_user}",
"gists_url": "https://api.github.com/users/anuraaga/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anuraaga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anuraaga/subscriptions",
"organizations_url": "https://api.github.com/users/anuraaga/orgs",
"repos_url": "https://api.github.com/users/anuraaga/repos",
"events_url": "https://api.github.com/users/anuraaga/events{/privacy}",
"received_events_url": "https://api.github.com/users/anuraaga/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 8
| 2024-12-06T04:52:25
| 2025-01-26T09:00:15
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7963",
"html_url": "https://github.com/ollama/ollama/pull/7963",
"diff_url": "https://github.com/ollama/ollama/pull/7963.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7963.patch",
"merged_at": null
}
|
When a response contains tool_calls, it finishes the chat, and we already see this happening in Ollama in non-chunk mode. This ensures that the chunk with tool calls carries the finish reason rather than a following one, and that any following chunks are not sent, since their choice with empty content would conflict with the tool call response. This follows how the OpenAI API behaves, as far as I can tell.
/cc @codefromthecrypt
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7963/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7963/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2459
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2459/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2459/comments
|
https://api.github.com/repos/ollama/ollama/issues/2459/events
|
https://github.com/ollama/ollama/pull/2459
| 2,129,360,739
|
PR_kwDOJ0Z1Ps5mmB7W
| 2,459
|
Always add token to cache_tokens
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-12T03:29:38
| 2024-02-12T16:10:17
| 2024-02-12T16:10:16
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2459",
"html_url": "https://github.com/ollama/ollama/pull/2459",
"diff_url": "https://github.com/ollama/ollama/pull/2459.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2459.patch",
"merged_at": "2024-02-12T16:10:16"
}
|
The diff is a bit hard to read, but this is the actual fix for our `01` patch, addressing failures caused by the kv cache being full.
I believe this fixes #2339 and #1458.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2459/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2459/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1383
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1383/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1383/comments
|
https://api.github.com/repos/ollama/ollama/issues/1383/events
|
https://github.com/ollama/ollama/pull/1383
| 2,025,051,507
|
PR_kwDOJ0Z1Ps5hHfZX
| 1,383
|
revert cli to use /api/generate
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-05T00:27:13
| 2023-12-05T00:35:31
| 2023-12-05T00:35:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1383",
"html_url": "https://github.com/ollama/ollama/pull/1383",
"diff_url": "https://github.com/ollama/ollama/pull/1383.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1383.patch",
"merged_at": "2023-12-05T00:35:30"
}
|
This change reverts the CLI to use `/api/generate` instead of `/api/chat`.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1383/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8528
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8528/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8528/comments
|
https://api.github.com/repos/ollama/ollama/issues/8528/events
|
https://github.com/ollama/ollama/issues/8528
| 2,803,290,989
|
I_kwDOJ0Z1Ps6nFtNt
| 8,528
|
don't show the thinking process
|
{
"login": "sunburst-yz",
"id": 37734140,
"node_id": "MDQ6VXNlcjM3NzM0MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/37734140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunburst-yz",
"html_url": "https://github.com/sunburst-yz",
"followers_url": "https://api.github.com/users/sunburst-yz/followers",
"following_url": "https://api.github.com/users/sunburst-yz/following{/other_user}",
"gists_url": "https://api.github.com/users/sunburst-yz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunburst-yz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunburst-yz/subscriptions",
"organizations_url": "https://api.github.com/users/sunburst-yz/orgs",
"repos_url": "https://api.github.com/users/sunburst-yz/repos",
"events_url": "https://api.github.com/users/sunburst-yz/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunburst-yz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 3
| 2025-01-22T03:40:56
| 2025-01-26T18:33:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I use DeepSeek-R1, the thinking process shown does not make sense to me; I only want to see the final result.

| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8528/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8528/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2804
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2804/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2804/comments
|
https://api.github.com/repos/ollama/ollama/issues/2804/events
|
https://github.com/ollama/ollama/issues/2804
| 2,158,388,581
|
I_kwDOJ0Z1Ps6ApmVl
| 2,804
|
Feature: Mistral Next
|
{
"login": "Dimfred",
"id": 29997904,
"node_id": "MDQ6VXNlcjI5OTk3OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/29997904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dimfred",
"html_url": "https://github.com/Dimfred",
"followers_url": "https://api.github.com/users/Dimfred/followers",
"following_url": "https://api.github.com/users/Dimfred/following{/other_user}",
"gists_url": "https://api.github.com/users/Dimfred/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dimfred/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dimfred/subscriptions",
"organizations_url": "https://api.github.com/users/Dimfred/orgs",
"repos_url": "https://api.github.com/users/Dimfred/repos",
"events_url": "https://api.github.com/users/Dimfred/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dimfred/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-28T08:28:54
| 2024-02-29T14:14:03
| 2024-02-29T14:14:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Mistral has recently released their new `mistral-next` model.
I am not sure whether this is the place to ask for model requests, but it would be great to get that integrated.
Thank you for all the work you have done so far!
https://www.reddit.com/r/LocalLLaMA/comments/1as15p1/new_mistralnext_model_at_httpschatlmsysorg/
|
{
"login": "Dimfred",
"id": 29997904,
"node_id": "MDQ6VXNlcjI5OTk3OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/29997904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dimfred",
"html_url": "https://github.com/Dimfred",
"followers_url": "https://api.github.com/users/Dimfred/followers",
"following_url": "https://api.github.com/users/Dimfred/following{/other_user}",
"gists_url": "https://api.github.com/users/Dimfred/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dimfred/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dimfred/subscriptions",
"organizations_url": "https://api.github.com/users/Dimfred/orgs",
"repos_url": "https://api.github.com/users/Dimfred/repos",
"events_url": "https://api.github.com/users/Dimfred/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dimfred/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2804/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2804/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8625
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8625/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8625/comments
|
https://api.github.com/repos/ollama/ollama/issues/8625/events
|
https://github.com/ollama/ollama/issues/8625
| 2,814,698,214
|
I_kwDOJ0Z1Ps6nxOLm
| 8,625
|
Individual quantized model download count
|
{
"login": "Abubakkar13",
"id": 45032674,
"node_id": "MDQ6VXNlcjQ1MDMyNjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/45032674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abubakkar13",
"html_url": "https://github.com/Abubakkar13",
"followers_url": "https://api.github.com/users/Abubakkar13/followers",
"following_url": "https://api.github.com/users/Abubakkar13/following{/other_user}",
"gists_url": "https://api.github.com/users/Abubakkar13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abubakkar13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abubakkar13/subscriptions",
"organizations_url": "https://api.github.com/users/Abubakkar13/orgs",
"repos_url": "https://api.github.com/users/Abubakkar13/repos",
"events_url": "https://api.github.com/users/Abubakkar13/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abubakkar13/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-28T05:52:07
| 2025-01-28T17:13:03
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey,
I have been exploring the models on the site. It would be great to have a total download count for each quantized version (e.g., q8_0, q4_K_M) to show how many times it has been downloaded. This would help users gauge the popularity and reliability of different quantizations and make it easier to choose the best one. Thank you!

| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8625/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3894
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3894/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3894/comments
|
https://api.github.com/repos/ollama/ollama/issues/3894/events
|
https://github.com/ollama/ollama/issues/3894
| 2,262,363,816
|
I_kwDOJ0Z1Ps6G2O6o
| 3,894
|
I have tested 4-5 phi-3-128K-Instruct models from different providers with different quants, all GGUF files, none are runnable with ollama
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 5
| 2024-04-24T23:47:13
| 2024-05-01T22:03:54
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama can import them, but not run them.
(Pythogora) developer@ai:~/PROJECTS/autogen$ ~/ollama/ollama run phi-3-mini-128k-instruct.Q6_K
Error: llama runner process no longer running: 1 error:failed to create context with model '/home/developer/.ollama/models/blobs/sha256-78f928e77e2470c7c09b151ff978bc348ba18ccde0991d03fe34f16fb9471460'
This is how the Modelfile looks:
FROM /opt/data/PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed/Phi-3-mini-128k-instruct.Q6_K.gguf
TEMPLATE """
{{- if .First}}
<|system|>
{{ .System}}<|end|>
{{- end}}
<|user|>
{{ .Prompt}}<|end|>
<|assistant|>
"""
PARAMETER num_ctx 128000
PARAMETER temperature 0.2
PARAMETER num_gpu 100
PARAMETER stop <|end|>
PARAMETER stop <|endoftext|>
SYSTEM """You are a helpful AI which can plan, program, and test, analyze and debug."""
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
latest from source.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3894/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2917
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2917/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2917/comments
|
https://api.github.com/repos/ollama/ollama/issues/2917/events
|
https://github.com/ollama/ollama/pull/2917
| 2,167,272,871
|
PR_kwDOJ0Z1Ps5onPPL
| 2,917
|
Add SemanticFinder to README.md
|
{
"login": "do-me",
"id": 47481567,
"node_id": "MDQ6VXNlcjQ3NDgxNTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/47481567?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/do-me",
"html_url": "https://github.com/do-me",
"followers_url": "https://api.github.com/users/do-me/followers",
"following_url": "https://api.github.com/users/do-me/following{/other_user}",
"gists_url": "https://api.github.com/users/do-me/gists{/gist_id}",
"starred_url": "https://api.github.com/users/do-me/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/do-me/subscriptions",
"organizations_url": "https://api.github.com/users/do-me/orgs",
"repos_url": "https://api.github.com/users/do-me/repos",
"events_url": "https://api.github.com/users/do-me/events{/privacy}",
"received_events_url": "https://api.github.com/users/do-me/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-04T16:38:43
| 2024-11-21T08:45:25
| 2024-11-21T08:45:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2917",
"html_url": "https://github.com/ollama/ollama/pull/2917",
"diff_url": "https://github.com/ollama/ollama/pull/2917.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2917.patch",
"merged_at": null
}
|
[SemanticFinder](https://github.com/do-me/SemanticFinder) is an in-browser tool for semantic search and now offers an Ollama integration to help understand the search results.
Announcement on Ollama [r/ollama/](https://www.reddit.com/r/ollama/comments/1b79c23/inbrowser_rag_feeding_ollama/)
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2917/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2611
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2611/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2611/comments
|
https://api.github.com/repos/ollama/ollama/issues/2611/events
|
https://github.com/ollama/ollama/issues/2611
| 2,144,048,287
|
I_kwDOJ0Z1Ps5_y5Sf
| 2,611
|
Support for moondream?
|
{
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/oliverbob/followers",
"following_url": "https://api.github.com/users/oliverbob/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverbob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverbob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverbob/subscriptions",
"organizations_url": "https://api.github.com/users/oliverbob/orgs",
"repos_url": "https://api.github.com/users/oliverbob/repos",
"events_url": "https://api.github.com/users/oliverbob/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverbob/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-20T10:28:32
| 2024-04-06T12:07:18
| 2024-02-20T18:55:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is there a support for [moondream](https://github.com/vikhyat/moondream)?
It's like a small llava.

|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2611/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3327
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3327/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3327/comments
|
https://api.github.com/repos/ollama/ollama/issues/3327/events
|
https://github.com/ollama/ollama/issues/3327
| 2,204,451,995
|
I_kwDOJ0Z1Ps6DZUSb
| 3,327
|
Module name is out of date and prevents import from other projects
|
{
"login": "smxlong",
"id": 9043733,
"node_id": "MDQ6VXNlcjkwNDM3MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9043733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smxlong",
"html_url": "https://github.com/smxlong",
"followers_url": "https://api.github.com/users/smxlong/followers",
"following_url": "https://api.github.com/users/smxlong/following{/other_user}",
"gists_url": "https://api.github.com/users/smxlong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smxlong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smxlong/subscriptions",
"organizations_url": "https://api.github.com/users/smxlong/orgs",
"repos_url": "https://api.github.com/users/smxlong/repos",
"events_url": "https://api.github.com/users/smxlong/events{/privacy}",
"received_events_url": "https://api.github.com/users/smxlong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-03-24T16:53:08
| 2024-03-26T20:04:18
| 2024-03-26T20:04:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Attempting to import parts of the ollama tree (such as the `api` directory) into an external project fails, because the module name declared in `go.mod` is still `github.com/jmorganca/ollama`. This apparently doesn't impact building the project itself, but causes errors when attempting to import packages into other projects.
### What did you expect to see?
This should have worked:
```
go get github.com/ollama/ollama/api
```
Instead, I get this:
```
$ go get github.com/ollama/ollama/api
go: github.com/ollama/ollama@v0.1.29 (matching github.com/ollama/ollama/api@upgrade) requires github.com/ollama/ollama@v0.1.29: parsing go.mod:
module declares its path as: github.com/jmorganca/ollama
but was required as: github.com/ollama/ollama
```
### Steps to reproduce
Run `go get github.com/ollama/ollama/api` from another project.
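Until the module path in `go.mod` is updated upstream, one possible workaround (a sketch, not an official fix; it assumes the old path still resolves) is to require the repository under its declared path and import it that way:

```
// go.mod of the importing project — workaround sketch
require github.com/jmorganca/ollama v0.1.29
```

The importing code would then use `github.com/jmorganca/ollama/api` until the declared module path matches `github.com/ollama/ollama`.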
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
amd64
### Platform
WSL2
### Ollama version
main branch
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3327/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5955
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5955/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5955/comments
|
https://api.github.com/repos/ollama/ollama/issues/5955/events
|
https://github.com/ollama/ollama/issues/5955
| 2,430,533,623
|
I_kwDOJ0Z1Ps6Q3v_3
| 5,955
|
Model request for Llama Guard 3
|
{
"login": "prane-eth",
"id": 48318416,
"node_id": "MDQ6VXNlcjQ4MzE4NDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/48318416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prane-eth",
"html_url": "https://github.com/prane-eth",
"followers_url": "https://api.github.com/users/prane-eth/followers",
"following_url": "https://api.github.com/users/prane-eth/following{/other_user}",
"gists_url": "https://api.github.com/users/prane-eth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prane-eth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prane-eth/subscriptions",
"organizations_url": "https://api.github.com/users/prane-eth/orgs",
"repos_url": "https://api.github.com/users/prane-eth/repos",
"events_url": "https://api.github.com/users/prane-eth/events{/privacy}",
"received_events_url": "https://api.github.com/users/prane-eth/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-25T16:40:21
| 2024-09-04T18:25:15
| 2024-09-04T18:25:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Llama Guard 3 is a model for AI Safety. It is released along with Llama 3.1.
https://llama.meta.com/docs/model-cards-and-prompt-formats/llama-guard-3/
https://huggingface.co/meta-llama/Llama-Guard-3-8B
https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/MODEL_CARD.md
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5955/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5955/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1475
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1475/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1475/comments
|
https://api.github.com/repos/ollama/ollama/issues/1475/events
|
https://github.com/ollama/ollama/pull/1475
| 2,036,653,462
|
PR_kwDOJ0Z1Ps5hvCGm
| 1,475
|
Add support for mixture of experts (MoE) and Mixtral
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 14
| 2023-12-11T22:38:33
| 2023-12-13T22:15:11
| 2023-12-13T22:15:10
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1475",
"html_url": "https://github.com/ollama/ollama/pull/1475",
"diff_url": "https://github.com/ollama/ollama/pull/1475.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1475.patch",
"merged_at": "2023-12-13T22:15:10"
}
|
To build this branch:
```
go generate ./...
go build .
```
```
./ollama serve
# in another terminal
./ollama run jmorgan/mixtral
```
resolves #1470
resolves #1457
resolves #1502
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1475/reactions",
"total_count": 26,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 22,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1475/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4851
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4851/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4851/comments
|
https://api.github.com/repos/ollama/ollama/issues/4851/events
|
https://github.com/ollama/ollama/issues/4851
| 2,337,811,989
|
I_kwDOJ0Z1Ps6LWC4V
| 4,851
|
Add `strings` module from Go for template processing
|
{
"login": "qbit-",
"id": 4794088,
"node_id": "MDQ6VXNlcjQ3OTQwODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4794088?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qbit-",
"html_url": "https://github.com/qbit-",
"followers_url": "https://api.github.com/users/qbit-/followers",
"following_url": "https://api.github.com/users/qbit-/following{/other_user}",
"gists_url": "https://api.github.com/users/qbit-/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qbit-/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qbit-/subscriptions",
"organizations_url": "https://api.github.com/users/qbit-/orgs",
"repos_url": "https://api.github.com/users/qbit-/repos",
"events_url": "https://api.github.com/users/qbit-/events{/privacy}",
"received_events_url": "https://api.github.com/users/qbit-/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-06T09:38:07
| 2024-06-06T09:38:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, the `TEMPLATE` parameter in the `Modelfile` is a simple Go template. For example, I can print the first 25 characters of the model's response like this:
```go
{{ printf "%.*s" 25 .Response }}
```
However, this built-in processing is too limited for my use case. What I'm trying to do is remove the BOS token from the model's output. The token is `<|begin_of_text|>`, and it is emitted only at the beginning of the dialog, not in every message. One possible solution is to use functions from Go's `strings` package to strip the prefix:
```go
{{ .Response | strings.TrimPrefix "<|begin_of_text|>" }}
```
but this is not supported in ollama today. Could extra helper functions be included for template processing?
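As an illustration of what this could look like, here is a minimal, hypothetical sketch of exposing a `strings`-style helper through Go's `template.FuncMap`. The `trimPrefix` name and argument order are assumptions for this sketch, not Ollama's actual API; note that in a Go template pipeline the piped value becomes the *last* argument, so the helper takes `(prefix, s)`:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

// render executes a template with a hypothetical trimPrefix helper registered.
// Because the piped value is appended as the final argument, the template
// `.Response | trimPrefix "<|begin_of_text|>"` calls trimPrefix(prefix, response).
func render(response string) string {
	funcs := template.FuncMap{
		"trimPrefix": func(prefix, s string) string { return strings.TrimPrefix(s, prefix) },
	}
	tmpl := template.Must(template.New("t").Funcs(funcs).Parse(
		`{{ .Response | trimPrefix "<|begin_of_text|>" }}`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, map[string]string{"Response": response}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(render("<|begin_of_text|>Hello")) // prints "Hello"
}
```

Registering a whole namespace of `strings` functions this way is straightforward; the open question is which subset Ollama would want to expose.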
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4851/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4494
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4494/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4494/comments
|
https://api.github.com/repos/ollama/ollama/issues/4494/events
|
https://github.com/ollama/ollama/issues/4494
| 2,302,141,566
|
I_kwDOJ0Z1Ps6JN-R-
| 4,494
|
How to load a model from local disk path?
|
{
"login": "quzhixue-Kimi",
"id": 8235746,
"node_id": "MDQ6VXNlcjgyMzU3NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8235746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quzhixue-Kimi",
"html_url": "https://github.com/quzhixue-Kimi",
"followers_url": "https://api.github.com/users/quzhixue-Kimi/followers",
"following_url": "https://api.github.com/users/quzhixue-Kimi/following{/other_user}",
"gists_url": "https://api.github.com/users/quzhixue-Kimi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quzhixue-Kimi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quzhixue-Kimi/subscriptions",
"organizations_url": "https://api.github.com/users/quzhixue-Kimi/orgs",
"repos_url": "https://api.github.com/users/quzhixue-Kimi/repos",
"events_url": "https://api.github.com/users/quzhixue-Kimi/events{/privacy}",
"received_events_url": "https://api.github.com/users/quzhixue-Kimi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-05-17T08:37:59
| 2024-10-24T11:50:53
| 2024-05-20T07:44:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
hi there,
I have two Ubuntu 20.04 servers (one local machine, one production server), both with the latest ollama binary installed per the document at https://github.com/ollama/ollama/blob/main/docs/linux.md
My local Ubuntu 20.04 machine has internet access, so I ran the commands to download the llama3 and llama3:70b models, which were stored in /usr/share/ollama/.ollama/models
The other Ubuntu 20.04 server is a production server with no internet access!!!
I copied all the model files from my local machine to the production server under the same model path: '/usr/share/ollama/.ollama/models'
After starting the ollama process, I ran 'ollama list' on the production server, but no models were listed. And when I ran 'ollama run llama3', this error occurred: 'pulling manifest Error: pull model manifest: Get https://registry.ollama.ai/v2/library/llama3/manifests/latest: dial tcp: lookup registry.ollama.ai on 127.0.0.53:53 server misbehaving'
The error above is caused by the lack of internet access on my production server.
Could you tell me whether there is an environment variable to set so that ollama works without internet?
BR
Kimi
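For reference, a hedged sketch of one way to do the offline transfer under the default Linux install paths (the archive filename is made up for illustration; the important detail is that the `models` directory contains both `blobs` and `manifests`, and `ollama list` reads the manifests):

```shell
# On the machine WITH internet access: archive the entire model store,
# including the manifests directory that `ollama list` depends on.
tar czf ollama-models.tar.gz -C /usr/share/ollama/.ollama models

# On the offline production server: unpack into the same location,
# make sure the ollama service user owns the files, then restart.
sudo tar xzf ollama-models.tar.gz -C /usr/share/ollama/.ollama
sudo chown -R ollama:ollama /usr/share/ollama/.ollama/models
sudo systemctl restart ollama
ollama list
```

If the models need to live at a non-default path, the `OLLAMA_MODELS` environment variable points the server at a custom model store directory.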
|
{
"login": "quzhixue-Kimi",
"id": 8235746,
"node_id": "MDQ6VXNlcjgyMzU3NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8235746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quzhixue-Kimi",
"html_url": "https://github.com/quzhixue-Kimi",
"followers_url": "https://api.github.com/users/quzhixue-Kimi/followers",
"following_url": "https://api.github.com/users/quzhixue-Kimi/following{/other_user}",
"gists_url": "https://api.github.com/users/quzhixue-Kimi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quzhixue-Kimi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quzhixue-Kimi/subscriptions",
"organizations_url": "https://api.github.com/users/quzhixue-Kimi/orgs",
"repos_url": "https://api.github.com/users/quzhixue-Kimi/repos",
"events_url": "https://api.github.com/users/quzhixue-Kimi/events{/privacy}",
"received_events_url": "https://api.github.com/users/quzhixue-Kimi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4494/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4494/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3604
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3604/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3604/comments
|
https://api.github.com/repos/ollama/ollama/issues/3604/events
|
https://github.com/ollama/ollama/pull/3604
| 2,238,449,709
|
PR_kwDOJ0Z1Ps5sZksd
| 3,604
|
Fix rocm deps with new subprocess paths
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-11T19:52:29
| 2024-04-11T20:08:35
| 2024-04-11T20:08:29
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3604",
"html_url": "https://github.com/ollama/ollama/pull/3604",
"diff_url": "https://github.com/ollama/ollama/pull/3604.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3604.patch",
"merged_at": "2024-04-11T20:08:29"
}
|
This fixes a regression on main and in 0.1.32-rc1 where the rocm dependency file was missing the libraries.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3604/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5103
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5103/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5103/comments
|
https://api.github.com/repos/ollama/ollama/issues/5103/events
|
https://github.com/ollama/ollama/pull/5103
| 2,358,233,283
|
PR_kwDOJ0Z1Ps5yvdEx
| 5,103
|
Revert powershell jobs, but keep nvcc and cmake parallelism
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-17T20:50:05
| 2024-06-17T21:23:21
| 2024-06-17T21:23:18
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5103",
"html_url": "https://github.com/ollama/ollama/pull/5103",
"diff_url": "https://github.com/ollama/ollama/pull/5103.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5103.patch",
"merged_at": "2024-06-17T21:23:18"
}
|
It doesn't look like the added complexity of trying to parallelize in PowerShell is worth it, so remove that, but retain the other parallelism flags for CMake and nvcc.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5103/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6262
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6262/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6262/comments
|
https://api.github.com/repos/ollama/ollama/issues/6262/events
|
https://github.com/ollama/ollama/issues/6262
| 2,456,598,555
|
I_kwDOJ0Z1Ps6SbLgb
| 6,262
|
Batch embeddings get progressively worse with larger batches
|
{
"login": "jorgetrejo36",
"id": 65737813,
"node_id": "MDQ6VXNlcjY1NzM3ODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/65737813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jorgetrejo36",
"html_url": "https://github.com/jorgetrejo36",
"followers_url": "https://api.github.com/users/jorgetrejo36/followers",
"following_url": "https://api.github.com/users/jorgetrejo36/following{/other_user}",
"gists_url": "https://api.github.com/users/jorgetrejo36/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jorgetrejo36/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jorgetrejo36/subscriptions",
"organizations_url": "https://api.github.com/users/jorgetrejo36/orgs",
"repos_url": "https://api.github.com/users/jorgetrejo36/repos",
"events_url": "https://api.github.com/users/jorgetrejo36/events{/privacy}",
"received_events_url": "https://api.github.com/users/jorgetrejo36/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 17
| 2024-08-08T20:47:39
| 2024-11-05T13:31:48
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am using the ollama Python library for all the results I am getting.
When I create embeddings using ollama.embed(), the embeddings get progressively worse as the batch size gets larger, compared against creating embeddings one at a time. There seems to be a jump that happens at batch sizes of 16 or larger. All of my tests assume that I am getting the embeddings back in the same order, given I submitted an issue not too long ago that was resolved (#6187).
These embeddings need to be accurate, since I am using them for a RAG app and retrieval quality depends on every inserted embedding.
I ran the function `chunk_text` with text from Peter Pan (https://www.gutenberg.org/files/16/16-h/16-h.htm), chunk_size = 256, and max_characters of 65536 (256 chunks of 256 characters each).
I ran the function `test` with the chunks from the above call and a batch size list of [2, 4, 8, 16, 32, 64, 128, 256].
Below all the code are results and some plots of the results.
```python
import ollama
import numpy as np
import os
from typing import List, Tuple
from dotenv import load_dotenv
from sklearn.metrics.pairwise import cosine_similarity
import matplotlib.pyplot as plt

load_dotenv()

# Embedding model used was "bge-large:latest"
EMBEDDING_MODEL = os.getenv("EMBEDDING_MODEL")
EPS = 1e-4

def chunk_text(text: str, chunk_size: int, max_characters: int) -> List[str]:
    chunks = []
    for i in range(0, len(text) if len(text) < max_characters else max_characters, chunk_size):
        chunk = text[i:i + chunk_size]
        chunks.append(chunk)
    return chunks

# Used first few chapters of Peter Pan
text = ""
chunk_size = 256
# 256 is the max batch size that is defined later
chunks = chunk_text(text, chunk_size, chunk_size * 256)

def embed_string(s: str) -> np.ndarray:
    return np.array(ollama.embed(
        input=s,
        model=EMBEDDING_MODEL,
        options={},
        truncate=False
    )["embeddings"])[0]

def embed_list(s: List[str]) -> np.ndarray:
    return np.array(ollama.embed(
        input=s,
        model=EMBEDDING_MODEL,
        options={},
        truncate=False
    )["embeddings"])

def test(list_of_string: List[str], batch_sizes: List[int]) -> Tuple[List, List, List, List, List]:
    avg_distances = []
    avg_similarities = []
    max_distances = []
    min_similarities = []
    for batch_size in batch_sizes:
        print(f"Results for batch size: {batch_size}")
        singles = np.array([embed_string(s) for s in list_of_string[:batch_size]])
        as_list = embed_list(list_of_string[:batch_size])

        # Euclidean distance between each single embedding and its batched counterpart
        distances = []
        for single_embedding, as_list_embedding in zip(singles, as_list):
            distance = np.sqrt(((single_embedding - as_list_embedding) ** 2).sum())
            distances.append(distance)
        distances = np.array(distances)
        mean_distance = np.mean(distances)
        max_distance = np.max(distances)
        avg_distances.append(mean_distance)
        max_distances.append(max_distance)
        print("Euclidean Distance:")
        print(f"\tMean of euclidean distances: {mean_distance}")
        print(f"\tMax euclidean distance: {max_distance}")

        # Cosine similarity between the same pairs
        similarities = []
        for single_embedding, as_list_embedding in zip(singles, as_list):
            vector1 = single_embedding.reshape(1, -1)
            vector2 = as_list_embedding.reshape(1, -1)
            similarity = cosine_similarity(vector1, vector2)
            similarities.append(similarity)
        similarities = np.array(similarities)
        mean_similarity = np.mean(similarities)
        min_similarity = np.min(similarities)
        avg_similarities.append(mean_similarity)
        min_similarities.append(min_similarity)
        print("Cosine Similarity:")
        print(f"\tMean of cosine similarities: {mean_similarity}")
        print(f"\tMin cosine similarity: {min_similarity}")
        print("==========================================================")
    return (batch_sizes, avg_distances, avg_similarities, max_distances, min_similarities)

batch_sizes_list = [2**i for i in range(1, 9)]
batch_sizes, avg_distances, avg_similarities, max_distances, min_similarities = test(chunks, batch_sizes_list)
```
RESULTS:
```
Results for batch size: 2
Euclidean Distance:
Mean of euclidean distances: 0.0027100650691554615
Max euclidean distance: 0.003069791207141852
Cosine Similarity:
Mean of cosine similarites: 0.999996263071194
Min cosine similarity: 0.99999528818776
==========================================================
Results for batch size: 4
Euclidean Distance:
Mean of euclidean distances: 0.002698965850379388
Max euclidean distance: 0.0032083663351101474
Cosine Similarity:
Mean of cosine similarites: 0.9999962901587796
Min cosine similarity: 0.9999948531925777
==========================================================
Results for batch size: 8
Euclidean Distance:
Mean of euclidean distances: 0.003292175370207458
Max euclidean distance: 0.0038060000679778546
Cosine Similarity:
Mean of cosine similarites: 0.9999945197318343
Min cosine similarity: 0.999992757181494
==========================================================
Results for batch size: 16
Euclidean Distance:
Mean of euclidean distances: 0.11461230989305338
Max euclidean distance: 1.136748198080119
Cosine Similarity:
Mean of cosine similarites: 0.946342128810411
Min cosine similarity: 0.35390177365096614
==========================================================
Results for batch size: 32
Euclidean Distance:
Mean of euclidean distances: 0.08102131219835153
Max euclidean distance: 0.8772319282635773
Cosine Similarity:
Mean of cosine similarites: 0.9674320902167836
Min cosine similarity: 0.6152323203539565
==========================================================
Results for batch size: 64
Euclidean Distance:
Mean of euclidean distances: 0.09294858544026222
Max euclidean distance: 1.1095371609913112
Cosine Similarity:
Mean of cosine similarites: 0.960566610535093
Min cosine similarity: 0.38446375093677954
==========================================================
Results for batch size: 128
Euclidean Distance:
Mean of euclidean distances: 0.08298749768139443
Max euclidean distance: 0.9481241092937398
Cosine Similarity:
Mean of cosine similarites: 0.9696922749059266
Min cosine similarity: 0.5505303906957912
==========================================================
Results for batch size: 256
Euclidean Distance:
Mean of euclidean distances: 0.08726932907397295
Max euclidean distance: 1.0992560821737951
Cosine Similarity:
Mean of cosine similarites: 0.966701897336651
Min cosine similarity: 0.39581805237428414
==========================================================
```


### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6262/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3044
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3044/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3044/comments
|
https://api.github.com/repos/ollama/ollama/issues/3044/events
|
https://github.com/ollama/ollama/pull/3044
| 2,177,824,915
|
PR_kwDOJ0Z1Ps5pLJjA
| 3,044
|
convert: fix shape
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-10T17:44:26
| 2024-03-11T16:56:58
| 2024-03-11T16:56:57
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3044",
"html_url": "https://github.com/ollama/ollama/pull/3044",
"diff_url": "https://github.com/ollama/ollama/pull/3044.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3044.patch",
"merged_at": "2024-03-11T16:56:57"
}
|
This commit reverts 18979ad4a1d40d04e3b981a477fa6323a40304b6, which was merged in #3014.
#3014 broke convert by setting dimensions to an array filled with 1s, which is incorrect. While this is how the reader works, the writer only writes an array item if it is greater than zero[^1], so filling with 1s can add extra, incorrect dimensions.
The root cause is the way dimensions are encoded; using a slice is less error prone.
[^1]: https://github.com/ollama/ollama/blob/main/llm/gguf.go#L427
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3044/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1127
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1127/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1127/comments
|
https://api.github.com/repos/ollama/ollama/issues/1127/events
|
https://github.com/ollama/ollama/pull/1127
| 1,993,341,766
|
PR_kwDOJ0Z1Ps5fcVEG
| 1,127
|
Move /generate format to optional parameters
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-14T18:44:30
| 2023-11-14T21:12:31
| 2023-11-14T21:12:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1127",
"html_url": "https://github.com/ollama/ollama/pull/1127",
"diff_url": "https://github.com/ollama/ollama/pull/1127.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1127.patch",
"merged_at": "2023-11-14T21:12:30"
}
|
This field is optional and should be under the `Advanced parameters` header
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1127/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3978
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3978/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3978/comments
|
https://api.github.com/repos/ollama/ollama/issues/3978/events
|
https://github.com/ollama/ollama/issues/3978
| 2,267,001,000
|
I_kwDOJ0Z1Ps6HH7Co
| 3,978
|
Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connectex:
|
{
"login": "jannoname",
"id": 168279140,
"node_id": "U_kgDOCge8ZA",
"avatar_url": "https://avatars.githubusercontent.com/u/168279140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannoname",
"html_url": "https://github.com/jannoname",
"followers_url": "https://api.github.com/users/jannoname/followers",
"following_url": "https://api.github.com/users/jannoname/following{/other_user}",
"gists_url": "https://api.github.com/users/jannoname/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannoname/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannoname/subscriptions",
"organizations_url": "https://api.github.com/users/jannoname/orgs",
"repos_url": "https://api.github.com/users/jannoname/repos",
"events_url": "https://api.github.com/users/jannoname/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannoname/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 9
| 2024-04-27T12:46:00
| 2024-10-17T12:26:51
| 2024-05-21T18:18:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The latest Windows version of Ollama doesn't work on my laptop. It can't connect to port 11434 anymore, even though the port is free and nothing else is holding it.
```
C:\Users\XXX>ollama list
Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.
C:\Users\XXX>netstat -ano | find "11434"
C:\Users\XXX>ollama list
Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.
```
The older version worked ... until it self-updated.
The new Ollama (0.1.32) doesn't work in Docker either.
Chat in the console window sometimes works, but the server command never does.
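A "connection refused" error simply means nothing is accepting TCP connections on that port, which matches the empty `netstat` output above. As a quick way to confirm whether a listener is actually up, here is a minimal sketch (the host and port are just the values from this report, not anything Ollama-specific):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP listener accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers both "connection refused" and timeouts.
        return False

# e.g. port_open("127.0.0.1", 11434) -- False here would mean the
# server process is not running at all, rather than being blocked.
```

If this returns False, the next step is to check whether the server process itself is starting and crashing, rather than a firewall issue.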
### OS
Windows
### GPU
AMD onboard
### CPU
Ryzen 7000 Mobile APU
### Ollama version
0.1.32 (non-docker)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3978/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3978/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1153
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1153/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1153/comments
|
https://api.github.com/repos/ollama/ollama/issues/1153/events
|
https://github.com/ollama/ollama/issues/1153
| 1,996,751,591
|
I_kwDOJ0Z1Ps53BALn
| 1,153
|
CodeGPT extension cannot connect to locally served ollama Error: connect ECONNREFUSED ::1:11434
|
{
"login": "wahreChrist",
"id": 61061924,
"node_id": "MDQ6VXNlcjYxMDYxOTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/61061924?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wahreChrist",
"html_url": "https://github.com/wahreChrist",
"followers_url": "https://api.github.com/users/wahreChrist/followers",
"following_url": "https://api.github.com/users/wahreChrist/following{/other_user}",
"gists_url": "https://api.github.com/users/wahreChrist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wahreChrist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wahreChrist/subscriptions",
"organizations_url": "https://api.github.com/users/wahreChrist/orgs",
"repos_url": "https://api.github.com/users/wahreChrist/repos",
"events_url": "https://api.github.com/users/wahreChrist/events{/privacy}",
"received_events_url": "https://api.github.com/users/wahreChrist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-11-16T12:28:31
| 2023-11-17T00:36:36
| 2023-11-17T00:36:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm trying to make the CodeGPT extension work to interact with Ollama in VS Code, but it gives me this error in the devtools console:
```
[Extension Host] No active text editor found.
log.ts:441 ERR [Extension Host] Error: Error: connect ECONNREFUSED ::1:11434
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
```
Ollama itself works fine in the CLI, and http://127.0.0.1:11434/ also responds and says it's up and running. I can't figure out why Ollama refuses the connection. Could it have something to do with it not being able to locate the default SSH key? It creates one every time I run `ollama serve`, but then just exits with an error that port 11434 is already taken.
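One detail worth noting: the error mentions `::1` (the IPv6 loopback), while the working URL is `127.0.0.1` (IPv4). That suggests the extension resolves `localhost` to IPv6 while the server listens only on IPv4. A configuration sketch, assuming the client lets you set its base URL (whether CodeGPT exposes such a setting is not confirmed here):

```shell
# Point clients at the IPv4 loopback explicitly instead of "localhost",
# which some runtimes resolve to the IPv6 address ::1 first:
curl http://127.0.0.1:11434/

# Or change where the server binds via OLLAMA_HOST, Ollama's standard
# listen-address environment variable:
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```

Either aligning the client on `127.0.0.1` or broadening the server's bind address should make the two sides meet.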
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1153/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6244
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6244/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6244/comments
|
https://api.github.com/repos/ollama/ollama/issues/6244/events
|
https://github.com/ollama/ollama/issues/6244
| 2,454,467,251
|
I_kwDOJ0Z1Ps6STDKz
| 6,244
|
1001st Issue
|
{
"login": "gileneusz",
"id": 34601970,
"node_id": "MDQ6VXNlcjM0NjAxOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gileneusz",
"html_url": "https://github.com/gileneusz",
"followers_url": "https://api.github.com/users/gileneusz/followers",
"following_url": "https://api.github.com/users/gileneusz/following{/other_user}",
"gists_url": "https://api.github.com/users/gileneusz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gileneusz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gileneusz/subscriptions",
"organizations_url": "https://api.github.com/users/gileneusz/orgs",
"repos_url": "https://api.github.com/users/gileneusz/repos",
"events_url": "https://api.github.com/users/gileneusz/events{/privacy}",
"received_events_url": "https://api.github.com/users/gileneusz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-08-07T22:38:15
| 2024-08-07T22:40:16
| 2024-08-07T22:40:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
yay 😁
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6244/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5939
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5939/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5939/comments
|
https://api.github.com/repos/ollama/ollama/issues/5939/events
|
https://github.com/ollama/ollama/issues/5939
| 2,428,885,261
|
I_kwDOJ0Z1Ps6QxdkN
| 5,939
|
Error: invalid file magic when trying to import gte-Qwen2-7B-instruct gguf model to ollama instance
|
{
"login": "CHNVigny",
"id": 9402746,
"node_id": "MDQ6VXNlcjk0MDI3NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9402746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CHNVigny",
"html_url": "https://github.com/CHNVigny",
"followers_url": "https://api.github.com/users/CHNVigny/followers",
"following_url": "https://api.github.com/users/CHNVigny/following{/other_user}",
"gists_url": "https://api.github.com/users/CHNVigny/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CHNVigny/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CHNVigny/subscriptions",
"organizations_url": "https://api.github.com/users/CHNVigny/orgs",
"repos_url": "https://api.github.com/users/CHNVigny/repos",
"events_url": "https://api.github.com/users/CHNVigny/events{/privacy}",
"received_events_url": "https://api.github.com/users/CHNVigny/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 5
| 2024-07-25T03:13:37
| 2024-11-11T16:52:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**I got this error:**
```
root@bccf6f1eb00f:/data/models# ollama create gte_qwen2:7b -f Modelfile
transferring model data
Error: invalid file magic
```
**This is my Modelfile:**
```
FROM gte_qwen2.gguf
TEMPLATE "{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}<|im_end|>
"
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
```
I converted and quantized the model with the latest llama.cpp.
How can I import this model into Ollama?
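"Invalid file magic" means the first bytes of the file are not the magic number the importer expects; per the GGUF specification, a GGUF file starts with the four ASCII bytes `GGUF`. A minimal sketch for checking this yourself, which can distinguish a truncated or mis-converted file from an Ollama-side problem:

```python
def has_gguf_magic(path: str) -> bool:
    """Return True if the file starts with the 4-byte GGUF magic."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# has_gguf_magic("gte_qwen2.gguf") -- False would mean the conversion
# or download produced something that is not actually a GGUF file.
```

If the magic is missing, re-running the llama.cpp conversion (or re-downloading the file) is the place to start.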
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5939/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/281
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/281/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/281/comments
|
https://api.github.com/repos/ollama/ollama/issues/281/events
|
https://github.com/ollama/ollama/issues/281
| 1,836,820,475
|
I_kwDOJ0Z1Ps5te6f7
| 281
|
Consider a non streaming api for `/api/generate`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2023-08-04T14:10:53
| 2023-10-11T16:54:28
| 2023-10-11T16:54:28
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If `Content-Type: application/json` is set, we should consider returning a single large JSON object instead of an event stream. This would be an elegant design, as it requires no new flags.
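Whatever the negotiation mechanism ends up being, a non-streaming response is essentially the streamed chunks collapsed into one object. A sketch of that aggregation, assuming NDJSON chunks shaped like the `/api/generate` stream (a `response` text fragment per line, with final metadata on the last line):

```python
import json

def collapse_stream(ndjson_lines):
    """Merge a sequence of NDJSON generate-chunks into one response object."""
    parts, last = [], {}
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        last = chunk  # the final chunk carries done=true and stats
    last["response"] = "".join(parts)
    return last

chunks = ['{"response": "Hel", "done": false}',
          '{"response": "lo", "done": true}']
# collapse_stream(chunks)["response"] -> "Hello"
```

This is what a `Content-Type`-based (or flag-based) non-streaming mode would hand back in a single body.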
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/281/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/281/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7858
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7858/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7858/comments
|
https://api.github.com/repos/ollama/ollama/issues/7858/events
|
https://github.com/ollama/ollama/issues/7858
| 2,698,142,313
|
I_kwDOJ0Z1Ps6g0mJp
| 7,858
|
Can you make the normalize optional for embeddings?
|
{
"login": "BeNhNp",
"id": 33339730,
"node_id": "MDQ6VXNlcjMzMzM5NzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/33339730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BeNhNp",
"html_url": "https://github.com/BeNhNp",
"followers_url": "https://api.github.com/users/BeNhNp/followers",
"following_url": "https://api.github.com/users/BeNhNp/following{/other_user}",
"gists_url": "https://api.github.com/users/BeNhNp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BeNhNp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BeNhNp/subscriptions",
"organizations_url": "https://api.github.com/users/BeNhNp/orgs",
"repos_url": "https://api.github.com/users/BeNhNp/repos",
"events_url": "https://api.github.com/users/BeNhNp/events{/privacy}",
"received_events_url": "https://api.github.com/users/BeNhNp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-11-27T11:02:28
| 2024-11-27T15:29:44
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
https://ollama.com/library/nomic-embed-text:v1.5
```shell
curl http://localhost:11434/api/embeddings -d '{
"model": "nomic-embed-text",
"prompt": "The sky is blue because of Rayleigh scattering"
}'
```
access "http://127.0.0.1:%d/embedding" is ok, [ollama_llama_server](https://github.com/ollama/ollama/blob/main/llm/server.go#L894) returns the right embedding, while the `/api/embed` returns a different embedding, can you make the `normalize` optional?
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "BeNhNp",
"id": 33339730,
"node_id": "MDQ6VXNlcjMzMzM5NzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/33339730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BeNhNp",
"html_url": "https://github.com/BeNhNp",
"followers_url": "https://api.github.com/users/BeNhNp/followers",
"following_url": "https://api.github.com/users/BeNhNp/following{/other_user}",
"gists_url": "https://api.github.com/users/BeNhNp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BeNhNp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BeNhNp/subscriptions",
"organizations_url": "https://api.github.com/users/BeNhNp/orgs",
"repos_url": "https://api.github.com/users/BeNhNp/repos",
"events_url": "https://api.github.com/users/BeNhNp/events{/privacy}",
"received_events_url": "https://api.github.com/users/BeNhNp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7858/timeline
| null |
reopened
| false
|
https://api.github.com/repos/ollama/ollama/issues/2352
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2352/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2352/comments
|
https://api.github.com/repos/ollama/ollama/issues/2352/events
|
https://github.com/ollama/ollama/issues/2352
| 2,117,262,900
|
I_kwDOJ0Z1Ps5-Mt40
| 2,352
|
API streaming and non streaming mode produces garbage output after the first query
|
{
"login": "nextdimension",
"id": 3390177,
"node_id": "MDQ6VXNlcjMzOTAxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3390177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nextdimension",
"html_url": "https://github.com/nextdimension",
"followers_url": "https://api.github.com/users/nextdimension/followers",
"following_url": "https://api.github.com/users/nextdimension/following{/other_user}",
"gists_url": "https://api.github.com/users/nextdimension/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nextdimension/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nextdimension/subscriptions",
"organizations_url": "https://api.github.com/users/nextdimension/orgs",
"repos_url": "https://api.github.com/users/nextdimension/repos",
"events_url": "https://api.github.com/users/nextdimension/events{/privacy}",
"received_events_url": "https://api.github.com/users/nextdimension/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-02-04T19:10:22
| 2024-03-12T16:55:11
| 2024-03-12T16:55:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I use the Ollama API, the first response works fine; then, without changing anything, subsequent requests respond as if the system prompt were being ignored and spit out garbage. Restarting the ollama service helps, but only until the second query.
The latest version and the one before it have the same issue.
My setup:
I installed it using the install bash script on Linux. It's running as a system service under my own user account and group, as that's the only way I could get the OLLAMA_MODEL directory environment variable to work. GPU acceleration with two NVIDIA GPUs: a 3090 and an A6000. CPU: AMD Ryzen 9 5900X 12-core. I can see the VRAM is almost full on the A6000, and the 3090 is half full. Then Ollama hangs and the GPU (A6000) sits at 100%. The hang is caused by having Ollama-webui running under Docker and then trying to use the API.
In the tests below I stopped the ollama-webui Docker container and restarted the ollama service before testing.
Example queries:
```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "C programming language",
  "system": "You are a poet; write poems about topics given in the prompt",
  "stream": false
}'

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "C programming language",
  "system": "You are a poet; write poems about topics given in the prompt",
  "stream": true
}'
```
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2352/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2477
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2477/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2477/comments
|
https://api.github.com/repos/ollama/ollama/issues/2477/events
|
https://github.com/ollama/ollama/pull/2477
| 2,133,008,998
|
PR_kwDOJ0Z1Ps5myepF
| 2,477
|
Update README.md to include link to Ollama-ex Elixir library
|
{
"login": "lebrunel",
"id": 124721263,
"node_id": "U_kgDOB28Ybw",
"avatar_url": "https://avatars.githubusercontent.com/u/124721263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lebrunel",
"html_url": "https://github.com/lebrunel",
"followers_url": "https://api.github.com/users/lebrunel/followers",
"following_url": "https://api.github.com/users/lebrunel/following{/other_user}",
"gists_url": "https://api.github.com/users/lebrunel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lebrunel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lebrunel/subscriptions",
"organizations_url": "https://api.github.com/users/lebrunel/orgs",
"repos_url": "https://api.github.com/users/lebrunel/repos",
"events_url": "https://api.github.com/users/lebrunel/events{/privacy}",
"received_events_url": "https://api.github.com/users/lebrunel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-13T19:37:20
| 2024-02-13T19:40:51
| 2024-02-13T19:40:44
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2477",
"html_url": "https://github.com/ollama/ollama/pull/2477",
"diff_url": "https://github.com/ollama/ollama/pull/2477.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2477.patch",
"merged_at": "2024-02-13T19:40:44"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2477/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5869
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5869/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5869/comments
|
https://api.github.com/repos/ollama/ollama/issues/5869/events
|
https://github.com/ollama/ollama/issues/5869
| 2,424,561,280
|
I_kwDOJ0Z1Ps6Qg96A
| 5,869
|
`Error: file does not exist` but it exists
|
{
"login": "DevLLM",
"id": 131604629,
"node_id": "U_kgDOB9gglQ",
"avatar_url": "https://avatars.githubusercontent.com/u/131604629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DevLLM",
"html_url": "https://github.com/DevLLM",
"followers_url": "https://api.github.com/users/DevLLM/followers",
"following_url": "https://api.github.com/users/DevLLM/following{/other_user}",
"gists_url": "https://api.github.com/users/DevLLM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DevLLM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DevLLM/subscriptions",
"organizations_url": "https://api.github.com/users/DevLLM/orgs",
"repos_url": "https://api.github.com/users/DevLLM/repos",
"events_url": "https://api.github.com/users/DevLLM/events{/privacy}",
"received_events_url": "https://api.github.com/users/DevLLM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-07-23T08:14:44
| 2024-11-18T01:17:32
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello, I want to push my model to ollama but I got the error
`retrieving manifest`
`Error: file does not exist `
but the problem is that I have the file, specifically "C:\Users\User\.ollama\models\manifests\registry.ollama.ai\_\mymodel\latest"
and my username is _ (link: [https://ollama.com/_](https://ollama.com/_ ) ), and I can't change my username.
"""
D:\ollama> ollama create _/mymodel:latest -f Modelfile
transferring model data
using existing layer sha256:617ba424eabae67d228cf4598d2b18d9656b73c1f8f5bfa974ead81485dad2a5
using existing layer sha256:f5dc666b38fce911ccd916bcb13ea78a8002803fd11d5bb6486c4dd76ab8223f
using existing layer sha256:3dddcbf82aec37d515d388e1141900e1530f74f20c5091f64567609a56fe8f43
using existing layer sha256:023c31c9015bbf14d78183c19eec819c3142e791c857bbc3989e53250f00561d
using existing layer sha256:c50ad1ef7469cb081d31e4c321e73562e1e657e890a325b4d7214f8988fd1678
using existing layer sha256:6a6636a5d2ef8c1f29444967fb0f17930369d2c53117d39bd3926760d1062230
writing manifest
success
D:\ollama> ollama list
NAME ID SIZE MODIFIED
_/mymodel:latest 37dad3f2b9d3 13 GB 18 seconds ago
mymodel:latest 37dad3f2b9d3 13 GB 23 minutes ago
D:\ollama> ollama push _/mymodel:latest
retrieving manifest
Error: file does not exist
"""
### OS
Windows, WSL2
### GPU
Nvidia, Intel
### CPU
Intel
### Ollama version
0.2.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5869/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1095
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1095/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1095/comments
|
https://api.github.com/repos/ollama/ollama/issues/1095/events
|
https://github.com/ollama/ollama/pull/1095
| 1,989,220,342
|
PR_kwDOJ0Z1Ps5fOaW4
| 1,095
|
Add JSON mode to `ollama run`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-12T03:29:21
| 2023-11-14T02:54:03
| 2023-11-14T02:54:02
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1095",
"html_url": "https://github.com/ollama/ollama/pull/1095",
"diff_url": "https://github.com/ollama/ollama/pull/1095.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1095.patch",
"merged_at": "2023-11-14T02:54:02"
}
|
Allow using JSON mode from the `ollama run` command line
* `--format json`: a new command line flag
* `/set format json`: in the interactive `ollama run` terminal
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1095/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/412
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/412/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/412/comments
|
https://api.github.com/repos/ollama/ollama/issues/412/events
|
https://github.com/ollama/ollama/pull/412
| 1,867,533,219
|
PR_kwDOJ0Z1Ps5Y0jCH
| 412
|
update README.md
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-25T18:45:02
| 2023-08-27T04:26:35
| 2023-08-27T04:26:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/412",
"html_url": "https://github.com/ollama/ollama/pull/412",
"diff_url": "https://github.com/ollama/ollama/pull/412.diff",
"patch_url": "https://github.com/ollama/ollama/pull/412.patch",
"merged_at": "2023-08-27T04:26:34"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/412/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3598
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3598/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3598/comments
|
https://api.github.com/repos/ollama/ollama/issues/3598/events
|
https://github.com/ollama/ollama/issues/3598
| 2,237,947,026
|
I_kwDOJ0Z1Ps6FZFyS
| 3,598
|
Allow users the ability to manage website access without using terminal commands
|
{
"login": "dahjson",
"id": 8768601,
"node_id": "MDQ6VXNlcjg3Njg2MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8768601?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dahjson",
"html_url": "https://github.com/dahjson",
"followers_url": "https://api.github.com/users/dahjson/followers",
"following_url": "https://api.github.com/users/dahjson/following{/other_user}",
"gists_url": "https://api.github.com/users/dahjson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dahjson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dahjson/subscriptions",
"organizations_url": "https://api.github.com/users/dahjson/orgs",
"repos_url": "https://api.github.com/users/dahjson/repos",
"events_url": "https://api.github.com/users/dahjson/events{/privacy}",
"received_events_url": "https://api.github.com/users/dahjson/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-04-11T14:59:32
| 2024-11-15T17:15:27
| 2024-11-15T17:15:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
Improve the user experience for allowing website access to Ollama models. Currently, users are required to run terminal commands to get this working. If the user restarts their computer they have to run these commands again, which is not ideal.
### How should we solve this?
Add configurations directly within the Ollama app to manage which websites or Chrome extensions can have access to Ollama.
### What is the impact of not solving this?
Makes it much harder for the average person to figure out how to run Ollama models with a website or Chrome extension.
### Anything else?
I'm using Ollama with my web app and Chrome extension, JobJette (jobjette.com), an AI job search copilot. I have a dedicated page with setup instructions to get Ollama installed with an AI model and connected to both the website and Chrome extension. These required setup instructions are too complicated for the average user to accomplish because they require running terminal commands.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3598/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2823
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2823/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2823/comments
|
https://api.github.com/repos/ollama/ollama/issues/2823/events
|
https://github.com/ollama/ollama/issues/2823
| 2,160,213,223
|
I_kwDOJ0Z1Ps6Awjzn
| 2,823
|
rocm crashes on `Illegal seek for GPU arch : gfx1032`
|
{
"login": "turlapati",
"id": 4550654,
"node_id": "MDQ6VXNlcjQ1NTA2NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4550654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turlapati",
"html_url": "https://github.com/turlapati",
"followers_url": "https://api.github.com/users/turlapati/followers",
"following_url": "https://api.github.com/users/turlapati/following{/other_user}",
"gists_url": "https://api.github.com/users/turlapati/gists{/gist_id}",
"starred_url": "https://api.github.com/users/turlapati/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/turlapati/subscriptions",
"organizations_url": "https://api.github.com/users/turlapati/orgs",
"repos_url": "https://api.github.com/users/turlapati/repos",
"events_url": "https://api.github.com/users/turlapati/events{/privacy}",
"received_events_url": "https://api.github.com/users/turlapati/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-02-29T02:09:53
| 2024-03-02T01:30:17
| 2024-03-02T01:30:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
user@HTML:~$ ollama run gemma
Error: Post "http://127.0.0.1:11434/api/chat": EOF
...
[crash_0.1.27_gemma_rcom.txt](https://github.com/ollama/ollama/files/14441945/crash_0.1.27_gemma_rcom.txt)
loading library /tmp/ollama3347055972/rocm_v6/libext_server.so
time=2024-02-28T20:59:58.907-05:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3347055972/rocm_v6/libext_server.so"
time=2024-02-28T20:59:58.907-05:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
rocBLAS error: Cannot read /opt/rocm/lib/rocblas/library/TensileLibrary.dat: Illegal seek for GPU arch : gfx1032
free(): invalid pointer
SIGABRT: abort
PC=0x7fcc860969fc m=16 sigcode=18446744073709551610
signal arrived during cgo execution
|
{
"login": "turlapati",
"id": 4550654,
"node_id": "MDQ6VXNlcjQ1NTA2NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4550654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turlapati",
"html_url": "https://github.com/turlapati",
"followers_url": "https://api.github.com/users/turlapati/followers",
"following_url": "https://api.github.com/users/turlapati/following{/other_user}",
"gists_url": "https://api.github.com/users/turlapati/gists{/gist_id}",
"starred_url": "https://api.github.com/users/turlapati/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/turlapati/subscriptions",
"organizations_url": "https://api.github.com/users/turlapati/orgs",
"repos_url": "https://api.github.com/users/turlapati/repos",
"events_url": "https://api.github.com/users/turlapati/events{/privacy}",
"received_events_url": "https://api.github.com/users/turlapati/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2823/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4128
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4128/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4128/comments
|
https://api.github.com/repos/ollama/ollama/issues/4128/events
|
https://github.com/ollama/ollama/issues/4128
| 2,277,793,272
|
I_kwDOJ0Z1Ps6HxF34
| 4,128
|
Normalization of output from embedding model
|
{
"login": "hagemon",
"id": 15187235,
"node_id": "MDQ6VXNlcjE1MTg3MjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/15187235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hagemon",
"html_url": "https://github.com/hagemon",
"followers_url": "https://api.github.com/users/hagemon/followers",
"following_url": "https://api.github.com/users/hagemon/following{/other_user}",
"gists_url": "https://api.github.com/users/hagemon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hagemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hagemon/subscriptions",
"organizations_url": "https://api.github.com/users/hagemon/orgs",
"repos_url": "https://api.github.com/users/hagemon/repos",
"events_url": "https://api.github.com/users/hagemon/events{/privacy}",
"received_events_url": "https://api.github.com/users/hagemon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-05-03T14:24:45
| 2024-07-02T14:42:05
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I use Ollama Embedding together with Langchain Retriever's `get_relevant_documents`, I always get a score of around 200. However, when I use HuggingFaceEmbedding, the value is between 0 and 1.
To explore why, I followed the official documentation and used OllamaEmbedding to vectorize both the query and the documents. I found that their dot products still exceed 100:
```python
from langchain_community.embeddings import OllamaEmbeddings
import numpy as np
ollama_emb = OllamaEmbeddings(
model="mxbai-embed-large:latest",
)
r1 = ollama_emb.embed_documents(
[
"Alpha is the first letter of Greek alphabet",
"Beta is the second letter of Greek alphabet",
]
)
r2 = ollama_emb.embed_query(
"What is the second letter of Greek alphabet"
)
print(np.dot(r1, r2))
# Output: array([196.91232687, 198.68434774])
```
Therefore, I assume that they are not normalized.
Some vector databases, such as Milvus, suggest normalizing the vectors before inserting them into the database. So, I wonder if OllamaEmbedding has plans to (or had already) support an option like HuggingFaceEmbedding's `encode_kwargs = {"normalize_embeddings": True}`, which allows the output vectors to be normalized without the need for me to manually implement this process.
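Until such an option exists, the workaround described above can be sketched client-side. This is a minimal illustration, not library code: the random arrays merely stand in for real `embed_documents`/`embed_query` output (same shapes as in the report), and `l2_normalize` is a hypothetical helper.

```python
import numpy as np

# Stand-ins for embedding output: two document vectors and one query vector.
rng = np.random.default_rng(0)
r1 = rng.normal(size=(2, 1024))   # shape of embed_documents([...]) output
r2 = rng.normal(size=1024)        # shape of embed_query(...) output

def l2_normalize(v, axis=-1):
    """Scale vectors to unit length so dot products become cosine similarities."""
    return v / np.linalg.norm(v, axis=axis, keepdims=True)

scores = np.dot(l2_normalize(r1), l2_normalize(r2))
# After normalization, every score lies in [-1, 1] instead of ~200.
```

Applying this before inserting vectors into a store like Milvus gives the same effect as HuggingFaceEmbedding's `normalize_embeddings=True`.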
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4128/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4128/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1946
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1946/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1946/comments
|
https://api.github.com/repos/ollama/ollama/issues/1946/events
|
https://github.com/ollama/ollama/issues/1946
| 2,078,214,003
|
I_kwDOJ0Z1Ps573wdz
| 1,946
|
`SIGSEGV: segmentation violation` when shutting down server with ctrl+c
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-01-12T07:14:00
| 2024-03-12T18:14:33
| 2024-03-12T18:14:33
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
[GIN] 2024/01/12 - 12:38:39 | 200 | 5.985573917s | 127.0.0.1 | POST "/api/chat"
2024/01/12 12:38:52 ext_server_common.go:158: loaded 0 images
^Cggml_metal_free: deallocating
SIGSEGV: segmentation violation
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1946/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3275
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3275/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3275/comments
|
https://api.github.com/repos/ollama/ollama/issues/3275/events
|
https://github.com/ollama/ollama/issues/3275
| 2,198,497,833
|
I_kwDOJ0Z1Ps6DCmop
| 3,275
|
Resumable `ollama push`
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-03-20T20:27:36
| 2024-09-04T04:43:05
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When pushing models to ollama.com's registry, if a push fails part way through, retrying it starts again from scratch.
This is quite painful when you've spent hours uploading over ~~an Australian~~ a slow internet link, as it can mean many more hours of uploading and hoping for the best.
<img width="1678" alt="SCR-20240321-gzoz" src="https://github.com/ollama/ollama/assets/862951/f3dec809-4445-428e-8b70-3d4f7b9eb1da">
<img width="1675" alt="SCR-20240321-gzvk" src="https://github.com/ollama/ollama/assets/862951/d8d1e4ee-358d-4dd8-815c-c5b9bfc24dd6">
### What did you expect to see?
Errors happen and things fail, but pushes should be resumable (at least for, say, 24 hours).
I expected to rerun the push and have it pick up where it left off (around 8%).
### Steps to reproduce
- Come to Australia, enjoy our fine internet infrastructure.
- Push a large model to ollama.com
- Take pleasure in the chance game which is Australian internet reliability.
- When your upload fails, try to continue the upload by pushing again.
- Tears.
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux, macOS
### Architecture
arm64, amd64
### Platform
_No response_
### Ollama version
0.1.29
### GPU
Nvidia, Apple
### GPU info
N/A
### CPU
AMD, Apple
### Other software
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3275/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5359
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5359/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5359/comments
|
https://api.github.com/repos/ollama/ollama/issues/5359/events
|
https://github.com/ollama/ollama/issues/5359
| 2,380,263,043
|
I_kwDOJ0Z1Ps6N3-6D
| 5,359
|
Both Gemma2 models fail with cudaMalloc error despite available GPU memory, while other models run successfully.
|
{
"login": "chiragbharambe",
"id": 30945307,
"node_id": "MDQ6VXNlcjMwOTQ1MzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/30945307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiragbharambe",
"html_url": "https://github.com/chiragbharambe",
"followers_url": "https://api.github.com/users/chiragbharambe/followers",
"following_url": "https://api.github.com/users/chiragbharambe/following{/other_user}",
"gists_url": "https://api.github.com/users/chiragbharambe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiragbharambe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiragbharambe/subscriptions",
"organizations_url": "https://api.github.com/users/chiragbharambe/orgs",
"repos_url": "https://api.github.com/users/chiragbharambe/repos",
"events_url": "https://api.github.com/users/chiragbharambe/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiragbharambe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-06-28T11:55:21
| 2024-06-28T14:15:14
| 2024-06-28T14:15:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Archlinux 6.6.35-2-lts
Ollama version 0.1.47
Latest ollama-cuda installed via pacman. The Ollama system service is active. All other models I have work as expected. Both gemma2 9b and 27b give me the same error. RAM is not the issue; I can run mixtral 8x7b.
Hardware
- CPU: 5800HS
- GPU: RTX 3050 mobile 4GB
- RAM: 40GB
- SWAP: 25GB
```
$ ollama run gemma2
Error: llama runner process has terminated: signal: aborted (core dumped) error:failed to create context with model '/var/lib/ollama/.ollama/models/blobs/sha256-e84ed7399c82fbf7dbd6cdef3f12d356c3cdb5512e5d8b2a9898080cbcdd72e5'
$ ollama run gemma2:27b
Error: llama runner process has terminated: signal: aborted (core dumped) error:failed to create context with model '/var/lib/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72'
```
|
{
"login": "chiragbharambe",
"id": 30945307,
"node_id": "MDQ6VXNlcjMwOTQ1MzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/30945307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiragbharambe",
"html_url": "https://github.com/chiragbharambe",
"followers_url": "https://api.github.com/users/chiragbharambe/followers",
"following_url": "https://api.github.com/users/chiragbharambe/following{/other_user}",
"gists_url": "https://api.github.com/users/chiragbharambe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiragbharambe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiragbharambe/subscriptions",
"organizations_url": "https://api.github.com/users/chiragbharambe/orgs",
"repos_url": "https://api.github.com/users/chiragbharambe/repos",
"events_url": "https://api.github.com/users/chiragbharambe/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiragbharambe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5359/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3409
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3409/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3409/comments
|
https://api.github.com/repos/ollama/ollama/issues/3409/events
|
https://github.com/ollama/ollama/issues/3409
| 2,215,805,241
|
I_kwDOJ0Z1Ps6EEoE5
| 3,409
|
API to terminate the running job before the completion
|
{
"login": "ansis-m",
"id": 78793148,
"node_id": "MDQ6VXNlcjc4NzkzMTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/78793148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ansis-m",
"html_url": "https://github.com/ansis-m",
"followers_url": "https://api.github.com/users/ansis-m/followers",
"following_url": "https://api.github.com/users/ansis-m/following{/other_user}",
"gists_url": "https://api.github.com/users/ansis-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ansis-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ansis-m/subscriptions",
"organizations_url": "https://api.github.com/users/ansis-m/orgs",
"repos_url": "https://api.github.com/users/ansis-m/repos",
"events_url": "https://api.github.com/users/ansis-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/ansis-m/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-29T18:22:36
| 2024-04-15T19:32:53
| 2024-04-15T19:32:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I am using Ollama via the REST API. Sometimes when a model streams a long response (which can be quite slow on my computer) I would like to terminate the process before completion. I checked the API documentation and did not find an option for this. Even if I unsubscribe from the stream (Java Spring AI interface), I see that CPU/GPU usage remains high for an extended period (until the whole response has been generated).
### How should we solve this?
Implement a REST API endpoint that allows terminating a running job before completion.
### What is the impact of not solving this?
Better user experience. Many users do not own super fast computers, and response generation takes some time (0.5-2 minutes on my computer). Sometimes, halfway through the answer, you already know you are not getting what you want and would like to reformulate the prompt without waiting for completion. It would greatly improve the user experience if a running job could be stopped before completion.
### Anything else?
Thank you for the awesome project!
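Until such an endpoint exists, one common workaround is for the client to simply close the HTTP connection mid-stream; a well-behaved streaming server can notice the broken pipe and stop generating. The sketch below demonstrates the idea with a stand-in chunked-streaming handler (it is not Ollama's actual server, and the `/api/generate`-style payload is illustrative):

```python
import http.server
import threading
import urllib.request

class FakeStream(http.server.BaseHTTPRequestHandler):
    """Stand-in for a slow streaming endpoint: emits one JSON line per chunk."""
    protocol_version = "HTTP/1.1"  # chunked transfer encoding needs HTTP/1.1

    def do_GET(self):
        self.send_response(200)
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        try:
            for i in range(10_000):
                body = f'{{"token": {i}}}\n'.encode()
                self.wfile.write(f"{len(body):x}\r\n".encode() + body + b"\r\n")
                self.wfile.flush()
        except (BrokenPipeError, ConnectionResetError):
            return  # client hung up: stop "generating" immediately
        self.wfile.write(b"0\r\n\r\n")  # normal end of chunked stream

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), FakeStream)
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/")
tokens = [resp.readline() for _ in range(3)]  # consume a few tokens...
resp.close()       # ...then abort: this is the client-side "terminate" signal
server.shutdown()
print(len(tokens))
```

Whether this saves compute depends on the server checking for a disconnected client between tokens; that is exactly the behavior the requested endpoint would make explicit.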
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3409/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8065
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8065/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8065/comments
|
https://api.github.com/repos/ollama/ollama/issues/8065/events
|
https://github.com/ollama/ollama/issues/8065
| 2,735,086,935
|
I_kwDOJ0Z1Ps6jBh1X
| 8,065
|
dial tcp: lookup registry.ollama.ai on 127.0.0.53:53: server misbehaving
|
{
"login": "szzhh",
"id": 78521539,
"node_id": "MDQ6VXNlcjc4NTIxNTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/78521539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szzhh",
"html_url": "https://github.com/szzhh",
"followers_url": "https://api.github.com/users/szzhh/followers",
"following_url": "https://api.github.com/users/szzhh/following{/other_user}",
"gists_url": "https://api.github.com/users/szzhh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/szzhh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szzhh/subscriptions",
"organizations_url": "https://api.github.com/users/szzhh/orgs",
"repos_url": "https://api.github.com/users/szzhh/repos",
"events_url": "https://api.github.com/users/szzhh/events{/privacy}",
"received_events_url": "https://api.github.com/users/szzhh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-12-12T07:30:04
| 2024-12-12T08:02:25
| 2024-12-12T08:02:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I ran `ollama pull mxbai-embed-large`, I got:
```
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/mxbai-embed-large/manifests/latest": dial tcp: lookup registry.ollama.ai on 127.0.0.53:53: server misbehaving
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.1
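The 127.0.0.53 address in the error is systemd-resolved's local stub resolver, so this failure points at host DNS rather than at Ollama itself. A small stdlib check (the `can_resolve` helper is a hypothetical name, not an Ollama API) separates the two cases:

```python
import socket

def can_resolve(host: str) -> bool:
    """True if the local resolver can turn `host` into an address."""
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

# A sanity check that works even without external DNS:
print(can_resolve("localhost"))
# If can_resolve("registry.ollama.ai") is False while other names resolve,
# the fix lives in the resolver config (e.g. systemd-resolved), not Ollama.
```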
|
{
"login": "szzhh",
"id": 78521539,
"node_id": "MDQ6VXNlcjc4NTIxNTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/78521539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szzhh",
"html_url": "https://github.com/szzhh",
"followers_url": "https://api.github.com/users/szzhh/followers",
"following_url": "https://api.github.com/users/szzhh/following{/other_user}",
"gists_url": "https://api.github.com/users/szzhh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/szzhh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szzhh/subscriptions",
"organizations_url": "https://api.github.com/users/szzhh/orgs",
"repos_url": "https://api.github.com/users/szzhh/repos",
"events_url": "https://api.github.com/users/szzhh/events{/privacy}",
"received_events_url": "https://api.github.com/users/szzhh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8065/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4993
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4993/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4993/comments
|
https://api.github.com/repos/ollama/ollama/issues/4993/events
|
https://github.com/ollama/ollama/issues/4993
| 2,347,847,973
|
I_kwDOJ0Z1Ps6L8VEl
| 4,993
|
AI models stop working after a few user-only messages.
|
{
"login": "TheUntitledGoose",
"id": 75637597,
"node_id": "MDQ6VXNlcjc1NjM3NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/75637597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheUntitledGoose",
"html_url": "https://github.com/TheUntitledGoose",
"followers_url": "https://api.github.com/users/TheUntitledGoose/followers",
"following_url": "https://api.github.com/users/TheUntitledGoose/following{/other_user}",
"gists_url": "https://api.github.com/users/TheUntitledGoose/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheUntitledGoose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheUntitledGoose/subscriptions",
"organizations_url": "https://api.github.com/users/TheUntitledGoose/orgs",
"repos_url": "https://api.github.com/users/TheUntitledGoose/repos",
"events_url": "https://api.github.com/users/TheUntitledGoose/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheUntitledGoose/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-06-12T05:34:51
| 2024-06-13T21:24:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've tested this with `dolphin-llama3:latest` and `llama-dolphin:latest`. I believe this might also be an issue with other models.
This is less of an Ollama issue, but after supplying the messages field with a few user messages such as:
```js
const history = [
{ role: 'user', content: 'respond' },
{ role: 'user', content: 'a' },
{ role: 'user', content: 'la' },
{ role: 'user', content: 'hi' },
{ role: 'user', content: 'hallo' },
{ role: 'user', content: 'test' }
]
```
I guess it learns to just not speak? It returns nothing in the response content. I was thinking of combining all of these into one user message, but then the AI starts speaking as if it were me.
I am sending this to the `/api/chat` endpoint.
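Chat-tuned models are generally trained on alternating user/assistant turns, so a run of consecutive `user` messages is out of distribution for them. Whether that fully explains the empty responses is a guess, but reshaping the history so roles strictly alternate is a cheap experiment; a sketch in Python (the `(ok)` placeholder assistant text is made up for illustration):

```python
raw = ["respond", "a", "la", "hi", "hallo", "test"]

history = []
for text in raw:
    history.append({"role": "user", "content": text})
    # Placeholder assistant turn so two user messages never touch.
    history.append({"role": "assistant", "content": "(ok)"})
history.pop()  # leave the final user turn awaiting the model's reply

roles = [m["role"] for m in history]
assert all(a != b for a, b in zip(roles, roles[1:]))  # strictly alternating
print(len(history))  # 11 turns: 6 user + 5 placeholder assistant
```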
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.43
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4993/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2524
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2524/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2524/comments
|
https://api.github.com/repos/ollama/ollama/issues/2524/events
|
https://github.com/ollama/ollama/issues/2524
| 2,137,467,945
|
I_kwDOJ0Z1Ps5_Zywp
| 2,524
|
"CPU does not have AVX or AVX2, disabling GPU support"
|
{
"login": "khromov",
"id": 1207507,
"node_id": "MDQ6VXNlcjEyMDc1MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1207507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khromov",
"html_url": "https://github.com/khromov",
"followers_url": "https://api.github.com/users/khromov/followers",
"following_url": "https://api.github.com/users/khromov/following{/other_user}",
"gists_url": "https://api.github.com/users/khromov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khromov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khromov/subscriptions",
"organizations_url": "https://api.github.com/users/khromov/orgs",
"repos_url": "https://api.github.com/users/khromov/repos",
"events_url": "https://api.github.com/users/khromov/events{/privacy}",
"received_events_url": "https://api.github.com/users/khromov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-02-15T21:19:27
| 2024-02-16T16:25:56
| 2024-02-16T16:25:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
👋 Just downloaded the latest Windows preview. Ollama does work, but the GPU is not being used at all, as per the title message. Using Windows 11, an RTX 2070, and the latest Nvidia game-ready drivers.
Command:
```
ollama run llama2
>>> Hello!
...
```
Log:
```
time=2024-02-15T22:13:55.132+01:00 level=INFO source=images.go:706 msg="total blobs: 6"
time=2024-02-15T22:13:55.133+01:00 level=INFO source=images.go:713 msg="total unused blobs removed: 0"
time=2024-02-15T22:13:55.135+01:00 level=INFO source=routes.go:1014 msg="Listening on 127.0.0.1:11434 (version 0.1.25)"
time=2024-02-15T22:13:55.135+01:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-15T22:13:55.403+01:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cpu cuda_v11.3]"
time=2024-02-15T22:13:55.403+01:00 level=DEBUG source=payload_common.go:147 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
[GIN] 2024/02/15 - 22:13:55 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/02/15 - 22:13:55 | 200 | 1.0454ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/02/15 - 22:13:55 | 200 | 1.0465ms | 127.0.0.1 | POST "/api/show"
time=2024-02-15T22:13:56.808+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-15T22:13:56.808+01:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library nvml.dll"
time=2024-02-15T22:13:56.808+01:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [c:\\Windows\\System32\\nvml.dll C:\\WINDOWS\\system32\\nvml.dll* C:\\WINDOWS\\nvml.dll* C:\\WINDOWS\\System32\\Wbem\\nvml.dll* C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\nvml.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll* C:\\Program Files\\Git\\cmd\\nvml.dll* C:\\Users\\k\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvml.dll* C:\\Users\\k\\AppData\\Local\\Programs\\Python\\Python310\\nvml.dll* C:\\Users\\k\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll* C:\\Users\\k\\AppData\\Local\\GitHubDesktop\\bin\\nvml.dll* C:\\Users\\k\\AppData\\Local\\Programs\\Ollama\\nvml.dll*]"
time=2024-02-15T22:13:56.813+01:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [c:\\Windows\\System32\\nvml.dll C:\\WINDOWS\\system32\\nvml.dll]"
time=2024-02-15T22:13:56.833+01:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-02-15T22:13:56.833+01:00 level=INFO source=cpu_common.go:18 msg="CPU does not have vector extensions"
time=2024-02-15T22:13:56.833+01:00 level=WARN source=gpu.go:128 msg="CPU does not have AVX or AVX2, disabling GPU support."
time=2024-02-15T22:13:56.833+01:00 level=INFO source=cpu_common.go:18 msg="CPU does not have vector extensions"
time=2024-02-15T22:13:56.833+01:00 level=WARN source=gpu.go:128 msg="CPU does not have AVX or AVX2, disabling GPU support."
time=2024-02-15T22:13:56.833+01:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-02-15T22:13:56.833+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\\Users\\k\\AppData\\Local\\Temp\\ollama451307992\\cpu\\ext_server.dll]"
time=2024-02-15T22:13:56.833+01:00 level=INFO source=dyn_ext_server.go:380 msg="Updating PATH to C:\\Users\\k\\AppData\\Local\\Temp\\ollama451307992\\cpu;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Git\\cmd;C:\\Users\\k\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\k\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\k\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\k\\AppData\\Local\\GitHubDesktop\\bin;C:\\Users\\k\\AppData\\Local\\Programs\\Ollama"
time=2024-02-15T22:13:56.844+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\k\\AppData\\Local\\Temp\\ollama451307992\\cpu\\ext_server.dll"
time=2024-02-15T22:13:56.844+01:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1708031636] system info: AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 |
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from C:\Users\k\.ollama\models\blobs\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: CPU buffer size = 3647.87 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CPU input buffer size = 13.01 MiB
llama_new_context_with_model: CPU compute buffer size = 160.00 MiB
llama_new_context_with_model: graph splits (measure): 1
[1708031637] warming up the model with an empty run
[1708031643] Available slots:
[1708031643] -> Slot 0 - max context: 2048
time=2024-02-15T22:14:03.225+01:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
time=2024-02-15T22:14:03.225+01:00 level=DEBUG source=prompt.go:175 msg="prompt now fits in context window" required=1 window=2048
[GIN] 2024/02/15 - 22:14:03 | 200 | 7.6937168s | 127.0.0.1 | POST "/api/chat"
[1708031643] llama server main loop starting
[1708031643] all slots are idle and system prompt is empty, clear the KV cache
```
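The reporter is on Windows, but for Linux users who hit the same "CPU does not have AVX" message, the kernel's own view of the CPU flags is an easy cross-check against Ollama's detection (a sketch; the `/proc/cpuinfo` path is Linux-only):

```python
def cpu_flags(path="/proc/cpuinfo"):
    """Collect the CPU feature flags the Linux kernel reports."""
    flags = set()
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
    except OSError:
        pass  # not Linux (or /proc unavailable)
    return flags

# If this prints {'avx', 'avx2'} while Ollama logs "CPU does not have AVX",
# the detection (not the CPU) is at fault.
print({"avx", "avx2"} & cpu_flags())
```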
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2524/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3511
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3511/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3511/comments
|
https://api.github.com/repos/ollama/ollama/issues/3511/events
|
https://github.com/ollama/ollama/issues/3511
| 2,229,135,074
|
I_kwDOJ0Z1Ps6E3ebi
| 3,511
|
On Windows, launching ollama from the shortcut or executable by clicking causes very slow tokens generation, but launching from commandline is fast
|
{
"login": "lrq3000",
"id": 1118942,
"node_id": "MDQ6VXNlcjExMTg5NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1118942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lrq3000",
"html_url": "https://github.com/lrq3000",
"followers_url": "https://api.github.com/users/lrq3000/followers",
"following_url": "https://api.github.com/users/lrq3000/following{/other_user}",
"gists_url": "https://api.github.com/users/lrq3000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lrq3000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lrq3000/subscriptions",
"organizations_url": "https://api.github.com/users/lrq3000/orgs",
"repos_url": "https://api.github.com/users/lrq3000/repos",
"events_url": "https://api.github.com/users/lrq3000/events{/privacy}",
"received_events_url": "https://api.github.com/users/lrq3000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 41
| 2024-04-06T08:08:35
| 2024-10-17T16:58:52
| 2024-09-21T23:54:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Since installing ollama (v0.1.30) on Windows 11 Pro, I have run into a peculiar issue. When I launch ollama from the installed shortcut, which launches "ollama app.exe", or when I boot up my OS (which also starts the same shortcut as configured by the ollama installer), ollama is extremely slow. If I do an "ollama run deepseek-coder", model startup takes a very long time, several minutes, and when I type any input (e.g., Hello), it again takes several minutes to generate each token (instead of 200-500ms/T with the workarounds).
However, I could fix the issue by simply closing the systray icon, and then either:
* type `ollama serve` in a terminal, but then I need to keep this open and I don't get the ollama systray icon.
* type `ollama run deepseek-coder` (or any other model), which will then also launch the ollama systray icon, just like launching `ollama app.exe`, but this time it works flawlessly, just like `ollama serve`.
I can confirm I can easily reproduce the bug simply by launching `ollama app.exe` manually, and the bug is not present with `ollama serve` and `ollama run <model>` (once `ollama app.exe` is first closed of course).
I read the logs but I did not find anything particularly telling. I will post a trace soon.
/EDIT: Here are the logs for when I launch `ollama app.exe` and it's slower (I launched `ollama app.exe` from the Windows shortcut then `ollama run deepseek-coder:6.7b-instruct-q8_0` then I type `Hello` as a prompt, then CTRL-C to stop generation that was too long after 2 tokens):
[app.log](https://github.com/ollama/ollama/files/14892997/app.log)
[server.log](https://github.com/ollama/ollama/files/14892998/server.log)
Here are the logs for when I launch `ollama run deepseek-coder:6.7b-instruct-q8_0` directly when `ollama app.exe` is killed:
[app.log](https://github.com/ollama/ollama/files/14893003/app.log)
[server.log](https://github.com/ollama/ollama/files/14893002/server.log)
### What did you expect to see?
200-500ms/T generation speed and much faster model initialization, instead of several minutes for each.
### Steps to reproduce
Launch `ollama app.exe` on Windows; it will be much slower than `ollama serve` or `ollama run <model>`.
### Are there any recent changes that introduced the issue?
I don't know, I never used ollama before (since it was not available on Windows until recently).
### OS
Windows
### Architecture
x86
### Platform
_No response_
### Ollama version
0.1.30
### GPU
Nvidia
### GPU info
Nvidia GeForce 3060 Laptop
### CPU
Intel
### Other software
Intel i7-12700h
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3511/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3511/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5828
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5828/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5828/comments
|
https://api.github.com/repos/ollama/ollama/issues/5828/events
|
https://github.com/ollama/ollama/issues/5828
| 2,421,327,515
|
I_kwDOJ0Z1Ps6QUoab
| 5,828
|
Will paged attention be added when OLLAMA_NUM_PARALLEL is set higher than 1?
|
{
"login": "b-Snaas",
"id": 117536828,
"node_id": "U_kgDOBwF4PA",
"avatar_url": "https://avatars.githubusercontent.com/u/117536828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/b-Snaas",
"html_url": "https://github.com/b-Snaas",
"followers_url": "https://api.github.com/users/b-Snaas/followers",
"following_url": "https://api.github.com/users/b-Snaas/following{/other_user}",
"gists_url": "https://api.github.com/users/b-Snaas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/b-Snaas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/b-Snaas/subscriptions",
"organizations_url": "https://api.github.com/users/b-Snaas/orgs",
"repos_url": "https://api.github.com/users/b-Snaas/repos",
"events_url": "https://api.github.com/users/b-Snaas/events{/privacy}",
"received_events_url": "https://api.github.com/users/b-Snaas/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-07-21T10:04:29
| 2024-10-09T13:15:46
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I experimented with OLLAMA_NUM_PARALLEL on GPUs with a large amount of VRAM, but I could not get a real benefit in total aggregated tokens per second when posting 10 requests at the same time. I assume this is because ollama does not have PagedAttention. Are there plans to optimize inference for a large number of concurrent requests?
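For reference, the measurement described above can be reproduced with a small script. This is only a sketch: the model name and local endpoint are assumptions, and `eval_count` is the generated-token count Ollama reports in a non-streaming response; the aggregate-throughput helper is plain arithmetic.

```python
import json
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def generate(prompt, model="llama3", host="http://127.0.0.1:11434"):
    """Send one non-streaming /api/generate request and return the JSON reply."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def aggregate_tps(token_counts, wall_seconds):
    """Total tokens produced across all concurrent requests per wall-clock second."""
    return sum(token_counts) / wall_seconds

def benchmark(prompts, model="llama3"):
    """Fire all prompts at once and report aggregate tokens/second."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
        results = list(pool.map(lambda p: generate(p, model), prompts))
    wall = time.time() - start
    return aggregate_tps([r.get("eval_count", 0) for r in results], wall)
```

If parallel decoding helps, `benchmark` with 10 prompts should report noticeably more aggregate tokens/second than 10 sequential calls.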
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5828/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/5828/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6650
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6650/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6650/comments
|
https://api.github.com/repos/ollama/ollama/issues/6650/events
|
https://github.com/ollama/ollama/issues/6650
| 2,506,719,889
|
I_kwDOJ0Z1Ps6VaYKR
| 6,650
|
ollama serve does not finish after a long wait
|
{
"login": "lifelongeeek",
"id": 127937907,
"node_id": "U_kgDOB6Atcw",
"avatar_url": "https://avatars.githubusercontent.com/u/127937907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lifelongeeek",
"html_url": "https://github.com/lifelongeeek",
"followers_url": "https://api.github.com/users/lifelongeeek/followers",
"following_url": "https://api.github.com/users/lifelongeeek/following{/other_user}",
"gists_url": "https://api.github.com/users/lifelongeeek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lifelongeeek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lifelongeeek/subscriptions",
"organizations_url": "https://api.github.com/users/lifelongeeek/orgs",
"repos_url": "https://api.github.com/users/lifelongeeek/repos",
"events_url": "https://api.github.com/users/lifelongeeek/events{/privacy}",
"received_events_url": "https://api.github.com/users/lifelongeeek/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-05T04:05:30
| 2024-09-05T04:25:52
| 2024-09-05T04:25:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I tried `ollama serve` in a container, but it does not complete even after waiting a very long time. Could anyone suggest a solution?
Here is the log.
```
root@d39fcb3d6754: # ollama serve
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPaNPlXFSbC0urQ1ESCmmOMdA/yq1Dem4qWNwPtKDbuc
2024/09/04 12:54:30 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-09-05T03:54:30.399Z level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-09-05T03:54:30.399Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-05T03:54:30.399Z level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.9)"
time=2024-09-05T03:54:30.408Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama844009318/runners
time=2024-09-05T03:54:41.802Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12 rocm_v60102]"
time=2024-09-05T03:54:41.802Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-09-05T03:54:42.152Z level=INFO source=types.go:107 msg="inference compute" id=GPU-0c8e7b28-0c88-e46b-0269-e472c7044e62 library=cuda variant=v12 compute=8.0 driver=12.3 name="NVIDIA A100 80GB PCIe" total="79.2 GiB" available="78.7 GiB"
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.9
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6650/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7240
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7240/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7240/comments
|
https://api.github.com/repos/ollama/ollama/issues/7240/events
|
https://github.com/ollama/ollama/issues/7240
| 2,594,920,904
|
I_kwDOJ0Z1Ps6aq1nI
| 7,240
|
Pull Private Huggingface Model
|
{
"login": "DaddyCodesAlot",
"id": 176133641,
"node_id": "U_kgDOCn-WCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/176133641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DaddyCodesAlot",
"html_url": "https://github.com/DaddyCodesAlot",
"followers_url": "https://api.github.com/users/DaddyCodesAlot/followers",
"following_url": "https://api.github.com/users/DaddyCodesAlot/following{/other_user}",
"gists_url": "https://api.github.com/users/DaddyCodesAlot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DaddyCodesAlot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DaddyCodesAlot/subscriptions",
"organizations_url": "https://api.github.com/users/DaddyCodesAlot/orgs",
"repos_url": "https://api.github.com/users/DaddyCodesAlot/repos",
"events_url": "https://api.github.com/users/DaddyCodesAlot/events{/privacy}",
"received_events_url": "https://api.github.com/users/DaddyCodesAlot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-10-17T14:31:26
| 2024-11-22T14:14:54
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, so I believe it's now possible to pull Hugging Face models directly by prefixing the model name with `hf.co` in the pull command. I would just like clarity on how this works with private models. I have my Hugging Face token set as an environment variable, but I can't seem to pull a private model.
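For context, this is how a public `hf.co/...` model can be pulled through the local API; the repo name in the test is only a placeholder, and how to attach a Hugging Face token for private repos is exactly the open question here.

```python
import json
import urllib.request

def pull_payload(name):
    # hf.co/<user>/<repo> names pull GGUF models straight from Hugging Face.
    return {"name": name, "stream": False}

def pull_model(name, host="http://127.0.0.1:11434"):
    """POST /api/pull on a local Ollama server (works for public repos)."""
    req = urllib.request.Request(
        f"{host}/api/pull",
        data=json.dumps(pull_payload(name)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```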
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7240/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/7240/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5423
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5423/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5423/comments
|
https://api.github.com/repos/ollama/ollama/issues/5423/events
|
https://github.com/ollama/ollama/issues/5423
| 2,384,900,396
|
I_kwDOJ0Z1Ps6OJrEs
| 5,423
|
`ollama create` progress
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-07-01T23:23:05
| 2024-07-16T23:49:29
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
`ollama create` doesn't report back any progress, unlike `ollama pull` or `ollama push`.
- [x] Copying files (transferring model data)
- [x] Quantization
- [ ] Converting
Note: a full progress bar isn't required; it can be as simple as adding a percentage:
```
% ollama create -f Modelfile test
transferring model data 5% ⠇
```
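A minimal sketch of such a status line; the spinner frames are an assumption modeled on the braille spinner the CLI already uses for `pull`:

```python
FRAMES = "⠋⠙⠹⠸⠼⠴⠦⠧⠇⠏"  # assumed braille spinner frames

def render_progress(stage, percent, tick):
    """One status line in the style of `ollama pull`: stage name, percent, spinner."""
    return f"{stage} {percent}% {FRAMES[tick % len(FRAMES)]}"
```

Printed with a carriage return (`print(line, end="\r")`) on each tick, this gives an in-place updating line without a full progress bar.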
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5423/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5423/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/486
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/486/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/486/comments
|
https://api.github.com/repos/ollama/ollama/issues/486/events
|
https://github.com/ollama/ollama/pull/486
| 1,886,428,605
|
PR_kwDOJ0Z1Ps5Zz6Fe
| 486
|
fix: retry push on expired token
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-07T19:04:36
| 2023-09-07T20:58:35
| 2023-09-07T20:58:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/486",
"html_url": "https://github.com/ollama/ollama/pull/486",
"diff_url": "https://github.com/ollama/ollama/pull/486.diff",
"patch_url": "https://github.com/ollama/ollama/pull/486.patch",
"merged_at": "2023-09-07T20:58:34"
}
|
There are two bugs that need to be fixed:
1. `makeRequest` to the `redirectURL` should not supply `regOpts`, since the redirect target is not the registry. Doing so erroneously overrides the `Authorization` header, making the request invalid.
2. The upload chunk was not resetting the section correctly. It also should interrupt the goroutine writing into the pipe.
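Bug 1 is an instance of a general pattern: presigned redirect URLs (e.g. to blob storage) embed their own credentials, so forwarding the registry's bearer token makes the request invalid. A language-neutral illustration in Python, with hypothetical names (`headers_for` is not the actual code):

```python
from urllib.parse import urlparse

def headers_for(url, registry_host, registry_auth):
    """Only attach the registry bearer token when talking to the registry itself;
    redirect targets (presigned URLs) carry their own auth in the query string."""
    headers = {"User-Agent": "example-client"}
    if urlparse(url).netloc == registry_host:
        headers["Authorization"] = registry_auth
    return headers
```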
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/486/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2997
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2997/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2997/comments
|
https://api.github.com/repos/ollama/ollama/issues/2997/events
|
https://github.com/ollama/ollama/issues/2997
| 2,175,155,254
|
I_kwDOJ0Z1Ps6Bpjw2
| 2,997
|
Can I force ollama to produce shorter responses?
|
{
"login": "Anirudh257",
"id": 16001446,
"node_id": "MDQ6VXNlcjE2MDAxNDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/16001446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Anirudh257",
"html_url": "https://github.com/Anirudh257",
"followers_url": "https://api.github.com/users/Anirudh257/followers",
"following_url": "https://api.github.com/users/Anirudh257/following{/other_user}",
"gists_url": "https://api.github.com/users/Anirudh257/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Anirudh257/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Anirudh257/subscriptions",
"organizations_url": "https://api.github.com/users/Anirudh257/orgs",
"repos_url": "https://api.github.com/users/Anirudh257/repos",
"events_url": "https://api.github.com/users/Anirudh257/events{/privacy}",
"received_events_url": "https://api.github.com/users/Anirudh257/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-08T02:25:55
| 2024-03-13T17:25:20
| 2024-03-11T22:21:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I want to use the llama2 model available in Ollama to produce shorter outputs, i.e. the equivalent of the ``max_new_tokens`` and ``max_length`` parameters in https://huggingface.co/docs/transformers/en/main_classes/text_generation. Can I prompt the LLM to generate shorter sequences while keeping the meaning the same?
There are some approaches given in https://www.reddit.com/r/LocalLLaMA/comments/14k7f5w/any_way_to_limit_the_output_to_a_specific_line/ but they don't work well.
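Ollama does expose a counterpart to ``max_new_tokens``: the ``num_predict`` option, which caps the number of generated tokens. A minimal request payload (model name and limit are just examples):

```python
def generate_payload(model, prompt, max_tokens):
    """Build a /api/generate body; `num_predict` caps generated tokens,
    roughly the counterpart of transformers' `max_new_tokens`."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_predict": max_tokens},
    }
```

Note `num_predict` truncates hard at the limit; to keep the meaning intact in fewer tokens, combine it with a prompt instruction such as "answer in one sentence".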
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2997/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7732
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7732/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7732/comments
|
https://api.github.com/repos/ollama/ollama/issues/7732/events
|
https://github.com/ollama/ollama/issues/7732
| 2,670,576,134
|
I_kwDOJ0Z1Ps6fLcIG
| 7,732
|
Why is generated content missing when reader-lm 1.5b processes HTML?
|
{
"login": "gubinjie",
"id": 37869445,
"node_id": "MDQ6VXNlcjM3ODY5NDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/37869445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gubinjie",
"html_url": "https://github.com/gubinjie",
"followers_url": "https://api.github.com/users/gubinjie/followers",
"following_url": "https://api.github.com/users/gubinjie/following{/other_user}",
"gists_url": "https://api.github.com/users/gubinjie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gubinjie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gubinjie/subscriptions",
"organizations_url": "https://api.github.com/users/gubinjie/orgs",
"repos_url": "https://api.github.com/users/gubinjie/repos",
"events_url": "https://api.github.com/users/gubinjie/events{/privacy}",
"received_events_url": "https://api.github.com/users/gubinjie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-11-19T03:05:29
| 2024-11-20T02:46:34
| 2024-11-20T02:46:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
info = '''<body class="body-color">
<div class="p14-product-2-list">
<dl>
<dt>
<a href="https://www." target="_blank" title="651524 200mah " src="https://www.16b1_140.jpg" title="651524"> </a>
</dt>
<dd>
<h4><a href="https://w524.html" target="_blank" title="651524 20</a></h4>
<div class="p14-product-2-desc">
test
</div>
</dd>
</dl>
...........
</div> </body>'''
import requests

url = "http://127.0.0.1:1434/api/generate"
data = {
    "model": "reader-lm:1.5b",
    "prompt": info,
    "stream": False,
    "options": {
        "temperature": 0
    }
}

try:
    response = requests.post(url, json=data)
    if response.status_code == 200:
        print("Response:", response.json())
    else:
        print("Failed with status code:", response.status_code)
        print("Error:", response.text)
except Exception as e:
    print("Error occurred:", e)
```
The incoming HTML contains four `<dl>` lists, but after conversion only some of them appear in the output.
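One common cause of truncated output with long HTML inputs is the model's default context window cutting off the prompt. A minimal sketch of the same request with a larger `num_ctx` (8192 is an assumed value here, not something from this report — tune it to the model) would be:

```python
def build_payload(model: str, prompt: str, num_ctx: int = 8192) -> dict:
    # Same request shape as the snippet above, with the context window
    # raised via the standard Ollama "num_ctx" option.
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": 0, "num_ctx": num_ctx},
    }

payload = build_payload("reader-lm:1.5b", "<body>...</body>")
```

The resulting dict can be passed to `requests.post(url, json=payload)` exactly as in the snippet above.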
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.4.1
|
{
"login": "gubinjie",
"id": 37869445,
"node_id": "MDQ6VXNlcjM3ODY5NDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/37869445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gubinjie",
"html_url": "https://github.com/gubinjie",
"followers_url": "https://api.github.com/users/gubinjie/followers",
"following_url": "https://api.github.com/users/gubinjie/following{/other_user}",
"gists_url": "https://api.github.com/users/gubinjie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gubinjie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gubinjie/subscriptions",
"organizations_url": "https://api.github.com/users/gubinjie/orgs",
"repos_url": "https://api.github.com/users/gubinjie/repos",
"events_url": "https://api.github.com/users/gubinjie/events{/privacy}",
"received_events_url": "https://api.github.com/users/gubinjie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7732/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4928
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4928/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4928/comments
|
https://api.github.com/repos/ollama/ollama/issues/4928/events
|
https://github.com/ollama/ollama/issues/4928
| 2,341,543,572
|
I_kwDOJ0Z1Ps6LkR6U
| 4,928
|
Support for Qwen2-7B-Instruct
|
{
"login": "Leroy-X",
"id": 13515498,
"node_id": "MDQ6VXNlcjEzNTE1NDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/13515498?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leroy-X",
"html_url": "https://github.com/Leroy-X",
"followers_url": "https://api.github.com/users/Leroy-X/followers",
"following_url": "https://api.github.com/users/Leroy-X/following{/other_user}",
"gists_url": "https://api.github.com/users/Leroy-X/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Leroy-X/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Leroy-X/subscriptions",
"organizations_url": "https://api.github.com/users/Leroy-X/orgs",
"repos_url": "https://api.github.com/users/Leroy-X/repos",
"events_url": "https://api.github.com/users/Leroy-X/events{/privacy}",
"received_events_url": "https://api.github.com/users/Leroy-X/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-06-08T08:11:22
| 2024-06-08T10:44:15
| 2024-06-08T10:44:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
[https://huggingface.co/Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)
Thanks.
|
{
"login": "Leroy-X",
"id": 13515498,
"node_id": "MDQ6VXNlcjEzNTE1NDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/13515498?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leroy-X",
"html_url": "https://github.com/Leroy-X",
"followers_url": "https://api.github.com/users/Leroy-X/followers",
"following_url": "https://api.github.com/users/Leroy-X/following{/other_user}",
"gists_url": "https://api.github.com/users/Leroy-X/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Leroy-X/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Leroy-X/subscriptions",
"organizations_url": "https://api.github.com/users/Leroy-X/orgs",
"repos_url": "https://api.github.com/users/Leroy-X/repos",
"events_url": "https://api.github.com/users/Leroy-X/events{/privacy}",
"received_events_url": "https://api.github.com/users/Leroy-X/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4928/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/1425
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1425/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1425/comments
|
https://api.github.com/repos/ollama/ollama/issues/1425/events
|
https://github.com/ollama/ollama/pull/1425
| 2,031,727,648
|
PR_kwDOJ0Z1Ps5heYcg
| 1,425
|
fix: restore modelfile system in prompt template
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-08T00:28:05
| 2023-12-08T19:20:20
| 2023-12-08T19:20:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1425",
"html_url": "https://github.com/ollama/ollama/pull/1425",
"diff_url": "https://github.com/ollama/ollama/pull/1425.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1425.patch",
"merged_at": "2023-12-08T19:20:19"
}
|
In #1244 this line which sets the modelfile system variable in the template got removed. It must still be there to apply the system template from the modelfile.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1425/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5834
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5834/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5834/comments
|
https://api.github.com/repos/ollama/ollama/issues/5834/events
|
https://github.com/ollama/ollama/issues/5834
| 2,421,599,819
|
I_kwDOJ0Z1Ps6QVq5L
| 5,834
|
Windows Client: Provide a way to allow connections to Ollama from web browser origins other than localhost and 0.0.0.0
|
{
"login": "Dinkh",
"id": 658372,
"node_id": "MDQ6VXNlcjY1ODM3Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/658372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dinkh",
"html_url": "https://github.com/Dinkh",
"followers_url": "https://api.github.com/users/Dinkh/followers",
"following_url": "https://api.github.com/users/Dinkh/following{/other_user}",
"gists_url": "https://api.github.com/users/Dinkh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dinkh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dinkh/subscriptions",
"organizations_url": "https://api.github.com/users/Dinkh/orgs",
"repos_url": "https://api.github.com/users/Dinkh/repos",
"events_url": "https://api.github.com/users/Dinkh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dinkh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 4
| 2024-07-21T21:01:46
| 2024-11-06T01:03:08
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Running my WebApp on my machine works.
```
import ollama from "ollama/browser"
ollama.list().then(...)
// => http:127.0.0.1:11434/api/tags
```
Running it from my web host does not work:
```
ollama.list().then(...)
// options => 204
// get => GET http://127.0.0.1:11434/api/tags net::ERR_FAILED
Access to fetch at 'http://127.0.0.1:11434/api/tags' from origin 'https://myWebSpace.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Private-Network' header was present in the preflight response for this private network request targeting the `local` address space.
```
Ollama installation:
Windows client 0.2.7
I have the server running as follows (without that the OPTIONS call doesn't work):
```
OLLAMA_ORIGINS=https://myWebSpace.com *
```
It would be great if this worked on Windows too.
(it seems to work on other platforms: https://github.com/ollama/ollama/issues/300)
After running `ollama serve` I can see the OPTIONS calls in the log, but not the other calls.
```
[GIN] 2024/07/21 - 23:06:59 | 204 | 0s | 127.0.0.1 | OPTIONS "/api/tags"
```
And these are the settings reported by the serve call:
```
OLLAMA_DEBUG:true
OLLAMA_FLASH_ATTENTION:false
OLLAMA_HOST:http://127.0.0.1:11434
OLLAMA_INTEL_GPU:false
OLLAMA_KEEP_ALIVE:5m0s
OLLAMA_LLM_LIBRARY:
OLLAMA_MAX_LOADED_MODELS:0
OLLAMA_MAX_QUEUE:512
OLLAMA_MAX_VRAM:0
OLLAMA_MODELS:d:\\ollama\\models
OLLAMA_NOHISTORY:false
OLLAMA_NOPRUNE:false
OLLAMA_NUM_PARALLEL:0
OLLAMA_ORIGINS:[http://myWebSpace.com * http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*]
OLLAMA_RUNNERS_DIR:C:\\Users\\LLM\\AppData\\Local\\Programs\\Ollama\\ollama_runners
OLLAMA_SCHED_SPREAD:false
OLLAMA_TMPDIR:d:\\ollama\\tmpDir
```
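On Windows, a commonly suggested workaround (an assumption on my part, not an official fix from this thread) is to set `OLLAMA_ORIGINS` persistently as a user environment variable and then restart the Ollama tray app so the server rereads it:

```shell
# Set the allowed origin persistently for the current user (Windows).
# "https://myWebSpace.com" is the reporter's placeholder domain.
setx OLLAMA_ORIGINS "https://myWebSpace.com"
# Then quit Ollama from the tray icon and start it again so the new value is read.
```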
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5834/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7663
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7663/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7663/comments
|
https://api.github.com/repos/ollama/ollama/issues/7663/events
|
https://github.com/ollama/ollama/issues/7663
| 2,657,672,454
|
I_kwDOJ0Z1Ps6eaN0G
| 7,663
|
Errors and abnormal per-thread latency when calling the Ollama API from multiple threads
|
{
"login": "jamine2024",
"id": 168888350,
"node_id": "U_kgDOChEIHg",
"avatar_url": "https://avatars.githubusercontent.com/u/168888350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamine2024",
"html_url": "https://github.com/jamine2024",
"followers_url": "https://api.github.com/users/jamine2024/followers",
"following_url": "https://api.github.com/users/jamine2024/following{/other_user}",
"gists_url": "https://api.github.com/users/jamine2024/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamine2024/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamine2024/subscriptions",
"organizations_url": "https://api.github.com/users/jamine2024/orgs",
"repos_url": "https://api.github.com/users/jamine2024/repos",
"events_url": "https://api.github.com/users/jamine2024/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamine2024/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-14T05:42:36
| 2024-11-14T18:20:23
| 2024-11-14T18:20:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Errors:
msg="failed to decode batch" error="could not find a KV slot for the batch - try reducing the size of the batch or increase the context. code: 1"
time=2024-11-13T20:41:50.757+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
These errors appear in server.log when calling the API from Python:
```
url = f'{ollama_api}/api/generate'
headers = {'Content-Type': 'application/json'}
data = {
    "model": ollama_mode,
    "prompt": f'{title} {config.get("out", "content")}'
}
response = requests.post(url, headers=headers, json=data, timeout=120)
responses = ''.join(json.loads(line)["response"] for line in response.text.split('\n') if line)
```
When multiple threads call the API concurrently, responses are delayed and eventually the errors above appear.
Is there a way to cancel or end a stalled request thread?

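One way to avoid overrunning the server's queue is to cap in-flight requests on the client side with a semaphore. This is a sketch under my own assumptions (the reporter's code is above; `MAX_IN_FLIGHT = 2` is an arbitrary value, and the fake `generate` body stands in for the real `requests.post` call):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 2  # assumed cap; tune to OLLAMA_NUM_PARALLEL on the server
gate = threading.Semaphore(MAX_IN_FLIGHT)

def generate(prompt: str) -> str:
    with gate:  # at most MAX_IN_FLIGHT threads reach the server at once
        # Real code would call requests.post(f"{ollama_api}/api/generate", ...)
        # here; this placeholder keeps the sketch self-contained.
        return f"done:{prompt}"

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(generate, [f"p{i}" for i in range(8)]))
```

With this pattern, extra worker threads block at the semaphore instead of piling requests onto Ollama's scheduler queue.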
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.1
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7663/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/4158
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4158/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4158/comments
|
https://api.github.com/repos/ollama/ollama/issues/4158/events
|
https://github.com/ollama/ollama/issues/4158
| 2,279,264,039
|
I_kwDOJ0Z1Ps6H2s8n
| 4,158
|
On Windows, assembling two models creates a path error with version 0.1.33; version 0.1.32 works correctly.
|
{
"login": "amonpaike",
"id": 884282,
"node_id": "MDQ6VXNlcjg4NDI4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/884282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amonpaike",
"html_url": "https://github.com/amonpaike",
"followers_url": "https://api.github.com/users/amonpaike/followers",
"following_url": "https://api.github.com/users/amonpaike/following{/other_user}",
"gists_url": "https://api.github.com/users/amonpaike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amonpaike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amonpaike/subscriptions",
"organizations_url": "https://api.github.com/users/amonpaike/orgs",
"repos_url": "https://api.github.com/users/amonpaike/repos",
"events_url": "https://api.github.com/users/amonpaike/events{/privacy}",
"received_events_url": "https://api.github.com/users/amonpaike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-05-05T00:55:59
| 2024-05-06T20:01:40
| 2024-05-06T20:01:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On Windows with version 0.1.33, assembling two models creates a path error.
Version 0.1.32 works correctly.
You can reproduce the bug by assembling this model (the modelfile for Ollama is also provided): [llava-llama-3-8b-v1_1-gguf](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf/tree/main)
```
FROM ./llava-llama-3-8b-v1_1-int4.gguf
FROM ./llava-llama-3-8b-v1_1-mmproj-f16.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER num_keep 4
PARAMETER num_ctx 4096
```
I'm reporting a slightly different example because this is the second time the bug has appeared.
```
ollama create LLava3_nvidia -f Llava_Llama3_Nvidia_ChatQ1_5_Q5.txt
transferring model data
panic: regexp: Compile(`(?im)^(from)\s+D:\AI_APPS\Llama3-ChatQA-1.5-8B.Q5_K_M.gguf\s*$`): error parsing regexp: invalid escape sequence: `\L`
goroutine 1 [running]:
regexp.MustCompile({0xc000391880, 0x3e})
regexp/regexp.go:317 +0xb4
github.com/ollama/ollama/cmd.CreateHandler(0xc000451208, {0xc0006f45d0, 0x1, 0x1a3aa63?})
github.com/ollama/ollama/cmd/cmd.go:122 +0x839
github.com/spf13/cobra.(*Command).execute(0xc000451208, {0xc0006f4570, 0x3, 0x3})
github.com/spf13/cobra@v1.7.0/command.go:940 +0x882
github.com/spf13/cobra.(*Command).ExecuteC(0xc000450f08)
github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
github.com/ollama/ollama/main.go:11 +0x4d
```
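The panic comes from embedding a raw Windows path (with backslashes) directly into a regular expression, where `\L` is an invalid escape. In Go the standard remedy is `regexp.QuoteMeta`; the same idea in Python, shown here only to illustrate the fix, is `re.escape`:

```python
import re

# Escaping the path turns each backslash into a literal match instead of
# an (invalid) escape sequence inside the pattern.
path = r"D:\AI_APPS\Llama3-ChatQA-1.5-8B.Q5_K_M.gguf"
pattern = re.compile(r"(?im)^(from)\s+" + re.escape(path) + r"\s*$")
assert pattern.match(r"FROM D:\AI_APPS\Llama3-ChatQA-1.5-8B.Q5_K_M.gguf")
```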
### OS
Windows
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.33
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4158/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8656
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8656/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8656/comments
|
https://api.github.com/repos/ollama/ollama/issues/8656/events
|
https://github.com/ollama/ollama/pull/8656
| 2,818,099,446
|
PR_kwDOJ0Z1Ps6JWxCt
| 8,656
|
Add DeepSeek R1 in README
|
{
"login": "zakk616",
"id": 26119949,
"node_id": "MDQ6VXNlcjI2MTE5OTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/26119949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zakk616",
"html_url": "https://github.com/zakk616",
"followers_url": "https://api.github.com/users/zakk616/followers",
"following_url": "https://api.github.com/users/zakk616/following{/other_user}",
"gists_url": "https://api.github.com/users/zakk616/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zakk616/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zakk616/subscriptions",
"organizations_url": "https://api.github.com/users/zakk616/orgs",
"repos_url": "https://api.github.com/users/zakk616/repos",
"events_url": "https://api.github.com/users/zakk616/events{/privacy}",
"received_events_url": "https://api.github.com/users/zakk616/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 2
| 2025-01-29T12:38:21
| 2025-01-30T05:37:47
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8656",
"html_url": "https://github.com/ollama/ollama/pull/8656",
"diff_url": "https://github.com/ollama/ollama/pull/8656.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8656.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8656/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8100
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8100/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8100/comments
|
https://api.github.com/repos/ollama/ollama/issues/8100/events
|
https://github.com/ollama/ollama/issues/8100
| 2,740,106,490
|
I_kwDOJ0Z1Ps6jUrT6
| 8,100
|
Ollama in docker container returns empty content on api/chat stream request made with http POST
|
{
"login": "MMaicki",
"id": 46030081,
"node_id": "MDQ6VXNlcjQ2MDMwMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/46030081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MMaicki",
"html_url": "https://github.com/MMaicki",
"followers_url": "https://api.github.com/users/MMaicki/followers",
"following_url": "https://api.github.com/users/MMaicki/following{/other_user}",
"gists_url": "https://api.github.com/users/MMaicki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MMaicki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MMaicki/subscriptions",
"organizations_url": "https://api.github.com/users/MMaicki/orgs",
"repos_url": "https://api.github.com/users/MMaicki/repos",
"events_url": "https://api.github.com/users/MMaicki/events{/privacy}",
"received_events_url": "https://api.github.com/users/MMaicki/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-12-14T19:16:12
| 2024-12-14T22:37:01
| 2024-12-14T22:37:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When the HTTP POST is made with the header `Content-Type: application/json` and a JSON body, the message content is empty:
```
STREAM {:cached nil, :request-time 39709, :repeatable? false, :protocol-version {:name HTTP, :major 1, :minor 1}, :streaming? true, :http-client #object[org.apache.http.impl.client.InternalHttpClient 0x5bceb2df org.apache.http.impl.client.InternalHttpClient@5bceb2df], :chunked? true, :reason-phrase OK, :headers {Content-Type application/x-ndjson, Date Sat, 14 Dec 2024 19:08:37 GMT, Connection close, Transfer-Encoding chunked}, :orig-content-encoding nil, :status 200, :length -1, :body #object[clj_http.core.proxy$java.io.FilterInputStream$ff19274a 0x24ed5877 clj_http.core.proxy$java.io.FilterInputStream$ff19274a@24ed5877], :trace-redirects []}
Stream opened successfully
CHUNK {"model":"llama3.2","created_at":"2024-12-14T19:08:37.034166114Z","message":{"role":"assistant","content":""},"done_reason":"stop","done":true,"total_duration":39701814719,"load_duration":17734968,"prompt_eval_count":370,"prompt_eval_duration":39673000000,"eval_count":1,"eval_duration":1000000}
```
At the same time, curl requests receive the stream data correctly.
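Ollama's `/api/chat` stream is `application/x-ndjson`: one JSON chunk per line, with content arriving incrementally in `message.content` until `done` is true. A client that only reads the final chunk sees the empty content shown above. A minimal sketch of accumulating the stream (the sample chunks are illustrative, not taken from this report):

```python
import json

def collect_content(ndjson_lines) -> str:
    # Join the incremental message.content fields from an NDJSON stream.
    parts = []
    for line in ndjson_lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

stream = [
    '{"message":{"role":"assistant","content":"Hel"},"done":false}',
    '{"message":{"role":"assistant","content":"lo"},"done":true}',
]
assert collect_content(stream) == "Hello"
```

The same accumulation applies whether the lines come from a file, a socket, or `response.iter_lines()` in `requests`.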
### OS
Windows, WSL2
### GPU
Intel
### CPU
Intel
### Ollama version
3.2
|
{
"login": "MMaicki",
"id": 46030081,
"node_id": "MDQ6VXNlcjQ2MDMwMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/46030081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MMaicki",
"html_url": "https://github.com/MMaicki",
"followers_url": "https://api.github.com/users/MMaicki/followers",
"following_url": "https://api.github.com/users/MMaicki/following{/other_user}",
"gists_url": "https://api.github.com/users/MMaicki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MMaicki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MMaicki/subscriptions",
"organizations_url": "https://api.github.com/users/MMaicki/orgs",
"repos_url": "https://api.github.com/users/MMaicki/repos",
"events_url": "https://api.github.com/users/MMaicki/events{/privacy}",
"received_events_url": "https://api.github.com/users/MMaicki/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8100/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4603
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4603/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4603/comments
|
https://api.github.com/repos/ollama/ollama/issues/4603/events
|
https://github.com/ollama/ollama/issues/4603
| 2,314,239,286
|
I_kwDOJ0Z1Ps6J8H02
| 4,603
|
Import module failed: pip install -r llm/llama.cpp/requirements.txt
|
{
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com/users/HougeLangley/followers",
"following_url": "https://api.github.com/users/HougeLangley/following{/other_user}",
"gists_url": "https://api.github.com/users/HougeLangley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HougeLangley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HougeLangley/subscriptions",
"organizations_url": "https://api.github.com/users/HougeLangley/orgs",
"repos_url": "https://api.github.com/users/HougeLangley/repos",
"events_url": "https://api.github.com/users/HougeLangley/events{/privacy}",
"received_events_url": "https://api.github.com/users/HougeLangley/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2024-05-24T03:04:15
| 2024-05-26T12:35:13
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Arch Linux, Python 3.12
```
(ollama) ╭─hougelangley at Arch-Legion in ~/ollama on main✘✘✘ 24-05-24 - 11:00:23
╰─(ollama) ⠠⠵ pip install -r llm/llama.cpp/requirements.txt
Collecting numpy~=1.24.4 (from -r llm/llama.cpp/./requirements/requirements-convert.txt (line 1))
Downloading numpy-1.24.4.tar.gz (10.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.9/10.9 MB 14.0 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
ERROR: Exception:
Traceback (most recent call last):
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 245, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/commands/install.py", line 377, in run
requirement_set = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 95, in resolve
result = self._result = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py", line 397, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py", line 173, in _add_to_criteria
if not criterion.candidates:
^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/resolvelib/structs.py", line 156, in __bool__
return bool(self._sequence)
^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 155, in __bool__
return any(self)
^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in <genexpr>
return (c for c in iterator if id(c) not in self._incompatible_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 47, in _iter_built
candidate = func()
^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 182, in _make_candidate_from_link
base: Optional[BaseCandidate] = self._make_base_candidate_from_link(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 228, in _make_base_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 290, in __init__
super().__init__(
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 156, in __init__
self.dist = self._prepare()
^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 222, in _prepare
dist = self._prepare_distribution()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 301, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/operations/prepare.py", line 525, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/operations/prepare.py", line 640, in _prepare_linked_requirement
dist = _get_prepared_distribution(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/operations/prepare.py", line 71, in _get_prepared_distribution
abstract_dist.prepare_distribution_metadata(
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/distributions/sdist.py", line 54, in prepare_distribution_metadata
self._install_build_reqs(finder)
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/distributions/sdist.py", line 124, in _install_build_reqs
build_reqs = self._get_build_requires_wheel()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/distributions/sdist.py", line 101, in _get_build_requires_wheel
return backend.get_requires_for_build_wheel()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_internal/utils/misc.py", line 745, in get_requires_for_build_wheel
return super().get_requires_for_build_wheel(config_settings=cs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 166, in get_requires_for_build_wheel
return self._call_hook('get_requires_for_build_wheel', {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 321, in _call_hook
raise BackendUnavailable(data.get('traceback', ''))
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most recent call last):
File "/home/hougelangley/ollama/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
obj = import_module(mod_path)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1310, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/tmp/pip-build-env-qkw6zb38/overlay/lib/python3.12/site-packages/setuptools/__init__.py", line 10, in <module>
import distutils.core
ModuleNotFoundError: No module named 'distutils'
```
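For context (not part of the original report): `distutils` was removed from the standard library in Python 3.12 (PEP 632), so build backends that still `import distutils.core` fail unless a modern setuptools is present to provide its shim. A small check, using a hypothetical helper name:

```python
import importlib.util


def has_distutils() -> bool:
    # True if "import distutils" would succeed: either the stdlib copy
    # (Python < 3.12) or the shim vendored by modern setuptools.
    return importlib.util.find_spec("distutils") is not None


print(has_distutils())
```

If this prints `False` on Python 3.12, upgrading `pip` and `setuptools` inside the virtualenv is a common workaround.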
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.0.0 git version
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4603/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8621
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8621/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8621/comments
|
https://api.github.com/repos/ollama/ollama/issues/8621/events
|
https://github.com/ollama/ollama/pull/8621
| 2,814,253,630
|
PR_kwDOJ0Z1Ps6JJqW1
| 8,621
|
Small typo in api.md
|
{
"login": "KeerthiNingegowda",
"id": 31515752,
"node_id": "MDQ6VXNlcjMxNTE1NzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/31515752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KeerthiNingegowda",
"html_url": "https://github.com/KeerthiNingegowda",
"followers_url": "https://api.github.com/users/KeerthiNingegowda/followers",
"following_url": "https://api.github.com/users/KeerthiNingegowda/following{/other_user}",
"gists_url": "https://api.github.com/users/KeerthiNingegowda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KeerthiNingegowda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KeerthiNingegowda/subscriptions",
"organizations_url": "https://api.github.com/users/KeerthiNingegowda/orgs",
"repos_url": "https://api.github.com/users/KeerthiNingegowda/repos",
"events_url": "https://api.github.com/users/KeerthiNingegowda/events{/privacy}",
"received_events_url": "https://api.github.com/users/KeerthiNingegowda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2025-01-27T23:13:57
| 2025-01-28T16:14:31
| 2025-01-28T06:17:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8621",
"html_url": "https://github.com/ollama/ollama/pull/8621",
"diff_url": "https://github.com/ollama/ollama/pull/8621.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8621.patch",
"merged_at": null
}
|
The /api/chat endpoint has `messages` as a parameter; the subsequent description of the `messages` object is misspelled as `message`.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8621/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5812
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5812/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5812/comments
|
https://api.github.com/repos/ollama/ollama/issues/5812/events
|
https://github.com/ollama/ollama/issues/5812
| 2,420,943,738
|
I_kwDOJ0Z1Ps6QTKt6
| 5,812
|
Mistral-Nemo support/bug
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-20T15:26:13
| 2024-07-23T18:03:45
| 2024-07-23T18:03:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
(Pythogora) developer@ai:~/PROJECTS$ ~/ollama/ollama run mistral-Nemo-Instruct-2407-f16:latest
Error: llama runner process has terminated: signal: aborted (core dumped) error loading model: check_tensor_dims: tensor 'blk.0.attn_q.weight' has wrong shape; expected 5120, 5120, got 5120, 4096, 1, 1
llama_load_model_from_file: exception loading model
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
latest from source.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5812/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6066
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6066/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6066/comments
|
https://api.github.com/repos/ollama/ollama/issues/6066/events
|
https://github.com/ollama/ollama/pull/6066
| 2,436,617,604
|
PR_kwDOJ0Z1Ps52z8GT
| 6,066
|
Patch for Tool Stream Compatibility
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-07-30T00:00:15
| 2024-09-27T09:29:12
| 2024-08-12T17:31:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6066",
"html_url": "https://github.com/ollama/ollama/pull/6066",
"diff_url": "https://github.com/ollama/ollama/pull/6066.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6066.patch",
"merged_at": null
}
|
Note: this is not real streaming; it pretty much dumps the stream of objects all at once at completion time.
```
{"model":"mistral","created_at":"2024-07-30T00:11:33.125585Z","message":{"role":"assistant","content":"","tool_calls":[{"function":{"name":"get_current_weather","arguments":{"format":"celsius","location":"Paris, France"}}}]},"done":false}
{"model":"mistral","created_at":"2024-07-30T00:11:33.125585Z","message":{"role":"assistant","content":"","tool_calls":[{"function":{"name":"get_current_weather","arguments":{"format":"celsius","location":"Tokyo, Japan"}}}]},"done":false}
{"model":"mistral","created_at":"2024-07-30T00:11:33.125585Z","message":{"role":"assistant","content":"","tool_calls":[{"function":{"name":"get_current_weather","arguments":{"format":"fahrenheit","location":"Boston, MA"}}}]},"done":false}
{"model":"mistral","created_at":"2024-07-30T00:11:33.125585Z","message":{"role":"assistant","content":""},"done_reason":"tool_calls","done":true,"total_duration":3788799791,"load_duration":1521528291,"prompt_eval_count":138,"prompt_eval_duration":224863000,"eval_count":130,"eval_duration":2038857000}
```
```
curl http://localhost:11434/api/chat -d '{
"model": "mistral",
"messages": [
{
"role": "user",
"content": "What is the weather today in Paris, Tokyo, and Boston?"
}
],
"stream": true,
"options": {
"temperature": 0
},
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The location to get the weather for, e.g. San Francisco, CA"
},
"format": {
"type": "string",
"description": "The format to return the weather in, e.g. 'celsius' or 'fahrenheit'",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location", "format"]
}
}
}
]
}'
```
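A minimal client-side sketch (hypothetical helper, not part of this change) of collecting tool calls from an NDJSON response stream like the dump above:

```python
import json


def collect_tool_calls(ndjson_lines):
    """Gather the function-call objects from an /api/chat NDJSON stream."""
    calls = []
    for line in ndjson_lines:
        obj = json.loads(line)
        # Each chunk may carry zero or more tool_calls on its message.
        calls.extend(
            tc["function"]
            for tc in obj.get("message", {}).get("tool_calls", [])
        )
        if obj.get("done"):
            break
    return calls


stream = [
    '{"message":{"role":"assistant","content":"","tool_calls":[{"function":{"name":"get_current_weather","arguments":{"format":"celsius","location":"Paris, France"}}}]},"done":false}',
    '{"message":{"role":"assistant","content":""},"done_reason":"tool_calls","done":true}',
]
print(collect_tool_calls(stream))
# → [{'name': 'get_current_weather', 'arguments': {'format': 'celsius', 'location': 'Paris, France'}}]
```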
#5915
#5993
#5989
#5796
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6066/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/ollama/ollama/issues/6066/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2129
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2129/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2129/comments
|
https://api.github.com/repos/ollama/ollama/issues/2129/events
|
https://github.com/ollama/ollama/issues/2129
| 2,092,814,524
|
I_kwDOJ0Z1Ps58vdC8
| 2,129
|
High CPU and GPU usage, even when no one is interacting with ollama
|
{
"login": "ThatCoffeeGuy",
"id": 24213618,
"node_id": "MDQ6VXNlcjI0MjEzNjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/24213618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThatCoffeeGuy",
"html_url": "https://github.com/ThatCoffeeGuy",
"followers_url": "https://api.github.com/users/ThatCoffeeGuy/followers",
"following_url": "https://api.github.com/users/ThatCoffeeGuy/following{/other_user}",
"gists_url": "https://api.github.com/users/ThatCoffeeGuy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThatCoffeeGuy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThatCoffeeGuy/subscriptions",
"organizations_url": "https://api.github.com/users/ThatCoffeeGuy/orgs",
"repos_url": "https://api.github.com/users/ThatCoffeeGuy/repos",
"events_url": "https://api.github.com/users/ThatCoffeeGuy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThatCoffeeGuy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-01-21T22:39:34
| 2024-06-21T12:12:43
| 2024-03-11T17:38:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey.
I used Ollama a few hours ago, only to notice now that the CPU usage is quite high and the GPU usage is around 30% while the model and the web UI are doing absolutely nothing.
lsof shows 1.8k open files, and the processes keep renewing their PIDs, so it's impossible to strace them. What's going on?

|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2129/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/759
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/759/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/759/comments
|
https://api.github.com/repos/ollama/ollama/issues/759/events
|
https://github.com/ollama/ollama/pull/759
| 1,938,616,682
|
PR_kwDOJ0Z1Ps5cj4k_
| 759
|
deprecate modelfile embed command
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-10-11T19:53:25
| 2023-10-18T08:02:57
| 2023-10-16T15:07:37
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/759",
"html_url": "https://github.com/ollama/ollama/pull/759",
"diff_url": "https://github.com/ollama/ollama/pull/759.diff",
"patch_url": "https://github.com/ollama/ollama/pull/759.patch",
"merged_at": "2023-10-16T15:07:37"
}
|
Embeddings in Modelfiles are a convenient idea, allowing a model to be packaged with embeddings created specifically for it, but the user experience of this implementation isn't up to par.
This change leaves the `/embed` endpoint, but deprecates `EMBED` in the Modelfile.
- Ollama doesn't have any models designed for embedding generation
- Generating embeddings from a Modelfile is slow, error prone, and only supports single-line text inputs
- Retrieving embeddings was too slow to be useful, and there was no mechanism to connect an external vector database
Instead of using the Modelfile, the right way to do this is with an external tool such as PrivateGPT or LlamaIndex that uses Ollama as the runner.
New behavior:
On create a modelfile with the embed command:
```
$ ollama create embed-test -f /Users/bruce/modelfiles/embedded/Modelfile
⠋ parsing modelfile Error: deprecated command: EMBED
```
On running a modelfile with the embed command:
```
2023/10/11 15:46:52 images.go:190: WARNING: model contains embeddings, but embeddings in modelfiles have been deprecated and will be ignored.
```
On running a modelfile with the embed in the template:
```
$ ollama run embed-test
⠦ Error: template: :5:7: executing "" at <.Embed>: can't evaluate field Embed in type struct { First bool; System string; Prompt string; Context []int }
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/759/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/759/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1938
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1938/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1938/comments
|
https://api.github.com/repos/ollama/ollama/issues/1938/events
|
https://github.com/ollama/ollama/issues/1938
| 2,077,794,956
|
I_kwDOJ0Z1Ps572KKM
| 1,938
|
ollama --version 0.1.20 not working
|
{
"login": "PhilipAmadasun",
"id": 55031054,
"node_id": "MDQ6VXNlcjU1MDMxMDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/55031054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipAmadasun",
"html_url": "https://github.com/PhilipAmadasun",
"followers_url": "https://api.github.com/users/PhilipAmadasun/followers",
"following_url": "https://api.github.com/users/PhilipAmadasun/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipAmadasun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipAmadasun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipAmadasun/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipAmadasun/orgs",
"repos_url": "https://api.github.com/users/PhilipAmadasun/repos",
"events_url": "https://api.github.com/users/PhilipAmadasun/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipAmadasun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 14
| 2024-01-11T23:58:02
| 2024-02-16T16:55:14
| 2024-02-16T16:55:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Our Ollama no longer works after upgrading to version `0.1.20`. All of the commands, for instance:
```
curl http://localhost:11434/api/chat -d '{
  "model": "llama2",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ]
}'
```
They just get stuck and don't run. What's going on? I believe this is the latest version; is it not stable? Do we have to downgrade Ollama? If so, how do we go about doing that?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1938/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/1938/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8034
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8034/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8034/comments
|
https://api.github.com/repos/ollama/ollama/issues/8034/events
|
https://github.com/ollama/ollama/pull/8034
| 2,731,497,287
|
PR_kwDOJ0Z1Ps6Ex7WN
| 8,034
|
cmd: Add --base2 option to ps to show model sizes in KiB/MiB/GiB
|
{
"login": "theasp",
"id": 7775024,
"node_id": "MDQ6VXNlcjc3NzUwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7775024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theasp",
"html_url": "https://github.com/theasp",
"followers_url": "https://api.github.com/users/theasp/followers",
"following_url": "https://api.github.com/users/theasp/following{/other_user}",
"gists_url": "https://api.github.com/users/theasp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theasp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theasp/subscriptions",
"organizations_url": "https://api.github.com/users/theasp/orgs",
"repos_url": "https://api.github.com/users/theasp/repos",
"events_url": "https://api.github.com/users/theasp/events{/privacy}",
"received_events_url": "https://api.github.com/users/theasp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-12-11T00:04:55
| 2024-12-11T00:04:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8034",
"html_url": "https://github.com/ollama/ollama/pull/8034",
"diff_url": "https://github.com/ollama/ollama/pull/8034.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8034.patch",
"merged_at": null
}
|
Add `--base2` option to ps to show model sizes in KiB/MiB/GiB. It also shows a decimal place, but I consider this a feature.
```
industrial:~/projects/ollama-src$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
DEFAULT/mistral-small-2409-22b:latest 671ad04c21ce 26 GB 7%/93% CPU/GPU Forever
industrial:~/projects/ollama-src$ ollama ps --base2
NAME ID SIZE PROCESSOR UNTIL
DEFAULT/mistral-small-2409-22b:latest 671ad04c21ce 24.4 GiB 7%/93% CPU/GPU Forever
```
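For context, the difference between the two unit systems can be sketched with a small Go helper (a hypothetical illustration of base-10 vs. base-2 size formatting, not the PR's actual implementation):

```go
package main

import "fmt"

// sizeString formats n bytes using either base-10 units (KB/MB/GB)
// or base-2 units (KiB/MiB/GiB), keeping one decimal place.
func sizeString(n uint64, base2 bool) string {
	unit := 1000.0
	suffixes := []string{"B", "KB", "MB", "GB", "TB"}
	if base2 {
		unit = 1024.0
		suffixes = []string{"B", "KiB", "MiB", "GiB", "TiB"}
	}
	v := float64(n)
	i := 0
	for v >= unit && i < len(suffixes)-1 {
		v /= unit
		i++
	}
	return fmt.Sprintf("%.1f %s", v, suffixes[i])
}

func main() {
	n := uint64(26214400000) // roughly the model size shown above
	fmt.Println(sizeString(n, false)) // 26.2 GB
	fmt.Println(sizeString(n, true))  // 24.4 GiB
}
```

The same byte count reads about 7% smaller in GiB than in GB, which is why the two `ps` outputs above differ.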
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8034/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5137
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5137/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5137/comments
|
https://api.github.com/repos/ollama/ollama/issues/5137/events
|
https://github.com/ollama/ollama/issues/5137
| 2,361,755,199
|
I_kwDOJ0Z1Ps6MxYY_
| 5,137
|
A problem with "ollama create"
|
{
"login": "Udacv",
"id": 126667614,
"node_id": "U_kgDOB4zLXg",
"avatar_url": "https://avatars.githubusercontent.com/u/126667614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Udacv",
"html_url": "https://github.com/Udacv",
"followers_url": "https://api.github.com/users/Udacv/followers",
"following_url": "https://api.github.com/users/Udacv/following{/other_user}",
"gists_url": "https://api.github.com/users/Udacv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Udacv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Udacv/subscriptions",
"organizations_url": "https://api.github.com/users/Udacv/orgs",
"repos_url": "https://api.github.com/users/Udacv/repos",
"events_url": "https://api.github.com/users/Udacv/events{/privacy}",
"received_events_url": "https://api.github.com/users/Udacv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-19T08:44:51
| 2024-10-17T20:42:01
| 2024-06-29T23:20:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
If I want to create a model from a GGUF that is split into two parts, how should I write my Modelfile?

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.44
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5137/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7891
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7891/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7891/comments
|
https://api.github.com/repos/ollama/ollama/issues/7891/events
|
https://github.com/ollama/ollama/issues/7891
| 2,707,109,591
|
I_kwDOJ0Z1Ps6hWzbX
| 7,891
|
Ubuntu Server 22.04 with `Out of memory` boot failure.
|
{
"login": "vahid67",
"id": 10948576,
"node_id": "MDQ6VXNlcjEwOTQ4NTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/10948576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vahid67",
"html_url": "https://github.com/vahid67",
"followers_url": "https://api.github.com/users/vahid67/followers",
"following_url": "https://api.github.com/users/vahid67/following{/other_user}",
"gists_url": "https://api.github.com/users/vahid67/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vahid67/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vahid67/subscriptions",
"organizations_url": "https://api.github.com/users/vahid67/orgs",
"repos_url": "https://api.github.com/users/vahid67/repos",
"events_url": "https://api.github.com/users/vahid67/events{/privacy}",
"received_events_url": "https://api.github.com/users/vahid67/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-11-30T10:03:21
| 2024-12-04T07:23:39
| 2024-12-04T07:23:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello,
This bug is related to a boot failure on Ubuntu Server 22.04 with an `Out of memory` error.
I'm trying to install Ollama on Ubuntu Server 22.04 to run a local dedicated server; the specs are a Core i9 12900, 32GB DDR5, and an RTX 3080.
I can install the NVIDIA driver using this method:
`sudo ubuntu-drivers install --gpgpu`
This command came from this page: https://ubuntu.com/server/docs/nvidia-drivers-installation
Everything works and I can reboot the server.
Now, if I install Ollama using the official command, everything works: I can run Ollama, pull a model, and communicate with it, and it uses the GPU as expected. But as soon as I reboot, Linux fails to boot with the mentioned error, and I can't even boot into recovery mode!
I formatted the drive and installed Ubuntu again, but this time I changed the installation order: I installed Ollama first. It failed to install the CUDA drivers, though Ollama itself was installed, so I installed the CUDA drivers manually from this page: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=deb_local
They installed without any problem, but again after a reboot the `Out of memory` error came back.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
v0.4.6 and v0.4.5
|
{
"login": "vahid67",
"id": 10948576,
"node_id": "MDQ6VXNlcjEwOTQ4NTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/10948576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vahid67",
"html_url": "https://github.com/vahid67",
"followers_url": "https://api.github.com/users/vahid67/followers",
"following_url": "https://api.github.com/users/vahid67/following{/other_user}",
"gists_url": "https://api.github.com/users/vahid67/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vahid67/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vahid67/subscriptions",
"organizations_url": "https://api.github.com/users/vahid67/orgs",
"repos_url": "https://api.github.com/users/vahid67/repos",
"events_url": "https://api.github.com/users/vahid67/events{/privacy}",
"received_events_url": "https://api.github.com/users/vahid67/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7891/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7364
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7364/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7364/comments
|
https://api.github.com/repos/ollama/ollama/issues/7364/events
|
https://github.com/ollama/ollama/issues/7364
| 2,614,953,776
|
I_kwDOJ0Z1Ps6b3Qcw
| 7,364
|
Data persistence
|
{
"login": "multiplicity-16",
"id": 186337493,
"node_id": "U_kgDOCxtI1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/186337493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/multiplicity-16",
"html_url": "https://github.com/multiplicity-16",
"followers_url": "https://api.github.com/users/multiplicity-16/followers",
"following_url": "https://api.github.com/users/multiplicity-16/following{/other_user}",
"gists_url": "https://api.github.com/users/multiplicity-16/gists{/gist_id}",
"starred_url": "https://api.github.com/users/multiplicity-16/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/multiplicity-16/subscriptions",
"organizations_url": "https://api.github.com/users/multiplicity-16/orgs",
"repos_url": "https://api.github.com/users/multiplicity-16/repos",
"events_url": "https://api.github.com/users/multiplicity-16/events{/privacy}",
"received_events_url": "https://api.github.com/users/multiplicity-16/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-10-25T19:35:03
| 2024-12-02T14:44:32
| 2024-12-02T14:44:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I love that I can load extensive public-domain resources directly from the internet into sessions and add hundreds of thousands of data points. I can then run knowledge-graph optimizations, as well as precision config changes, directly in the session. However, I am unable to get any of this data to persist. Other engine options don't have the ability to pull from live online data sources.
I would like ollama to support my local model copies having additional persistent data and indexes. For example, I've ingested 20GB of additional public data, tuned the accuracy and prompt methodologies, and run all the optional optimization tasks within the live Llama3.2 model, but as soon as I stop that model, it is all gone. Exporting the conversation manually isn't realistic as a regeneration method.
Data persistence would be game changing. I'm not a Python programmer, so writing wrappers is beyond me, but there seem to be key live-data capabilities in ollama that GPT4All doesn't have.
Side note, though a separate issue: it is also a bit annoying that save_config, load_config, and auto_save_conversations are non-functional.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7364/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6909
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6909/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6909/comments
|
https://api.github.com/repos/ollama/ollama/issues/6909/events
|
https://github.com/ollama/ollama/issues/6909
| 2,541,079,767
|
I_kwDOJ0Z1Ps6XdczX
| 6,909
|
InternVL 2.0 models
|
{
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/followers",
"following_url": "https://api.github.com/users/ddpasa/following{/other_user}",
"gists_url": "https://api.github.com/users/ddpasa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddpasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddpasa/subscriptions",
"organizations_url": "https://api.github.com/users/ddpasa/orgs",
"repos_url": "https://api.github.com/users/ddpasa/repos",
"events_url": "https://api.github.com/users/ddpasa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddpasa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 4
| 2024-09-22T13:23:06
| 2025-01-28T13:33:35
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The models are listed here:
https://huggingface.co/collections/OpenGVLab/internvl-20-667d3961ab5eb12c7ed1463e
1B: https://huggingface.co/OpenGVLab/InternVL2-1B
2B: https://huggingface.co/OpenGVLab/InternVL2-2B
4B: https://huggingface.co/OpenGVLab/InternVL2-4B
8B: https://huggingface.co/OpenGVLab/InternVL2-8B
26B: https://huggingface.co/OpenGVLab/InternVL2-26B
40B: https://huggingface.co/OpenGVLab/InternVL2-40B
76B: https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6909/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2324
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2324/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2324/comments
|
https://api.github.com/repos/ollama/ollama/issues/2324/events
|
https://github.com/ollama/ollama/issues/2324
| 2,114,629,685
|
I_kwDOJ0Z1Ps5-CrA1
| 2,324
|
Running Ollama with mixtral on Macbook pro m1 pro is incredibly slow
|
{
"login": "azurwastaken",
"id": 30268138,
"node_id": "MDQ6VXNlcjMwMjY4MTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/30268138?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/azurwastaken",
"html_url": "https://github.com/azurwastaken",
"followers_url": "https://api.github.com/users/azurwastaken/followers",
"following_url": "https://api.github.com/users/azurwastaken/following{/other_user}",
"gists_url": "https://api.github.com/users/azurwastaken/gists{/gist_id}",
"starred_url": "https://api.github.com/users/azurwastaken/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/azurwastaken/subscriptions",
"organizations_url": "https://api.github.com/users/azurwastaken/orgs",
"repos_url": "https://api.github.com/users/azurwastaken/repos",
"events_url": "https://api.github.com/users/azurwastaken/events{/privacy}",
"received_events_url": "https://api.github.com/users/azurwastaken/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-02-02T10:42:15
| 2024-03-11T23:45:53
| 2024-03-11T23:45:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello, I tried to install Ollama on my MacBook today and gave it a try, but the model takes 10+ minutes just to answer a "Hello".
Did I miss something in the config?
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2324/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6143
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6143/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6143/comments
|
https://api.github.com/repos/ollama/ollama/issues/6143/events
|
https://github.com/ollama/ollama/issues/6143
| 2,445,334,161
|
I_kwDOJ0Z1Ps6RwNaR
| 6,143
|
Support for AWS Neuron Inferentia GPU
|
{
"login": "mavwolverine",
"id": 316111,
"node_id": "MDQ6VXNlcjMxNjExMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/316111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mavwolverine",
"html_url": "https://github.com/mavwolverine",
"followers_url": "https://api.github.com/users/mavwolverine/followers",
"following_url": "https://api.github.com/users/mavwolverine/following{/other_user}",
"gists_url": "https://api.github.com/users/mavwolverine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mavwolverine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mavwolverine/subscriptions",
"organizations_url": "https://api.github.com/users/mavwolverine/orgs",
"repos_url": "https://api.github.com/users/mavwolverine/repos",
"events_url": "https://api.github.com/users/mavwolverine/events{/privacy}",
"received_events_url": "https://api.github.com/users/mavwolverine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 5
| 2024-08-02T16:17:47
| 2024-08-08T20:25:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This would add the ability to run ollama on inf2 instance types in AWS.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6143/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6143/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2571
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2571/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2571/comments
|
https://api.github.com/repos/ollama/ollama/issues/2571/events
|
https://github.com/ollama/ollama/issues/2571
| 2,140,803,288
|
I_kwDOJ0Z1Ps5_mhDY
| 2,571
|
Storing models on external drive
|
{
"login": "shersoni610",
"id": 57876250,
"node_id": "MDQ6VXNlcjU3ODc2MjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/57876250?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shersoni610",
"html_url": "https://github.com/shersoni610",
"followers_url": "https://api.github.com/users/shersoni610/followers",
"following_url": "https://api.github.com/users/shersoni610/following{/other_user}",
"gists_url": "https://api.github.com/users/shersoni610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shersoni610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shersoni610/subscriptions",
"organizations_url": "https://api.github.com/users/shersoni610/orgs",
"repos_url": "https://api.github.com/users/shersoni610/repos",
"events_url": "https://api.github.com/users/shersoni610/events{/privacy}",
"received_events_url": "https://api.github.com/users/shersoni610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-02-18T07:59:48
| 2024-06-23T13:12:01
| 2024-04-12T22:23:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I have limited space on the OS hard drive, so I want to store all the models
in /usr/share/ollama/.ollama/models/blobs on an external drive. After downloading
the models, I made a symlink:
sudo ln -s ~/Disk2/Models/Ollama/blob /usr/share/ollama/.ollama/models/blobs
but when I run ollama, I get the message:
Error: mkdir /usr/share/ollama/.ollama/models/blobs: file exists
I do not understand why ollama is trying to perform `mkdir`. Can someone help?
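(One documented alternative to symlinking into the blobs directory is pointing Ollama at the external drive with the `OLLAMA_MODELS` environment variable. A sketch of a systemd drop-in, assuming a hypothetical mount point `/mnt/disk2/ollama-models` that the `ollama` user can read and write; apply it with `sudo systemctl edit ollama.service`, then restart the service:)
```
[Service]
Environment="OLLAMA_MODELS=/mnt/disk2/ollama-models"
```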
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2571/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/235
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/235/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/235/comments
|
https://api.github.com/repos/ollama/ollama/issues/235/events
|
https://github.com/ollama/ollama/pull/235
| 1,826,960,797
|
PR_kwDOJ0Z1Ps5Wrv8Z
| 235
|
remove io/ioutil import
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-28T19:07:44
| 2023-07-28T19:19:07
| 2023-07-28T19:19:06
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/235",
"html_url": "https://github.com/ollama/ollama/pull/235",
"diff_url": "https://github.com/ollama/ollama/pull/235.diff",
"patch_url": "https://github.com/ollama/ollama/pull/235.patch",
"merged_at": "2023-07-28T19:19:06"
}
|
ioutil is deprecated
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/235/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4688
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4688/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4688/comments
|
https://api.github.com/repos/ollama/ollama/issues/4688/events
|
https://github.com/ollama/ollama/issues/4688
| 2,321,906,519
|
I_kwDOJ0Z1Ps6KZXtX
| 4,688
|
Can't down the Ollama .exe file for Windows
|
{
"login": "Tarhex",
"id": 56320309,
"node_id": "MDQ6VXNlcjU2MzIwMzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/56320309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tarhex",
"html_url": "https://github.com/Tarhex",
"followers_url": "https://api.github.com/users/Tarhex/followers",
"following_url": "https://api.github.com/users/Tarhex/following{/other_user}",
"gists_url": "https://api.github.com/users/Tarhex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tarhex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tarhex/subscriptions",
"organizations_url": "https://api.github.com/users/Tarhex/orgs",
"repos_url": "https://api.github.com/users/Tarhex/repos",
"events_url": "https://api.github.com/users/Tarhex/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tarhex/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-28T20:55:02
| 2024-05-28T21:47:09
| 2024-05-28T21:47:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I can't download the .exe file for windows. I have tried all I could but no success.
The link: https://ollama.com/download/OllamaSetup.exe doesn't work.
### OS
Windows
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "Tarhex",
"id": 56320309,
"node_id": "MDQ6VXNlcjU2MzIwMzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/56320309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tarhex",
"html_url": "https://github.com/Tarhex",
"followers_url": "https://api.github.com/users/Tarhex/followers",
"following_url": "https://api.github.com/users/Tarhex/following{/other_user}",
"gists_url": "https://api.github.com/users/Tarhex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tarhex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tarhex/subscriptions",
"organizations_url": "https://api.github.com/users/Tarhex/orgs",
"repos_url": "https://api.github.com/users/Tarhex/repos",
"events_url": "https://api.github.com/users/Tarhex/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tarhex/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4688/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4087
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4087/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4087/comments
|
https://api.github.com/repos/ollama/ollama/issues/4087/events
|
https://github.com/ollama/ollama/pull/4087
| 2,274,042,292
|
PR_kwDOJ0Z1Ps5uR6MZ
| 4,087
|
types/model: fix name for hostport
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-01T19:15:25
| 2024-05-01T19:42:08
| 2024-05-01T19:42:07
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4087",
"html_url": "https://github.com/ollama/ollama/pull/4087",
"diff_url": "https://github.com/ollama/ollama/pull/4087.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4087.patch",
"merged_at": "2024-05-01T19:42:07"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4087/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/915
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/915/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/915/comments
|
https://api.github.com/repos/ollama/ollama/issues/915/events
|
https://github.com/ollama/ollama/issues/915
| 1,964,007,421
|
I_kwDOJ0Z1Ps51EF_9
| 915
|
Cannot download models behind a proxy
|
{
"login": "beettlle",
"id": 428052,
"node_id": "MDQ6VXNlcjQyODA1Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/428052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beettlle",
"html_url": "https://github.com/beettlle",
"followers_url": "https://api.github.com/users/beettlle/followers",
"following_url": "https://api.github.com/users/beettlle/following{/other_user}",
"gists_url": "https://api.github.com/users/beettlle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beettlle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beettlle/subscriptions",
"organizations_url": "https://api.github.com/users/beettlle/orgs",
"repos_url": "https://api.github.com/users/beettlle/repos",
"events_url": "https://api.github.com/users/beettlle/events{/privacy}",
"received_events_url": "https://api.github.com/users/beettlle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 14
| 2023-10-26T17:14:02
| 2024-04-03T06:17:10
| 2023-11-17T00:00:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Seems like #769 doesn't catch all the corner cases when users are behind a proxy. Both @reactivetype and I can reproduce in `0.1.3` and `0.1.5`.
```
$ ollama -v
ollama version 0.1.5
$ ollama pull llama2
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": dial tcp: lookup registry.ollama.ai on xxx.xxx.xxx.xxx:53: read udp xxx.xxx.xxx.xxx:49613->xxxx.xxx.xxx.xxx:53: i/o timeout
$ curl https://registry.ollama.ai/v2/library/llama2/manifests/latest
{"errors":[{"code":"MANIFEST_INVALID","message":"manifest invalid","detail":{}}]}
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/915/timeline
| null |
completed
| false
|