| Column | Type | Details |
|---|---|---|
| url | string | length 51–54 |
| repository_url | string | 1 class (constant value) |
| labels_url | string | length 65–68 |
| comments_url | string | length 60–63 |
| events_url | string | length 58–61 |
| html_url | string | length 39–44 |
| id | int64 | 1.78B–2.82B |
| node_id | string | length 18–19 |
| number | int64 | 1–8.69k |
| title | string | length 1–382 |
| user | dict | |
| labels | list | length 0–5 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | length 0–2 |
| milestone | null | |
| comments | int64 | 0–323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 classes |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | length 2–118k, nullable (⌀) |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | length 60–63 |
| performed_via_github_app | null | |
| state_reason | string | 4 classes |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/ollama/ollama/issues/6756
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6756/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6756/comments
|
https://api.github.com/repos/ollama/ollama/issues/6756/events
|
https://github.com/ollama/ollama/issues/6756
| 2,520,219,958
|
I_kwDOJ0Z1Ps6WN4E2
| 6,756
|
Yet another "segmentation fault" issue with AMD GPU
|
{
"login": "remon-nashid",
"id": 1994818,
"node_id": "MDQ6VXNlcjE5OTQ4MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1994818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remon-nashid",
"html_url": "https://github.com/remon-nashid",
"followers_url": "https://api.github.com/users/remon-nashid/followers",
"following_url": "https://api.github.com/users/remon-nashid/following{/other_user}",
"gists_url": "https://api.github.com/users/remon-nashid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remon-nashid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remon-nashid/subscriptions",
"organizations_url": "https://api.github.com/users/remon-nashid/orgs",
"repos_url": "https://api.github.com/users/remon-nashid/repos",
"events_url": "https://api.github.com/users/remon-nashid/events{/privacy}",
"received_events_url": "https://api.github.com/users/remon-nashid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 57
| 2024-09-11T16:39:22
| 2024-10-20T22:29:08
| 2024-10-12T16:56:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
`Error: llama runner process has terminated: signal: segmentation fault (core dumped)`. It occurs while loading larger models that are still within VRAM capacity. Here I'm trying to load `command-r:35b-08-2024-q4_K_M` (**19 GB**) on an RX 7900 XTX with **24 GB** of VRAM. Smaller models load fine.
**Edit**: even with `gemma2:27b-instruct-q4_K_M` (`16 GB`) I still get the error. It seems the largest model that loads successfully is around `13 GB`, e.g. `codestral:22b-v0.1-q4_K_M`.
From the logs, Ollama clearly reports that the available VRAM is `23.5 GiB`:
`Sep 11 09:41:57 computer ollama[71334]: time=2024-09-11T09:41:57.987-06:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=rocm variant="" compute=gfx1100 driver=0.0 name=1002:744c total="24.0 GiB" available="23.5 GiB"`
Error:
`Sep 11 09:43:18 computer ollama[71334]: time=2024-09-11T09:43:18.408-06:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
Sep 11 09:43:26 computer ollama[71334]: time=2024-09-11T09:43:26.324-06:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: signal: seg`
I could have sworn that models of that size used to load just fine on an older Ollama version, but unfortunately I'm not sure which was the latest version that worked.
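If it helps triage, a minimal Go sketch that reproduces the failure through the REST API (assuming the default `localhost:11434` endpoint and the model named above):
```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ask the server to load the model and generate a short response.
	body := []byte(`{"model": "command-r:35b-08-2024-q4_K_M", "prompt": "hi", "stream": false}`)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	// On the affected setup this prints the "llama runner process has
	// terminated: signal: segmentation fault" error instead of a completion.
	fmt.Println(resp.Status, string(out))
}
```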
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.10
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6756/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/ollama/ollama/issues/6756/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/696
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/696/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/696/comments
|
https://api.github.com/repos/ollama/ollama/issues/696/events
|
https://github.com/ollama/ollama/issues/696
| 1,925,653,139
|
I_kwDOJ0Z1Ps5yxyKT
| 696
|
Offline Installation and Model Download
|
{
"login": "OguzcanOzdemir",
"id": 24637523,
"node_id": "MDQ6VXNlcjI0NjM3NTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/24637523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OguzcanOzdemir",
"html_url": "https://github.com/OguzcanOzdemir",
"followers_url": "https://api.github.com/users/OguzcanOzdemir/followers",
"following_url": "https://api.github.com/users/OguzcanOzdemir/following{/other_user}",
"gists_url": "https://api.github.com/users/OguzcanOzdemir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OguzcanOzdemir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OguzcanOzdemir/subscriptions",
"organizations_url": "https://api.github.com/users/OguzcanOzdemir/orgs",
"repos_url": "https://api.github.com/users/OguzcanOzdemir/repos",
"events_url": "https://api.github.com/users/OguzcanOzdemir/events{/privacy}",
"received_events_url": "https://api.github.com/users/OguzcanOzdemir/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 12
| 2023-10-04T08:15:26
| 2024-12-05T12:13:27
| 2023-10-04T17:41:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I'm trying to install Ollama on an offline Ubuntu computer. Due to the lack of an internet connection, I need guidance on how to perform this installation offline. Additionally, I would like to understand how to download and use models on this offline Ubuntu machine.
Here are the specific questions and challenges I'm facing:

**Offline installation:**
- Is it possible to download all the necessary installation files and dependencies on an online machine and then transfer them to the offline Ubuntu computer?
- Can you provide step-by-step instructions for manually installing the software offline?
- Are there any specific dependencies or libraries that I need to be aware of for the installation?

**Offline model usage:**
- How can I download pre-trained models or data sets for the software offline?
- Once the models are downloaded, how can I integrate them with the software and use them?
I would greatly appreciate any guidance or assistance you can provide to help me with this offline installation and model usage.
Thank you in advance for your help!
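In case a concrete starting point helps: models pulled on an online machine live in Ollama's model store (by default `~/.ollama/models` on Linux, overridable via `OLLAMA_MODELS`), and that directory can be copied wholesale to the offline box. A minimal Go sketch of the copy step, assuming Go 1.23+ for `os.CopyFS` and a hypothetical USB mount point:
```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	src := filepath.Join(home, ".ollama", "models") // default model store on Linux
	dst := "/media/usb/ollama-models"               // hypothetical USB mount

	// Copy the whole manifest/blob tree; on the offline machine, copy it
	// back into ~/.ollama/models (or point OLLAMA_MODELS at it).
	if err := os.CopyFS(dst, os.DirFS(src)); err != nil {
		log.Fatal(err)
	}
}
```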
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/696/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/696/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2633
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2633/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2633/comments
|
https://api.github.com/repos/ollama/ollama/issues/2633/events
|
https://github.com/ollama/ollama/issues/2633
| 2,146,630,251
|
I_kwDOJ0Z1Ps5_8vpr
| 2,633
|
How to update all models
|
{
"login": "meminens",
"id": 42714627,
"node_id": "MDQ6VXNlcjQyNzE0NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/42714627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meminens",
"html_url": "https://github.com/meminens",
"followers_url": "https://api.github.com/users/meminens/followers",
"following_url": "https://api.github.com/users/meminens/following{/other_user}",
"gists_url": "https://api.github.com/users/meminens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meminens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meminens/subscriptions",
"organizations_url": "https://api.github.com/users/meminens/orgs",
"repos_url": "https://api.github.com/users/meminens/repos",
"events_url": "https://api.github.com/users/meminens/events{/privacy}",
"received_events_url": "https://api.github.com/users/meminens/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-02-21T12:25:55
| 2024-09-22T15:07:28
| 2024-03-11T21:27:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Do I have to run `ollama pull <model name>` for each downloaded model? Is there a more automatic way to update all models at once?
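Not that I know of, but the REST API makes the loop easy to script. A Go sketch using the documented `/api/tags` (list) and `/api/pull` endpoints, assuming the default `localhost:11434` server (older server versions accept `"name"` instead of `"model"` in the pull body):
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// tags mirrors the relevant part of the /api/tags response.
type tags struct {
	Models []struct {
		Name string `json:"name"`
	} `json:"models"`
}

func main() {
	resp, err := http.Get("http://localhost:11434/api/tags")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var t tags
	if err := json.NewDecoder(resp.Body).Decode(&t); err != nil {
		panic(err)
	}

	for _, m := range t.Models {
		fmt.Println("pulling", m.Name)
		body, _ := json.Marshal(map[string]any{"model": m.Name, "stream": false})
		pr, err := http.Post("http://localhost:11434/api/pull", "application/json", bytes.NewReader(body))
		if err != nil {
			panic(err)
		}
		io.Copy(io.Discard, pr.Body) // drain until the pull finishes
		pr.Body.Close()
	}
}
```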
|
{
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyeva/followers",
"following_url": "https://api.github.com/users/hoyyeva/following{/other_user}",
"gists_url": "https://api.github.com/users/hoyyeva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoyyeva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoyyeva/subscriptions",
"organizations_url": "https://api.github.com/users/hoyyeva/orgs",
"repos_url": "https://api.github.com/users/hoyyeva/repos",
"events_url": "https://api.github.com/users/hoyyeva/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoyyeva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2633/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8001
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8001/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8001/comments
|
https://api.github.com/repos/ollama/ollama/issues/8001/events
|
https://github.com/ollama/ollama/issues/8001
| 2,725,701,792
|
I_kwDOJ0Z1Ps6iduig
| 8,001
|
Add option to disable auto-completion
|
{
"login": "codeMonkey-shin",
"id": 80636401,
"node_id": "MDQ6VXNlcjgwNjM2NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/80636401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codeMonkey-shin",
"html_url": "https://github.com/codeMonkey-shin",
"followers_url": "https://api.github.com/users/codeMonkey-shin/followers",
"following_url": "https://api.github.com/users/codeMonkey-shin/following{/other_user}",
"gists_url": "https://api.github.com/users/codeMonkey-shin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codeMonkey-shin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codeMonkey-shin/subscriptions",
"organizations_url": "https://api.github.com/users/codeMonkey-shin/orgs",
"repos_url": "https://api.github.com/users/codeMonkey-shin/repos",
"events_url": "https://api.github.com/users/codeMonkey-shin/events{/privacy}",
"received_events_url": "https://api.github.com/users/codeMonkey-shin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-12-09T01:08:58
| 2024-12-09T01:09:41
| 2024-12-09T01:09:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be very helpful to have an option to disable the auto-completion feature. In some cases, it can be more of a hindrance than a help. Please consider adding a setting to turn it off.
Thanks!
|
{
"login": "codeMonkey-shin",
"id": 80636401,
"node_id": "MDQ6VXNlcjgwNjM2NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/80636401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codeMonkey-shin",
"html_url": "https://github.com/codeMonkey-shin",
"followers_url": "https://api.github.com/users/codeMonkey-shin/followers",
"following_url": "https://api.github.com/users/codeMonkey-shin/following{/other_user}",
"gists_url": "https://api.github.com/users/codeMonkey-shin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codeMonkey-shin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codeMonkey-shin/subscriptions",
"organizations_url": "https://api.github.com/users/codeMonkey-shin/orgs",
"repos_url": "https://api.github.com/users/codeMonkey-shin/repos",
"events_url": "https://api.github.com/users/codeMonkey-shin/events{/privacy}",
"received_events_url": "https://api.github.com/users/codeMonkey-shin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8001/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5409
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5409/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5409/comments
|
https://api.github.com/repos/ollama/ollama/issues/5409/events
|
https://github.com/ollama/ollama/pull/5409
| 2,384,281,872
|
PR_kwDOJ0Z1Ps50GCpV
| 5,409
|
convert: only extract large files
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-07-01T16:44:07
| 2024-07-31T21:32:14
| 2024-07-31T21:32:11
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5409",
"html_url": "https://github.com/ollama/ollama/pull/5409",
"diff_url": "https://github.com/ollama/ollama/pull/5409.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5409.patch",
"merged_at": null
}
|
Many of the files needed during conversion don't need to be extracted and can be read directly from the zip. The only exception is the model weights: while it's possible to read these directly from the zip, the impact on performance is unacceptable (2m30s vs. 30s for Gemma 2B), so these files are extracted when needed.
Anything larger than the limit (default 32 MiB) will be extracted to disk.
The performance impact of this change (Mistral 7B ~= 53.828s) is negligible compared to the baseline (~= 54.23s) (n=1).
A follow-up would be to remove extracted files once they're no longer used, so that only one large file exists on disk at any given time.
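Roughly the idea, as a self-contained Go sketch (not the PR's actual code; `model.zip` and the temp-file naming are illustrative):
```go
package main

import (
	"archive/zip"
	"fmt"
	"io"
	"log"
	"os"
)

const limit = 32 << 20 // 32 MiB, mirroring the default mentioned above

func main() {
	r, err := zip.OpenReader("model.zip") // hypothetical input archive
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	for _, f := range r.File {
		rc, err := f.Open()
		if err != nil {
			log.Fatal(err)
		}
		if f.UncompressedSize64 > limit {
			// Large file (e.g. model weights): extract to disk, read later.
			out, err := os.CreateTemp("", "convert-*")
			if err != nil {
				log.Fatal(err)
			}
			if _, err := io.Copy(out, rc); err != nil {
				log.Fatal(err)
			}
			fmt.Println(f.Name, "->", out.Name())
			out.Close()
		} else {
			// Small file (config, tokenizer, ...): read directly from the zip.
			data, err := io.ReadAll(rc)
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println(f.Name, len(data), "bytes in memory")
		}
		rc.Close()
	}
}
```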
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5409/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4192
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4192/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4192/comments
|
https://api.github.com/repos/ollama/ollama/issues/4192/events
|
https://github.com/ollama/ollama/pull/4192
| 2,279,925,563
|
PR_kwDOJ0Z1Ps5ulis0
| 4,192
|
feat: support registry basic auth
|
{
"login": "qcu266",
"id": 11624864,
"node_id": "MDQ6VXNlcjExNjI0ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/11624864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qcu266",
"html_url": "https://github.com/qcu266",
"followers_url": "https://api.github.com/users/qcu266/followers",
"following_url": "https://api.github.com/users/qcu266/following{/other_user}",
"gists_url": "https://api.github.com/users/qcu266/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qcu266/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qcu266/subscriptions",
"organizations_url": "https://api.github.com/users/qcu266/orgs",
"repos_url": "https://api.github.com/users/qcu266/repos",
"events_url": "https://api.github.com/users/qcu266/events{/privacy}",
"received_events_url": "https://api.github.com/users/qcu266/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-05-06T03:02:16
| 2025-01-19T18:39:49
| 2024-12-27T06:06:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4192",
"html_url": "https://github.com/ollama/ollama/pull/4192",
"diff_url": "https://github.com/ollama/ollama/pull/4192.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4192.patch",
"merged_at": null
}
|
Support pulling/pushing models from/to a private OCI registry.
Relates: https://github.com/ollama/ollama/issues/2745
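For context, basic auth against an OCI registry is just the standard `Authorization` header on the `/v2/` endpoints. A minimal Go sketch of an authenticated manifest fetch (registry host, repository, and credentials are all hypothetical):
```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Hypothetical private registry, repository, and credentials.
	url := "https://registry.example.com/v2/myorg/mymodel/manifests/latest"
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		log.Fatal(err)
	}
	req.SetBasicAuth("user", "password")
	req.Header.Set("Accept", "application/vnd.oci.image.manifest.v1+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```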
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4192/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4192/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6043
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6043/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6043/comments
|
https://api.github.com/repos/ollama/ollama/issues/6043/events
|
https://github.com/ollama/ollama/issues/6043
| 2,435,055,382
|
I_kwDOJ0Z1Ps6RI_8W
| 6,043
|
Removing models from Ollama reverts the "last updated" tag
|
{
"login": "DuckyBlender",
"id": 42645784,
"node_id": "MDQ6VXNlcjQyNjQ1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/42645784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DuckyBlender",
"html_url": "https://github.com/DuckyBlender",
"followers_url": "https://api.github.com/users/DuckyBlender/followers",
"following_url": "https://api.github.com/users/DuckyBlender/following{/other_user}",
"gists_url": "https://api.github.com/users/DuckyBlender/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DuckyBlender/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DuckyBlender/subscriptions",
"organizations_url": "https://api.github.com/users/DuckyBlender/orgs",
"repos_url": "https://api.github.com/users/DuckyBlender/repos",
"events_url": "https://api.github.com/users/DuckyBlender/events{/privacy}",
"received_events_url": "https://api.github.com/users/DuckyBlender/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 1
| 2024-07-29T10:34:18
| 2024-07-31T17:18:06
| 2024-07-31T17:18:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?

I uploaded the model 8 hours ago and deleted it just now, yet it still shows "last updated 12 days ago".
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyeva/followers",
"following_url": "https://api.github.com/users/hoyyeva/following{/other_user}",
"gists_url": "https://api.github.com/users/hoyyeva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoyyeva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoyyeva/subscriptions",
"organizations_url": "https://api.github.com/users/hoyyeva/orgs",
"repos_url": "https://api.github.com/users/hoyyeva/repos",
"events_url": "https://api.github.com/users/hoyyeva/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoyyeva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6043/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6354
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6354/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6354/comments
|
https://api.github.com/repos/ollama/ollama/issues/6354/events
|
https://github.com/ollama/ollama/issues/6354
| 2,465,256,062
|
I_kwDOJ0Z1Ps6S8NJ-
| 6,354
|
Embedding interface routing
|
{
"login": "xuzeyu91",
"id": 26290929,
"node_id": "MDQ6VXNlcjI2MjkwOTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/26290929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuzeyu91",
"html_url": "https://github.com/xuzeyu91",
"followers_url": "https://api.github.com/users/xuzeyu91/followers",
"following_url": "https://api.github.com/users/xuzeyu91/following{/other_user}",
"gists_url": "https://api.github.com/users/xuzeyu91/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuzeyu91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuzeyu91/subscriptions",
"organizations_url": "https://api.github.com/users/xuzeyu91/orgs",
"repos_url": "https://api.github.com/users/xuzeyu91/repos",
"events_url": "https://api.github.com/users/xuzeyu91/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuzeyu91/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-08-14T08:46:52
| 2024-08-18T13:05:12
| 2024-08-14T16:54:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The chat interface's routing is currently compatible with OpenAI.
Would you consider making the embeddings interface routing compatible with OpenAI's format as well, so that it is more user-friendly when called by third-party applications? Otherwise we need to handle
```
http://host/v1/embeddings
```
and
```
http://host/embeddings
```
separately. If it were compatible, that would be great.
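For reference, a quick Go sketch of what compatibility looks like in practice, hitting both routes (the `/v1/embeddings` route follows the OpenAI request shape with `input`, the native `/api/embeddings` route uses `prompt`; the endpoint and model name are just examples):
```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// post sends a JSON body and reports the status and response size.
func post(url, body string) {
	resp, err := http.Post(url, "application/json", bytes.NewReader([]byte(body)))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	b, _ := io.ReadAll(resp.Body)
	fmt.Println(url, "->", resp.Status, len(b), "bytes")
}

func main() {
	// OpenAI-compatible shape: {"model": ..., "input": ...}
	post("http://localhost:11434/v1/embeddings", `{"model":"all-minilm","input":"hello"}`)
	// Native Ollama shape: {"model": ..., "prompt": ...}
	post("http://localhost:11434/api/embeddings", `{"model":"all-minilm","prompt":"hello"}`)
}
```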
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6354/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1690
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1690/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1690/comments
|
https://api.github.com/repos/ollama/ollama/issues/1690/events
|
https://github.com/ollama/ollama/pull/1690
| 2,054,820,785
|
PR_kwDOJ0Z1Ps5isvHt
| 1,690
|
Added LangChain4j links
|
{
"login": "langchain4j",
"id": 132277850,
"node_id": "O_kgDOB-JmWg",
"avatar_url": "https://avatars.githubusercontent.com/u/132277850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/langchain4j",
"html_url": "https://github.com/langchain4j",
"followers_url": "https://api.github.com/users/langchain4j/followers",
"following_url": "https://api.github.com/users/langchain4j/following{/other_user}",
"gists_url": "https://api.github.com/users/langchain4j/gists{/gist_id}",
"starred_url": "https://api.github.com/users/langchain4j/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/langchain4j/subscriptions",
"organizations_url": "https://api.github.com/users/langchain4j/orgs",
"repos_url": "https://api.github.com/users/langchain4j/repos",
"events_url": "https://api.github.com/users/langchain4j/events{/privacy}",
"received_events_url": "https://api.github.com/users/langchain4j/received_events",
"type": "Organization",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-23T15:30:30
| 2024-02-22T19:09:09
| 2024-02-22T19:09:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1690",
"html_url": "https://github.com/ollama/ollama/pull/1690",
"diff_url": "https://github.com/ollama/ollama/pull/1690.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1690.patch",
"merged_at": "2024-02-22T19:09:08"
}
|
Hi, I would appreciate it a lot if you could add LangChain4j links to your README; we have a nice integration with Ollama!
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1690/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2451
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2451/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2451/comments
|
https://api.github.com/repos/ollama/ollama/issues/2451/events
|
https://github.com/ollama/ollama/issues/2451
| 2,129,162,086
|
I_kwDOJ0Z1Ps5-6G9m
| 2,451
|
[FEATURE] Add support for Intel Xeon (Sapphire and Emerald Rapids) accelerators and AI features such as AMX and AVX 512.
|
{
"login": "scouzi1966",
"id": 58265937,
"node_id": "MDQ6VXNlcjU4MjY1OTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/58265937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scouzi1966",
"html_url": "https://github.com/scouzi1966",
"followers_url": "https://api.github.com/users/scouzi1966/followers",
"following_url": "https://api.github.com/users/scouzi1966/following{/other_user}",
"gists_url": "https://api.github.com/users/scouzi1966/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scouzi1966/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scouzi1966/subscriptions",
"organizations_url": "https://api.github.com/users/scouzi1966/orgs",
"repos_url": "https://api.github.com/users/scouzi1966/repos",
"events_url": "https://api.github.com/users/scouzi1966/events{/privacy}",
"received_events_url": "https://api.github.com/users/scouzi1966/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-02-11T20:46:18
| 2024-05-11T00:37:16
| 2024-05-11T00:37:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Note that Intel is trying to demystify AVX-512 with the AVX10 standard, but they are essentially the same.
AVX-512: https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html
AMX: https://www.intel.com/content/www/us/en/products/docs/accelerator-engines/advanced-matrix-extensions/overview.html
AVX-512 is also being fully implemented by AMD.
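As an aside, a quick Go sketch for checking what a given Xeon actually reports, using `golang.org/x/sys/cpu` (AVX-512 feature flags are exposed there; AMX flags may not be, depending on the package version, in which case raw CPUID is needed):
```go
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	// Print a few AVX-512 feature flags detected on this machine.
	fmt.Println("AVX512F:   ", cpu.X86.HasAVX512F)
	fmt.Println("AVX512BW:  ", cpu.X86.HasAVX512BW)
	fmt.Println("AVX512VNNI:", cpu.X86.HasAVX512VNNI)
}
```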
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2451/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2451/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8182
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8182/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8182/comments
|
https://api.github.com/repos/ollama/ollama/issues/8182/events
|
https://github.com/ollama/ollama/issues/8182
| 2,752,630,584
|
I_kwDOJ0Z1Ps6kEc84
| 8,182
|
{"error":"POST predict: Post \"http://127.0.0.1:33603/completion\": EOF"}
|
{
"login": "forReason",
"id": 12736950,
"node_id": "MDQ6VXNlcjEyNzM2OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/12736950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forReason",
"html_url": "https://github.com/forReason",
"followers_url": "https://api.github.com/users/forReason/followers",
"following_url": "https://api.github.com/users/forReason/following{/other_user}",
"gists_url": "https://api.github.com/users/forReason/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forReason/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forReason/subscriptions",
"organizations_url": "https://api.github.com/users/forReason/orgs",
"repos_url": "https://api.github.com/users/forReason/repos",
"events_url": "https://api.github.com/users/forReason/events{/privacy}",
"received_events_url": "https://api.github.com/users/forReason/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-12-20T12:21:10
| 2025-01-13T01:42:50
| 2025-01-13T01:42:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I get the following error:
`{"error":"POST predict: Post \"http://127.0.0.1:33603/completion\": EOF"}`
This seems to happen at longer context lengths.
I think I can circumvent it by setting `options["use_mlock"] = false;`, at the cost of roughly a 4x or greater speed loss.
It seems like the GPU VRAM fills up, even though a lot is already offloaded to RAM...
[ollama_logs.txt](https://github.com/user-attachments/files/18210623/ollama_logs.txt)
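For anyone else hitting this, the workaround above expressed as an API request (`use_mlock` is a standard runner option passed via `options`; endpoint and model name are examples):
```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Disable mlock for this request via the options object.
	body := []byte(`{
		"model": "llama3.1",
		"prompt": "hello",
		"stream": false,
		"options": {"use_mlock": false}
	}`)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```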
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8182/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3867
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3867/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3867/comments
|
https://api.github.com/repos/ollama/ollama/issues/3867/events
|
https://github.com/ollama/ollama/issues/3867
| 2,260,317,111
|
I_kwDOJ0Z1Ps6GubO3
| 3,867
|
Ctrl+D to exit is not stopping service
|
{
"login": "nishithshowri006",
"id": 58651995,
"node_id": "MDQ6VXNlcjU4NjUxOTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/58651995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nishithshowri006",
"html_url": "https://github.com/nishithshowri006",
"followers_url": "https://api.github.com/users/nishithshowri006/followers",
"following_url": "https://api.github.com/users/nishithshowri006/following{/other_user}",
"gists_url": "https://api.github.com/users/nishithshowri006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nishithshowri006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nishithshowri006/subscriptions",
"organizations_url": "https://api.github.com/users/nishithshowri006/orgs",
"repos_url": "https://api.github.com/users/nishithshowri006/repos",
"events_url": "https://api.github.com/users/nishithshowri006/events{/privacy}",
"received_events_url": "https://api.github.com/users/nishithshowri006/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-24T04:56:11
| 2024-05-04T23:53:05
| 2024-05-04T23:52:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have observed that when we press Ctrl+D or exit the chat interface after running a model, it does not stop the Ollama process. This in turn keeps RAM and VRAM blocked for other tasks. I observed this behavior in the WSL and Windows versions of Ollama.
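Note that exiting the CLI is expected to leave the model loaded for a while (the server keeps it warm). If the goal is to free RAM/VRAM immediately, sending a request with an empty prompt and `keep_alive: 0` asks the server to unload the model. A Go sketch (default endpoint; model name is an example):
```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// An empty prompt with keep_alive 0 asks the server to unload the model.
	body := []byte(`{"model": "llama3", "keep_alive": 0}`)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println(resp.Status)
}
```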
### OS
Windows, WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3867/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/781
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/781/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/781/comments
|
https://api.github.com/repos/ollama/ollama/issues/781/events
|
https://github.com/ollama/ollama/pull/781
| 1,942,392,202
|
PR_kwDOJ0Z1Ps5cwxib
| 781
|
improve api error handling
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-13T17:54:56
| 2023-10-13T20:57:11
| 2023-10-13T20:57:10
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/781",
"html_url": "https://github.com/ollama/ollama/pull/781",
"diff_url": "https://github.com/ollama/ollama/pull/781.diff",
"patch_url": "https://github.com/ollama/ollama/pull/781.patch",
"merged_at": "2023-10-13T20:57:10"
}
|
- remove newlines from llama.cpp error messages relayed to the client
- check API option types and return an error on the wrong type (a rough sketch of this check follows below)
- change num layers from 95% VRAM to 92% VRAM
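A rough sketch of what such an option type check can look like (illustrative, not the PR's actual code; JSON numbers decode to `float64` in Go, hence the integer check):
```go
package main

import "fmt"

// checkOption rejects option values whose JSON type doesn't match the option.
func checkOption(name string, value any) error {
	switch name {
	case "temperature", "top_p": // expect numbers
		if _, ok := value.(float64); !ok {
			return fmt.Errorf("option %q must be a number, got %T", name, value)
		}
	case "num_ctx", "num_gpu": // expect integers (JSON numbers arrive as float64)
		f, ok := value.(float64)
		if !ok || f != float64(int64(f)) {
			return fmt.Errorf("option %q must be an integer, got %v", name, value)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkOption("temperature", "hot")) // error: wrong type
	fmt.Println(checkOption("num_ctx", 4096.0))    // <nil>: valid
}
```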
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/781/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/234
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/234/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/234/comments
|
https://api.github.com/repos/ollama/ollama/issues/234/events
|
https://github.com/ollama/ollama/pull/234
| 1,826,927,325
|
PR_kwDOJ0Z1Ps5WroYw
| 234
|
use max scan token size to hold large objects
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-28T18:44:44
| 2023-07-28T19:03:52
| 2023-07-28T19:03:51
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/234",
"html_url": "https://github.com/ollama/ollama/pull/234",
"diff_url": "https://github.com/ollama/ollama/pull/234.diff",
"patch_url": "https://github.com/ollama/ollama/pull/234.patch",
"merged_at": "2023-07-28T19:03:51"
}
|
The internal buffer used by the scanner is too small to hold Meta's license, so allocate the maximum token size set in bufio. It could potentially be set higher, but that's not necessary right now.
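The general Go pattern, for reference (a sketch, not the PR diff; the file name is illustrative):
```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("Modelfile") // hypothetical input file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// Allocate up front and allow tokens up to bufio.MaxScanTokenSize (64 KiB),
	// large enough for long single-line values such as an embedded license.
	scanner.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), bufio.MaxScanTokenSize)

	for scanner.Scan() {
		fmt.Println(len(scanner.Text()))
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}
```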
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/234/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3504
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3504/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3504/comments
|
https://api.github.com/repos/ollama/ollama/issues/3504/events
|
https://github.com/ollama/ollama/issues/3504
| 2,228,152,315
|
I_kwDOJ0Z1Ps6Ezuf7
| 3,504
|
I can't pull any models
|
{
"login": "jsrcode",
"id": 139555610,
"node_id": "U_kgDOCFFzGg",
"avatar_url": "https://avatars.githubusercontent.com/u/139555610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jsrcode",
"html_url": "https://github.com/jsrcode",
"followers_url": "https://api.github.com/users/jsrcode/followers",
"following_url": "https://api.github.com/users/jsrcode/following{/other_user}",
"gists_url": "https://api.github.com/users/jsrcode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jsrcode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jsrcode/subscriptions",
"organizations_url": "https://api.github.com/users/jsrcode/orgs",
"repos_url": "https://api.github.com/users/jsrcode/repos",
"events_url": "https://api.github.com/users/jsrcode/events{/privacy}",
"received_events_url": "https://api.github.com/users/jsrcode/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 44
| 2024-04-05T14:18:57
| 2025-01-29T17:42:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
C:\Users\18164>ollama run qwen:0.5b
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=pa9U-g8eXWKfTiK3NN_FdQ&scope=repository%!A(MISSING)library%!F(MISSING)qwen%!A(MISSING)pull&service=ollama.com&ts=1712324131": net/http: TLS handshake timeout
```
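As an aside on where this message comes from: "net/http: TLS handshake timeout" is Go's `http.Transport` giving up on the TLS handshake (10 seconds on the default transport), so it usually points at a proxy, firewall, or flaky network rather than the model itself. A minimal Go probe of the same check (a sketch; the URL is just the registry host):
```
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// http.DefaultTransport uses a 10 second TLS handshake timeout; the
	// "net/http: TLS handshake timeout" error means that window elapsed.
	client := &http.Client{
		Transport: &http.Transport{
			TLSHandshakeTimeout: 10 * time.Second,
		},
	}
	resp, err := client.Get("https://ollama.com")
	if err != nil {
		fmt.Println("connectivity problem:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("reached ollama.com:", resp.Status)
}
```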
### What did you expect to see?
Pull the model
### Steps to reproduce
Pull the model
### Are there any recent changes that introduced the issue?
No
### OS
Windows
### Architecture
x86
### Platform
Docker
### Ollama version
0.1.30
### GPU
Intel
### GPU info
_No response_
### CPU
Intel
### Other software
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3504/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3504/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7756
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7756/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7756/comments
|
https://api.github.com/repos/ollama/ollama/issues/7756/events
|
https://github.com/ollama/ollama/pull/7756
| 2,674,410,211
|
PR_kwDOJ0Z1Ps6CeV0j
| 7,756
|
Update README.md
|
{
"login": "jonathanhecl",
"id": 1691623,
"node_id": "MDQ6VXNlcjE2OTE2MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1691623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanhecl",
"html_url": "https://github.com/jonathanhecl",
"followers_url": "https://api.github.com/users/jonathanhecl/followers",
"following_url": "https://api.github.com/users/jonathanhecl/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanhecl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanhecl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanhecl/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanhecl/orgs",
"repos_url": "https://api.github.com/users/jonathanhecl/repos",
"events_url": "https://api.github.com/users/jonathanhecl/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanhecl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-20T05:04:44
| 2024-11-20T05:31:43
| 2024-11-20T05:31:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7756",
"html_url": "https://github.com/ollama/ollama/pull/7756",
"diff_url": "https://github.com/ollama/ollama/pull/7756.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7756.patch",
"merged_at": "2024-11-20T05:31:43"
}
|
Gollama Library
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7756/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1744
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1744/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1744/comments
|
https://api.github.com/repos/ollama/ollama/issues/1744/events
|
https://github.com/ollama/ollama/issues/1744
| 2,060,803,482
|
I_kwDOJ0Z1Ps561V2a
| 1,744
|
💡 Idea/Suggestion: Rich API Documentation
|
{
"login": "amithkoujalgi",
"id": 1876165,
"node_id": "MDQ6VXNlcjE4NzYxNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1876165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amithkoujalgi",
"html_url": "https://github.com/amithkoujalgi",
"followers_url": "https://api.github.com/users/amithkoujalgi/followers",
"following_url": "https://api.github.com/users/amithkoujalgi/following{/other_user}",
"gists_url": "https://api.github.com/users/amithkoujalgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amithkoujalgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amithkoujalgi/subscriptions",
"organizations_url": "https://api.github.com/users/amithkoujalgi/orgs",
"repos_url": "https://api.github.com/users/amithkoujalgi/repos",
"events_url": "https://api.github.com/users/amithkoujalgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/amithkoujalgi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 1
| 2023-12-30T17:13:28
| 2024-11-06T19:03:58
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello @jmorganca.
First of all, thank you for your amazing work! 🤩 I have been using Ollama for a while now and I'm really enjoying it.
I was wondering if we could introduce an API documentation website (right from GitHub, using GH Pages). Along with this, we could also set up a GitHub Actions workflow to auto-build and deploy the API documentation when a release is created.
I think it would be greatly helpful for someone to get started with the basics (including setup, how and why Ollama is used, etc.) and then progressively navigate through the documentation to explore and try more advanced stuff via a more user-friendly UI.
I was thinking I could set up a skeleton structure for the documentation using [docusaurus](https://docusaurus.io/). That way, we can have well-structured, comprehensive API documentation such as [this](https://amithkoujalgi.github.io/ollama4j/docs/intro).
Let me know what you think. Thanks!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1744/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1744/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6538
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6538/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6538/comments
|
https://api.github.com/repos/ollama/ollama/issues/6538/events
|
https://github.com/ollama/ollama/pull/6538
| 2,490,608,317
|
PR_kwDOJ0Z1Ps55ox0g
| 6,538
|
throw an error when encountering unsupported tensor sizes
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-28T00:31:38
| 2024-08-28T00:54:06
| 2024-08-28T00:54:04
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6538",
"html_url": "https://github.com/ollama/ollama/pull/6538",
"diff_url": "https://github.com/ollama/ollama/pull/6538.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6538.patch",
"merged_at": "2024-08-28T00:54:04"
}
|
The `bitsandbytes` package creates an 8-bit quantized version of a model which is unsupported by the llama.cpp back end. It does this by creating two tensors for each layer, which look like:
```
model.layers.0.mlp.down_proj.weight dtype=I8 shape=[4096, 14336]
model.layers.0.mlp.down_proj.weight_format dtype=U8 shape=[]
```
This change simply checks whether a tensor has an empty shape and returns an error; right now the server panics instead.
We already support quantizing directly from the Safetensors model, so users should use the `--quantize` flag with `ollama create` instead of relying on `bitsandbytes`.
Fixes #6357
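A rough sketch of the kind of check this describes; the `Tensor` type and field names here are illustrative, not ollama's actual converter code:
```
package main

import "fmt"

// Tensor is an illustrative stand-in for the converter's tensor metadata.
type Tensor struct {
	Name  string
	Shape []uint64
}

func validateTensors(tensors []Tensor) error {
	for _, t := range tensors {
		// bitsandbytes emits companion tensors such as
		// "model.layers.0.mlp.down_proj.weight_format" with an empty
		// shape; llama.cpp can't load these, so fail fast instead of
		// letting the server panic later.
		if len(t.Shape) == 0 {
			return fmt.Errorf("tensor %q has an unsupported empty shape", t.Name)
		}
	}
	return nil
}

func main() {
	tensors := []Tensor{
		{Name: "model.layers.0.mlp.down_proj.weight", Shape: []uint64{4096, 14336}},
		{Name: "model.layers.0.mlp.down_proj.weight_format", Shape: nil},
	}
	if err := validateTensors(tensors); err != nil {
		fmt.Println("error:", err)
	}
}
```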
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6538/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4117
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4117/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4117/comments
|
https://api.github.com/repos/ollama/ollama/issues/4117/events
|
https://github.com/ollama/ollama/issues/4117
| 2,276,837,880
|
I_kwDOJ0Z1Ps6Htcn4
| 4,117
|
0.1.33 on Windows not using GPU
|
{
"login": "Eisaichen",
"id": 12467320,
"node_id": "MDQ6VXNlcjEyNDY3MzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/12467320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eisaichen",
"html_url": "https://github.com/Eisaichen",
"followers_url": "https://api.github.com/users/Eisaichen/followers",
"following_url": "https://api.github.com/users/Eisaichen/following{/other_user}",
"gists_url": "https://api.github.com/users/Eisaichen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Eisaichen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Eisaichen/subscriptions",
"organizations_url": "https://api.github.com/users/Eisaichen/orgs",
"repos_url": "https://api.github.com/users/Eisaichen/repos",
"events_url": "https://api.github.com/users/Eisaichen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Eisaichen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 3
| 2024-05-03T03:42:57
| 2024-05-04T21:23:14
| 2024-05-03T04:15:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After upgrading to [v0.1.33](https://github.com/ollama/ollama/releases/tag/v0.1.33), Ollama is no longer using my GPU; the CPU is used instead.
On the same PC, I tried running 0.1.33 and the older 0.1.32 side by side: 0.1.32 runs on the GPU just fine while 0.1.33 does not.
After investigating the log, it seems 0.1.33 is not determining the CUDA compute capability of the GPU correctly, causing the GPU to be ignored.
`time=2024-05-02T19:41:48.667-07:00 level=INFO source=gpu.go:148 msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"`
OS: Windows11
GPU: RTX 3090
<details><summary>0.1.33 logs</summary>
```
❯ .\ollama.exe serve
time=2024-05-02T20:39:24.533-07:00 level=INFO source=images.go:828 msg="total blobs: 5"
time=2024-05-02T20:39:24.533-07:00 level=INFO source=images.go:835 msg="total unused blobs removed: 0"
time=2024-05-02T20:39:24.534-07:00 level=INFO source=routes.go:1071 msg="Listening on [::]:11434 (version 0.1.33)"
time=2024-05-02T20:39:24.534-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11.3 rocm_v5.7 cpu cpu_avx cpu_avx2]"
time=2024-05-02T20:39:24.534-07:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-05-02T20:39:24.550-07:00 level=INFO source=gpu.go:101 msg="detected GPUs" library="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_65.dll" count=1
time=2024-05-02T20:39:24.550-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:39:24.607-07:00 level=INFO source=gpu.go:148 msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"
time=2024-05-02T20:39:24.614-07:00 level=INFO source=amd_windows.go:39 msg="AMD Driver: 50731541"
time=2024-05-02T20:39:24.616-07:00 level=INFO source=amd_windows.go:68 msg="detected hip devices" count=1
time=2024-05-02T20:39:24.616-07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=0 name="AMD Radeon(TM) Graphics" gfx=gfx1036
time=2024-05-02T20:39:24.616-07:00 level=INFO source=amd_windows.go:99 msg="iGPU detected skipping" id=0
[GIN] 2024/05/02 - 20:39:32 | 200 | 399.5µs | 192.168.10.201 | GET "/api/version"
[GIN] 2024/05/02 - 20:39:32 | 200 | 1.0279ms | 192.168.10.201 | GET "/api/tags"
[GIN] 2024/05/02 - 20:39:33 | 200 | 696.2µs | 192.168.10.201 | GET "/api/tags"
time=2024-05-02T20:39:45.674-07:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-05-02T20:39:45.677-07:00 level=INFO source=gpu.go:101 msg="detected GPUs" library="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_65.dll" count=1
time=2024-05-02T20:39:45.677-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:39:45.677-07:00 level=INFO source=gpu.go:148 msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"
time=2024-05-02T20:39:45.684-07:00 level=INFO source=amd_windows.go:39 msg="AMD Driver: 50731541"
time=2024-05-02T20:39:45.686-07:00 level=INFO source=amd_windows.go:68 msg="detected hip devices" count=1
time=2024-05-02T20:39:45.686-07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=0 name="AMD Radeon(TM) Graphics" gfx=gfx1036
time=2024-05-02T20:39:45.686-07:00 level=INFO source=amd_windows.go:99 msg="iGPU detected skipping" id=0
time=2024-05-02T20:39:46.113-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:39:46.122-07:00 level=INFO source=server.go:289 msg="starting llama server" cmd="C:\\Users\\local\\Desktop\\ollama-windows-amd64\\ollama_runners\\cpu_avx2\\ollama_llama_server.exe --model C:\\Users\\local\\.ollama\\models\\blobs\\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 --ctx-size 2048 --batch-size 512 --embedding --log-disable --parallel 1 --port 61324"
time=2024-05-02T20:39:46.125-07:00 level=INFO source=sched.go:340 msg="loaded runners" count=1
time=2024-05-02T20:39:46.125-07:00 level=INFO source=server.go:432 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2606,"msg":"logging to file is disabled.","tid":"100504","timestamp":1714707586}
{"build":2770,"commit":"952d03d","function":"wmain","level":"INFO","line":2823,"msg":"build info","tid":"100504","timestamp":1714707586}
{"function":"wmain","level":"INFO","line":2830,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | ","tid":"100504","timestamp":1714707586,"total_threads":32}
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\local\.ollama\models\blobs\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name = mistralai
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.15 MiB
llm_load_tensors: CPU buffer size = 3917.87 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.14 MiB
llama_new_context_with_model: CPU compute buffer size = 164.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"100504","timestamp":1714707586}
{"function":"initialize","level":"INFO","line":460,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"100504","timestamp":1714707586}
{"function":"wmain","level":"INFO","line":3067,"msg":"model loaded","tid":"100504","timestamp":1714707586}
{"function":"wmain","hostname":"127.0.0.1","level":"INFO","line":3270,"msg":"HTTP server listening","n_threads_http":"31","port":"61324","tid":"100504","timestamp":1714707586}
{"function":"update_slots","level":"INFO","line":1581,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"100504","timestamp":1714707586}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"100504","timestamp":1714707586}
...
```
</details>
<details><summary>0.1.32 logs</summary>
```
❯ .\ollama.exe serve
time=2024-05-02T20:29:14.601-07:00 level=INFO source=images.go:817 msg="total blobs: 5"
time=2024-05-02T20:29:14.602-07:00 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-05-02T20:29:14.602-07:00 level=INFO source=routes.go:1143 msg="Listening on [::]:11435 (version 0.1.32)"
time=2024-05-02T20:29:14.603-07:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=C:\Users\local\AppData\Local\Temp\ollama603992022\runners
time=2024-05-02T20:29:14.760-07:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [rocm_v5.7 cpu cpu_avx cpu_avx2 cuda_v11.3]"
[GIN] 2024/05/02 - 20:29:21 | 200 | 0s | 192.168.10.201 | GET "/api/version"
[GIN] 2024/05/02 - 20:29:22 | 200 | 1.029ms | 192.168.10.201 | GET "/api/tags"
[GIN] 2024/05/02 - 20:29:24 | 200 | 504µs | 192.168.10.201 | GET "/api/tags"
[GIN] 2024/05/02 - 20:29:25 | 200 | 502.9µs | 192.168.10.201 | GET "/api/tags"
[GIN] 2024/05/02 - 20:29:25 | 200 | 0s | 192.168.10.201 | GET "/api/version"
time=2024-05-02T20:29:40.960-07:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-05-02T20:29:40.960-07:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library cudart64_*.dll"
time=2024-05-02T20:29:40.963-07:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_65.dll]"
time=2024-05-02T20:29:40.988-07:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-05-02T20:29:40.989-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:29:41.088-07:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-05-02T20:29:41.089-07:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-05-02T20:29:41.089-07:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library cudart64_*.dll"
time=2024-05-02T20:29:41.091-07:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_65.dll]"
time=2024-05-02T20:29:41.092-07:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-05-02T20:29:41.092-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:29:41.092-07:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-05-02T20:29:41.092-07:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33 required="4724.5 MiB" used="4724.5 MiB" available="23306.0 MiB" kv="256.0 MiB" fulloffload="164.0 MiB" partialoffload="181.0 MiB"
time=2024-05-02T20:29:41.092-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:29:41.102-07:00 level=INFO source=server.go:264 msg="starting llama server" cmd="C:\\Users\\local\\AppData\\Local\\Temp\\ollama603992022\\runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\local\\.ollama\\models\\blobs\\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --port 60854"
time=2024-05-02T20:29:41.123-07:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"80428","timestamp":1714706981}
{"build":2679,"commit":"7593639","function":"wmain","level":"INFO","line":2820,"msg":"build info","tid":"80428","timestamp":1714706981}
{"function":"wmain","level":"INFO","line":2827,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"80428","timestamp":1714706981,"total_threads":32}
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\local\.ollama\models\blobs\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name = mistralai
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 70.31 MiB
llm_load_tensors: CUDA0 buffer size = 3847.55 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.14 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 164.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"80428","timestamp":1714706983}
{"function":"initialize","level":"INFO","line":460,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"80428","timestamp":1714706983}
{"function":"wmain","level":"INFO","line":3064,"msg":"model loaded","tid":"80428","timestamp":1714706983}
{"function":"wmain","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"31","port":"60854","tid":"80428","timestamp":1714706983}
{"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"80428","timestamp":1714706983}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"80428","timestamp":1714706983}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"80428","timestamp":1714706983}
...
```
</details>
<img width="1717" alt="image" src="https://github.com/ollama/ollama/assets/12467320/3f5d62fc-3ba8-4917-b190-f3f5b5955922">
Left 0.1.33, Right 0.1.32
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.33
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4117/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/233
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/233/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/233/comments
|
https://api.github.com/repos/ollama/ollama/issues/233/events
|
https://github.com/ollama/ollama/issues/233
| 1,825,401,154
|
I_kwDOJ0Z1Ps5szWlC
| 233
|
Descriptions for 3rd Party Imports
|
{
"login": "eagleEggs",
"id": 29800532,
"node_id": "MDQ6VXNlcjI5ODAwNTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/29800532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eagleEggs",
"html_url": "https://github.com/eagleEggs",
"followers_url": "https://api.github.com/users/eagleEggs/followers",
"following_url": "https://api.github.com/users/eagleEggs/following{/other_user}",
"gists_url": "https://api.github.com/users/eagleEggs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eagleEggs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eagleEggs/subscriptions",
"organizations_url": "https://api.github.com/users/eagleEggs/orgs",
"repos_url": "https://api.github.com/users/eagleEggs/repos",
"events_url": "https://api.github.com/users/eagleEggs/events{/privacy}",
"received_events_url": "https://api.github.com/users/eagleEggs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-07-27T23:56:15
| 2023-08-30T21:40:30
| 2023-08-30T21:40:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Just wanted to share that it would be nice to have a list of the imported third-party libraries in the docs, for transparency; really just for the smaller, obscure ones. One of them sent me down a rabbit hole looking over its code, as it was a small repo with not much activity. Just a suggestion.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/233/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4446
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4446/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4446/comments
|
https://api.github.com/repos/ollama/ollama/issues/4446/events
|
https://github.com/ollama/ollama/issues/4446
| 2,297,222,153
|
I_kwDOJ0Z1Ps6I7NQJ
| 4,446
|
JSON Mode + Streaming + OpenAI API + Llama3 = never sends STOP, and a lot of whitespace after the JSON
|
{
"login": "odrobnik",
"id": 333270,
"node_id": "MDQ6VXNlcjMzMzI3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/333270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odrobnik",
"html_url": "https://github.com/odrobnik",
"followers_url": "https://api.github.com/users/odrobnik/followers",
"following_url": "https://api.github.com/users/odrobnik/following{/other_user}",
"gists_url": "https://api.github.com/users/odrobnik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odrobnik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odrobnik/subscriptions",
"organizations_url": "https://api.github.com/users/odrobnik/orgs",
"repos_url": "https://api.github.com/users/odrobnik/repos",
"events_url": "https://api.github.com/users/odrobnik/events{/privacy}",
"received_events_url": "https://api.github.com/users/odrobnik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-05-15T08:34:39
| 2024-12-05T00:50:01
| 2024-12-05T00:50:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Without JSON mode, the last few lines of the stream of chunk objects are:
```
...
data: {"id":"chatcmpl-273","object":"chat.completion.chunk","created":1715761661,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" adventures"},"finish_reason":null}]}
data: {"id":"chatcmpl-273","object":"chat.completion.chunk","created":1715761661,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":".\"\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-273","object":"chat.completion.chunk","created":1715761661,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"}\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-273","object":"chat.completion.chunk","created":1715761661,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"```"},"finish_reason":null}]}
data: {"id":"chatcmpl-273","object":"chat.completion.chunk","created":1715761661,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":"stop"}]}
data: [DONE]
```
If I enable JSON mode, the last few lines of the stream look like this:
```
...
{"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761446,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" adventures"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761446,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":".\""},"finish_reason":null}]}
// here the JSON is over, what follows is only junk whitespace
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761446,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"}\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761446,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" \n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n\n\n\n\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n\n\n\n\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" \n\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761447,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n\n\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n\n\n\n\n\n\n\n\n\n\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" \n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761448,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n \n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761449,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761449,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n \n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761449,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761449,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"\n\n\n"},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761449,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
data: {"id":"chatcmpl-626","object":"chat.completion.chunk","created":1715761449,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" "},"finish_reason":null}]}
```
JSON mode does seem to be engaged, because the streamed output begins correctly with `{`, but the server fails to detect that the JSON is done, goes into whitespace-junk output mode, and after some internal limit simply stops generating without ever sending `finish_reason: "stop"`.
So there are three bugs:
1. JSON mode should not produce extra whitespace after the JSON.
2. JSON mode should send `finish_reason: "stop"` when the JSON is done.
3. When generation goes haywire and is aborted, a `finish_reason` should still be sent and the stream ended; currently a client just hangs waiting for further data. A client-side guard is sketched below.
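Until that is fixed, a client can guard itself by honoring `finish_reason` and `[DONE]` explicitly; a minimal Go sketch, with the stream contents standing in for the HTTP response body:
```
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type chunk struct {
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
		FinishReason *string `json:"finish_reason"`
	} `json:"choices"`
}

func main() {
	// Stand-in for the streaming HTTP response body.
	stream := strings.NewReader(`data: {"choices":[{"delta":{"content":"{"},"finish_reason":null}]}
data: {"choices":[{"delta":{"content":"}"},"finish_reason":"stop"}]}
data: [DONE]
`)

	sc := bufio.NewScanner(stream)
	for sc.Scan() {
		line := strings.TrimPrefix(sc.Text(), "data: ")
		if line == "" {
			continue // blank separator between SSE events
		}
		if line == "[DONE]" {
			break // explicit end-of-stream marker
		}
		var c chunk
		if err := json.Unmarshal([]byte(line), &c); err != nil {
			continue // skip keep-alives or partial lines
		}
		for _, ch := range c.Choices {
			fmt.Print(ch.Delta.Content)
			if ch.FinishReason != nil && *ch.FinishReason == "stop" {
				fmt.Println("\n-- finish_reason: stop --")
				return
			}
		}
	}
}
```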
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4446/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3851
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3851/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3851/comments
|
https://api.github.com/repos/ollama/ollama/issues/3851/events
|
https://github.com/ollama/ollama/issues/3851
| 2,259,624,265
|
I_kwDOJ0Z1Ps6GryFJ
| 3,851
|
Why Ollama is so terribly slow when I set format="json"
|
{
"login": "marksalpeter",
"id": 1033500,
"node_id": "MDQ6VXNlcjEwMzM1MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1033500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marksalpeter",
"html_url": "https://github.com/marksalpeter",
"followers_url": "https://api.github.com/users/marksalpeter/followers",
"following_url": "https://api.github.com/users/marksalpeter/following{/other_user}",
"gists_url": "https://api.github.com/users/marksalpeter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marksalpeter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marksalpeter/subscriptions",
"organizations_url": "https://api.github.com/users/marksalpeter/orgs",
"repos_url": "https://api.github.com/users/marksalpeter/repos",
"events_url": "https://api.github.com/users/marksalpeter/events{/privacy}",
"received_events_url": "https://api.github.com/users/marksalpeter/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 6
| 2024-04-23T19:32:01
| 2024-11-06T17:41:30
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This is a duplicate of #3154, which was closed, I'm assuming, by mistake.
Inference with the `format="json"` param is 10x slower than regular inference when additional context is included.
A prompt like this takes ~24s to return on an NVIDIA T4 with CUDA enabled and `format="json"`; the exact same prompt without it takes ~2s. This has got to be a bug, right? (A rough timing harness follows the prompt below.)
```
Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
${context}
Please respond in the following JSON schema
{
"${schema.fieldName}": {
"type": ${schema.type},
"description": ${schema.description}
}
}
Question: ${schema.description}
Helpful Answer:
```
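To reproduce the numbers, here is a rough Go timing harness (a sketch: the model name and prompt are placeholders, and only the `"format":"json"` field differs between the two requests):
```
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"time"
)

// timeRequest posts one non-streaming /api/generate request to a local
// ollama and returns how long the full response took.
func timeRequest(body string) time.Duration {
	start := time.Now()
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewBufferString(body))
	if err != nil {
		panic(err)
	}
	io.Copy(io.Discard, resp.Body) // drain so generation fully completes
	resp.Body.Close()
	return time.Since(start)
}

func main() {
	// "..." stands for the real context-heavy prompt above.
	plain := `{"model":"llama3","prompt":"...","stream":false}`
	asJSON := `{"model":"llama3","prompt":"...","stream":false,"format":"json"}`
	fmt.Println("plain:", timeRequest(plain))
	fmt.Println("json :", timeRequest(asJSON))
}
```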
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3851/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3851/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7540
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7540/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7540/comments
|
https://api.github.com/repos/ollama/ollama/issues/7540/events
|
https://github.com/ollama/ollama/issues/7540
| 2,639,909,948
|
I_kwDOJ0Z1Ps6dWdQ8
| 7,540
|
ollama blocking itself from binding port it's already using...?
|
{
"login": "gearskullguy",
"id": 32692076,
"node_id": "MDQ6VXNlcjMyNjkyMDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/32692076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gearskullguy",
"html_url": "https://github.com/gearskullguy",
"followers_url": "https://api.github.com/users/gearskullguy/followers",
"following_url": "https://api.github.com/users/gearskullguy/following{/other_user}",
"gists_url": "https://api.github.com/users/gearskullguy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gearskullguy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gearskullguy/subscriptions",
"organizations_url": "https://api.github.com/users/gearskullguy/orgs",
"repos_url": "https://api.github.com/users/gearskullguy/repos",
"events_url": "https://api.github.com/users/gearskullguy/events{/privacy}",
"received_events_url": "https://api.github.com/users/gearskullguy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 6677675697,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgU-sQ",
"url": "https://api.github.com/repos/ollama/ollama/labels/wsl",
"name": "wsl",
"color": "7E0821",
"default": false,
"description": "Issues using WSL"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-07T05:24:49
| 2024-12-02T15:19:58
| 2024-12-02T15:19:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've just been going through the current instructions, and this is a really weird error to get after seeing all the "Pulled" messages:
```
$ sudo docker compose --profile gpu-nvidia up
[+] Running 38/38
... Pulled etc ... 0.0s
Attaching to n8n, n8n-import, ollama, ollama-pull-llama, qdrant, self-hosted-ai-starter-kit-postgres-1
Error response from daemon: driver failed programming external connectivity on endpoint ollama (e4cf1d...dc2cf1): Error starting userland proxy: listen tcp4 0.0.0.0:11434: bind: address already in use

$ sudo lsof -i :11434
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ollama 226 ollama 3u IPv4 22777 0t0 TCP localhost:11434 (LISTEN)
```
It appears the compose setup tried to bind ollama to a port that an existing ollama instance is already using. Maybe I attached it in an earlier attempt (I ran into different errors previously), so this container may simply be redundant. Apologies if so.
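If the `lsof` output above is a natively installed Ollama (for example, via the Linux install script), a likely fix, assuming systemd manages that service, is to stop it before bringing up the container:
```
# Free port 11434 by stopping the host-level Ollama service
sudo systemctl stop ollama
sudo systemctl disable ollama   # optional: keep it from starting at boot
sudo docker compose --profile gpu-nvidia up
```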
### OS
WSL2
### GPU
Intel
### CPU
Intel
### Ollama version
0.3.12
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7540/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7840
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7840/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7840/comments
|
https://api.github.com/repos/ollama/ollama/issues/7840/events
|
https://github.com/ollama/ollama/issues/7840
| 2,694,079,352
|
I_kwDOJ0Z1Ps6glGN4
| 7,840
|
Please add a way to view the request after a template is applied
|
{
"login": "vt-alt",
"id": 36664211,
"node_id": "MDQ6VXNlcjM2NjY0MjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/36664211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vt-alt",
"html_url": "https://github.com/vt-alt",
"followers_url": "https://api.github.com/users/vt-alt/followers",
"following_url": "https://api.github.com/users/vt-alt/following{/other_user}",
"gists_url": "https://api.github.com/users/vt-alt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vt-alt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vt-alt/subscriptions",
"organizations_url": "https://api.github.com/users/vt-alt/orgs",
"repos_url": "https://api.github.com/users/vt-alt/repos",
"events_url": "https://api.github.com/users/vt-alt/events{/privacy}",
"received_events_url": "https://api.github.com/users/vt-alt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-26T10:27:50
| 2024-11-26T19:43:17
| 2024-11-26T19:43:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be nice to view the raw requests passing between ollama and the model. This would be useful for debugging templates (for example, requests with tools) and for educational purposes, to see how things work at a low level.
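One way to inspect the fully templated prompt today, assuming debug logging still behaves this way, is to start the server with `OLLAMA_DEBUG` set; the server log then includes the final prompt handed to the model:
```
# Assumption: OLLAMA_DEBUG=1 makes the server log the rendered prompt
OLLAMA_DEBUG=1 ollama serve
# in another shell, send any request and watch the server log
ollama run llama3.1 "hello"
```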
|
{
"login": "vt-alt",
"id": 36664211,
"node_id": "MDQ6VXNlcjM2NjY0MjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/36664211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vt-alt",
"html_url": "https://github.com/vt-alt",
"followers_url": "https://api.github.com/users/vt-alt/followers",
"following_url": "https://api.github.com/users/vt-alt/following{/other_user}",
"gists_url": "https://api.github.com/users/vt-alt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vt-alt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vt-alt/subscriptions",
"organizations_url": "https://api.github.com/users/vt-alt/orgs",
"repos_url": "https://api.github.com/users/vt-alt/repos",
"events_url": "https://api.github.com/users/vt-alt/events{/privacy}",
"received_events_url": "https://api.github.com/users/vt-alt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7840/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4344
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4344/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4344/comments
|
https://api.github.com/repos/ollama/ollama/issues/4344/events
|
https://github.com/ollama/ollama/issues/4344
| 2,290,692,929
|
I_kwDOJ0Z1Ps6IiTNB
| 4,344
|
failed to run llama3 on macos
|
{
"login": "Mercccccc",
"id": 91967966,
"node_id": "U_kgDOBXtR3g",
"avatar_url": "https://avatars.githubusercontent.com/u/91967966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mercccccc",
"html_url": "https://github.com/Mercccccc",
"followers_url": "https://api.github.com/users/Mercccccc/followers",
"following_url": "https://api.github.com/users/Mercccccc/following{/other_user}",
"gists_url": "https://api.github.com/users/Mercccccc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mercccccc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mercccccc/subscriptions",
"organizations_url": "https://api.github.com/users/Mercccccc/orgs",
"repos_url": "https://api.github.com/users/Mercccccc/repos",
"events_url": "https://api.github.com/users/Mercccccc/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mercccccc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-11T05:52:37
| 2024-05-11T06:00:11
| 2024-05-11T05:58:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
ollama run llama3
pulling manifest
pulling 00e1317cbf74... 100% ▕██████████████████████████████████████████████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕██████████████████████████████████████████████████████████▏ 12 KB
pulling 8ab4849b038c... 100% ▕██████████████████████████████████████████████████████████▏ 254 B
pulling 577073ffcc6c... 100% ▕██████████████████████████████████████████████████████████▏ 110 B
pulling ad1518640c43... 100% ▕██████████████████████████████████████████████████████████▏ 483 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: error starting the external llama server: fork/exec /var/folders/c3/vcxjn8bn59jd7th67f2fr9840000gn/T/ollama1329764937/runners/metal/ollama_llama_server: no such file or directory
```
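The error points at a runner binary that Ollama extracts under the temp directory, which macOS cleans up periodically. A low-risk thing to try, assuming a stale temp payload is the cause rather than a corrupt install, is to clear it and relaunch:
```
# Assumption: the extracted runners under $TMPDIR were cleaned up
rm -rf "$TMPDIR"/ollama*
# quit and relaunch the Ollama app so it re-extracts the runners
```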
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4344/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4447
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4447/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4447/comments
|
https://api.github.com/repos/ollama/ollama/issues/4447/events
|
https://github.com/ollama/ollama/issues/4447
| 2,297,232,264
|
I_kwDOJ0Z1Ps6I7PuI
| 4,447
|
NumCtx can't change, just 2048
|
{
"login": "jianwen-wang",
"id": 137679484,
"node_id": "U_kgDOCDTSfA",
"avatar_url": "https://avatars.githubusercontent.com/u/137679484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianwen-wang",
"html_url": "https://github.com/jianwen-wang",
"followers_url": "https://api.github.com/users/jianwen-wang/followers",
"following_url": "https://api.github.com/users/jianwen-wang/following{/other_user}",
"gists_url": "https://api.github.com/users/jianwen-wang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jianwen-wang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jianwen-wang/subscriptions",
"organizations_url": "https://api.github.com/users/jianwen-wang/orgs",
"repos_url": "https://api.github.com/users/jianwen-wang/repos",
"events_url": "https://api.github.com/users/jianwen-wang/events{/privacy}",
"received_events_url": "https://api.github.com/users/jianwen-wang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-15T08:39:36
| 2024-08-02T03:06:35
| 2024-05-15T15:00:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using the LLaMA 3 model through the OpenAI-compatible /v1/chat/completions API, the request messages can't exceed 2k tokens, even though the LLaMA 3 8B model supports contexts up to 8K tokens.
Examining the code, api/types.go line 475 shows that DefaultOptions hard-codes NumCtx to 2048, and the OpenAI-compatible endpoint offers no way to override this default.
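A common workaround, assuming Modelfile parameters carry over to models served through the OpenAI-compatible endpoint, is to create a derived model with a larger `num_ctx` and address that model in /v1/chat/completions (the model name `llama3-8k` below is illustrative):
```
# Derive a model whose context window is 8K instead of the 2048 default
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER num_ctx 8192
EOF
ollama create llama3-8k -f Modelfile
# then use "model": "llama3-8k" in /v1/chat/completions requests
```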
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.37
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4447/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4447/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/450
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/450/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/450/comments
|
https://api.github.com/repos/ollama/ollama/issues/450/events
|
https://github.com/ollama/ollama/pull/450
| 1,877,579,278
|
PR_kwDOJ0Z1Ps5ZWP5q
| 450
|
update readme
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-01T14:55:13
| 2023-09-01T15:21:51
| 2023-09-01T15:21:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/450",
"html_url": "https://github.com/ollama/ollama/pull/450",
"diff_url": "https://github.com/ollama/ollama/pull/450.diff",
"patch_url": "https://github.com/ollama/ollama/pull/450.patch",
"merged_at": "2023-09-01T15:21:50"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/450/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6221
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6221/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6221/comments
|
https://api.github.com/repos/ollama/ollama/issues/6221/events
|
https://github.com/ollama/ollama/issues/6221
| 2,452,290,027
|
I_kwDOJ0Z1Ps6SKvnr
| 6,221
|
Mistake tools calls +-every user prompt.
|
{
"login": "websharik",
"id": 33082364,
"node_id": "MDQ6VXNlcjMzMDgyMzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/33082364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/websharik",
"html_url": "https://github.com/websharik",
"followers_url": "https://api.github.com/users/websharik/followers",
"following_url": "https://api.github.com/users/websharik/following{/other_user}",
"gists_url": "https://api.github.com/users/websharik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/websharik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/websharik/subscriptions",
"organizations_url": "https://api.github.com/users/websharik/orgs",
"repos_url": "https://api.github.com/users/websharik/repos",
"events_url": "https://api.github.com/users/websharik/events{/privacy}",
"received_events_url": "https://api.github.com/users/websharik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-08-07T04:07:30
| 2024-08-07T04:07:30
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Model: `llama3.1:8b`
TOOLS:
```json
[
{
"type": "function",
"function": {
"name": "base64DecodeTool",
"description": "Tool to decode base64 string.",
"parameters": {
"type": "object",
"properties": {
"str": {
"type": "string",
"description": "string to be decoded"
}
},
"required": [
"str"
]
}
}
},
{
"type": "function",
"function": {
"name": "getAnimalNameRatingTool",
"description": "Tool to get cat/dog name rating.",
"parameters": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "name of the animal"
}
},
"required": [
"name"
]
}
}
},
{
"type": "function",
"function": {
"name": "toggleLightTool",
"description": "Tool to on/off light",
"parameters": {
"type": "object",
"properties": {
"state": {
"type": "boolean",
"description": "state of light"
}
},
"required": [
"state"
]
}
}
}
]
```
Example dialogs:
```json
[
{ "role": "user", "content": "Hey" },
{ "role": "assistant", "tool_calls": "..." },
{ "role": "tool", "content": "Rating of name \"cat\" is: 10" },
{ "role": "assistant", "content": "You asked about cat, and I got it's rating as 10! Would you like to know something else?" }
]
```
```json
[
{ "role": "user", "content": "Hey" },
{ "role": "assistant", "tool_calls": "..." },
{ "role": "tool", "content": "Light now is off" },
{ "role": "assistant", "content": "Hello" }
]
```
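For reference, the `"tool_calls": "..."` placeholders above elide the structure the /api/chat endpoint actually uses; an assistant message carrying a tool call looks roughly like the sketch below (the field shape follows ollama's tool-calling API; the argument values are illustrative):
```json
{
  "role": "assistant",
  "tool_calls": [
    {
      "function": {
        "name": "getAnimalNameRatingTool",
        "arguments": { "name": "cat" }
      }
    }
  ]
}
```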
### OS
Docker
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.3
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6221/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4214
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4214/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4214/comments
|
https://api.github.com/repos/ollama/ollama/issues/4214/events
|
https://github.com/ollama/ollama/issues/4214
| 2,281,979,049
|
I_kwDOJ0Z1Ps6IBDyp
| 4,214
|
(Prune unwanted dangling models) Prune data from models which where partially downloaded
|
{
"login": "arthurGrigo",
"id": 35745065,
"node_id": "MDQ6VXNlcjM1NzQ1MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/35745065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arthurGrigo",
"html_url": "https://github.com/arthurGrigo",
"followers_url": "https://api.github.com/users/arthurGrigo/followers",
"following_url": "https://api.github.com/users/arthurGrigo/following{/other_user}",
"gists_url": "https://api.github.com/users/arthurGrigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arthurGrigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arthurGrigo/subscriptions",
"organizations_url": "https://api.github.com/users/arthurGrigo/orgs",
"repos_url": "https://api.github.com/users/arthurGrigo/repos",
"events_url": "https://api.github.com/users/arthurGrigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/arthurGrigo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-06T23:59:11
| 2024-05-07T16:44:39
| 2024-05-07T16:44:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Sometimes a user starts downloading the wrong model or runs out of space (https://github.com/ollama/ollama/issues/2497).
A partially downloaded model is not visible through 'ollama list' after the download is cancelled, and therefore cannot be removed using 'ollama rm <model>'.
At the moment users have to find the corresponding sha256 blob in the ollama directory and remove it manually, or fully download the model just to be able to delete it.
Path for Windows users: "C:\Users\your_user\.ollama\models\blobs"
A command like 'ollama prune' or 'ollama autoremove' would be great.
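A stopgap until such a command exists, assuming interrupted pulls leave `-partial`-suffixed blobs behind (as recent versions appear to do), is to delete those files directly:
```
# Assumption: cancelled downloads leave sha256-*-partial files in blobs/
find ~/.ollama/models/blobs -name '*-partial*' -print    # inspect first
find ~/.ollama/models/blobs -name '*-partial*' -delete   # then remove
```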
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4214/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1958
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1958/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1958/comments
|
https://api.github.com/repos/ollama/ollama/issues/1958/events
|
https://github.com/ollama/ollama/pull/1958
| 2,079,367,384
|
PR_kwDOJ0Z1Ps5j9BJJ
| 1,958
|
ci: update setup-go action
|
{
"login": "purificant",
"id": 4669013,
"node_id": "MDQ6VXNlcjQ2NjkwMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4669013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/purificant",
"html_url": "https://github.com/purificant",
"followers_url": "https://api.github.com/users/purificant/followers",
"following_url": "https://api.github.com/users/purificant/following{/other_user}",
"gists_url": "https://api.github.com/users/purificant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/purificant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/purificant/subscriptions",
"organizations_url": "https://api.github.com/users/purificant/orgs",
"repos_url": "https://api.github.com/users/purificant/repos",
"events_url": "https://api.github.com/users/purificant/events{/privacy}",
"received_events_url": "https://api.github.com/users/purificant/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-12T17:41:11
| 2024-01-18T22:53:37
| 2024-01-18T22:53:36
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1958",
"html_url": "https://github.com/ollama/ollama/pull/1958",
"diff_url": "https://github.com/ollama/ollama/pull/1958.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1958.patch",
"merged_at": "2024-01-18T22:53:36"
}
|
This PR updates [actions/setup-go](https://github.com/actions/setup-go/releases/tag/v5.0.0) ~~and tests with go 1.21~~
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1958/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6629
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6629/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6629/comments
|
https://api.github.com/repos/ollama/ollama/issues/6629/events
|
https://github.com/ollama/ollama/issues/6629
| 2,504,550,794
|
I_kwDOJ0Z1Ps6VSGmK
| 6,629
|
Fail to Convert Huggingface Llama3.1 with ollama create
|
{
"login": "YueChenkkk",
"id": 36752416,
"node_id": "MDQ6VXNlcjM2NzUyNDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/36752416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YueChenkkk",
"html_url": "https://github.com/YueChenkkk",
"followers_url": "https://api.github.com/users/YueChenkkk/followers",
"following_url": "https://api.github.com/users/YueChenkkk/following{/other_user}",
"gists_url": "https://api.github.com/users/YueChenkkk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YueChenkkk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YueChenkkk/subscriptions",
"organizations_url": "https://api.github.com/users/YueChenkkk/orgs",
"repos_url": "https://api.github.com/users/YueChenkkk/repos",
"events_url": "https://api.github.com/users/YueChenkkk/events{/privacy}",
"received_events_url": "https://api.github.com/users/YueChenkkk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-09-04T07:39:23
| 2024-09-10T18:44:26
| 2024-09-10T18:44:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I downloaded the meta-llama-3.1-8b model from huggingface. [https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct]
And I installed ollama-linux-amd64 (version==0.3.6) manually.
When I start the service with `ollama serve &`, it seems nothing goes wrong.
But after I build a Modelfile with only `FROM .` in the huggingface model directory and try to create a ollama model with `ollama create my-llama-model`, I got the following error message:
```
usr-yc@lm-machine:~/models/vanilla-llama-3.1-8b$ ollama create my-llama-model
[GIN] 2024/09/04 - 15:12:28 | 200 | 23.352µs | 127.0.0.1 | HEAD "/"
transferring model data ⠙ [GIN] 2024/09/04 - 15:13:00 | 200 | 168.198µs | 127.0.0.1 | POST "/api/blobs/sha256:xxxxxxx"
[GIN] 2024/09/04 - 15:13:00 | 200 | 5.022104ms | 127.0.0.1 | POST "/api/create"
transferring model data 100%
converting model
Error: proto: cannot parse invalid wire-format data
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6629/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7671
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7671/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7671/comments
|
https://api.github.com/repos/ollama/ollama/issues/7671/events
|
https://github.com/ollama/ollama/pull/7671
| 2,660,148,716
|
PR_kwDOJ0Z1Ps6B-d_s
| 7,671
|
build: add sync-clean target
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-11-14T21:42:01
| 2024-11-20T22:12:40
| 2024-11-20T22:12:37
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7671",
"html_url": "https://github.com/ollama/ollama/pull/7671",
"diff_url": "https://github.com/ollama/ollama/pull/7671.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7671.patch",
"merged_at": null
}
|
A helpful target to ensure all vendored files are fresh.
Because the sync target relies on file timestamps, jumping back and forth between different upstream commits can confuse it: the "new" content has older timestamps, so it fails to sync. This gives a quick way to reset all the vendored files and avoid potentially missing anything.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7671/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7091
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7091/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7091/comments
|
https://api.github.com/repos/ollama/ollama/issues/7091/events
|
https://github.com/ollama/ollama/issues/7091
| 2,563,852,474
|
I_kwDOJ0Z1Ps6Y0Ui6
| 7,091
|
What are ollama doing?
|
{
"login": "Molnfront",
"id": 935328,
"node_id": "MDQ6VXNlcjkzNTMyOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/935328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Molnfront",
"html_url": "https://github.com/Molnfront",
"followers_url": "https://api.github.com/users/Molnfront/followers",
"following_url": "https://api.github.com/users/Molnfront/following{/other_user}",
"gists_url": "https://api.github.com/users/Molnfront/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Molnfront/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Molnfront/subscriptions",
"organizations_url": "https://api.github.com/users/Molnfront/orgs",
"repos_url": "https://api.github.com/users/Molnfront/repos",
"events_url": "https://api.github.com/users/Molnfront/events{/privacy}",
"received_events_url": "https://api.github.com/users/Molnfront/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2024-10-03T11:48:42
| 2024-10-03T15:46:58
| 2024-10-03T15:46:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I let my Gemma 2 model work in the background because watching it write in the terminal was like watching paint dry. It took all the memory and CPU and a lot of GPU. But I suspect nothing is being done, because the ollama process is now at only 0.4% and the ollama server is at 0. When I ask if it is working on the task, it gives a long, slow answer; then ollama is back at 0 to 0.4%.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7091/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6445
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6445/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6445/comments
|
https://api.github.com/repos/ollama/ollama/issues/6445/events
|
https://github.com/ollama/ollama/pull/6445
| 2,476,001,382
|
PR_kwDOJ0Z1Ps544Ie5
| 6,445
|
Update manual instructions with discrete ROCm bundle
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-20T15:56:57
| 2024-08-27T20:42:31
| 2024-08-27T20:42:28
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6445",
"html_url": "https://github.com/ollama/ollama/pull/6445",
"diff_url": "https://github.com/ollama/ollama/pull/6445.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6445.patch",
"merged_at": "2024-08-27T20:42:28"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6445/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/677
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/677/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/677/comments
|
https://api.github.com/repos/ollama/ollama/issues/677/events
|
https://github.com/ollama/ollama/issues/677
| 1,922,588,766
|
I_kwDOJ0Z1Ps5ymGBe
| 677
|
Connecting the client to the server
|
{
"login": "skorokithakis",
"id": 23648,
"node_id": "MDQ6VXNlcjIzNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/23648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skorokithakis",
"html_url": "https://github.com/skorokithakis",
"followers_url": "https://api.github.com/users/skorokithakis/followers",
"following_url": "https://api.github.com/users/skorokithakis/following{/other_user}",
"gists_url": "https://api.github.com/users/skorokithakis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skorokithakis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skorokithakis/subscriptions",
"organizations_url": "https://api.github.com/users/skorokithakis/orgs",
"repos_url": "https://api.github.com/users/skorokithakis/repos",
"events_url": "https://api.github.com/users/skorokithakis/events{/privacy}",
"received_events_url": "https://api.github.com/users/skorokithakis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-02T20:00:29
| 2023-10-02T20:08:25
| 2023-10-02T20:08:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is there any way to specify the hostname of the server for `ollama run` to connect to? I can use the HTTP API, but what about the CLI client? I haven't been able to find this info in the docs.
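The CLI honors the `OLLAMA_HOST` environment variable on the client side as well as the server side, so pointing `ollama run` at a remote server looks like this (hostname is illustrative):
```
# Client-side: tell the CLI which server to talk to
OLLAMA_HOST=http://my-server:11434 ollama run llama2
```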
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/677/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7217
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7217/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7217/comments
|
https://api.github.com/repos/ollama/ollama/issues/7217/events
|
https://github.com/ollama/ollama/pull/7217
| 2,590,088,318
|
PR_kwDOJ0Z1Ps5-vcMr
| 7,217
|
Add arm64 cuda jetpack variants
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2024-10-15T22:39:07
| 2024-12-30T09:52:50
| 2024-11-12T18:31:52
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7217",
"html_url": "https://github.com/ollama/ollama/pull/7217",
"diff_url": "https://github.com/ollama/ollama/pull/7217.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7217.patch",
"merged_at": "2024-11-12T18:31:52"
}
|
This adds 2 new variants for the arm64 build to support nvidia jetson systems based on jetpack 5 and 6. Jetpack 4 is too old to be built with our toolchain (the older cuda requires an old gcc which can't build llama.cpp) and will remain unsupported.
The sbsa discrete GPU cuda libraries we bundle in the existing arm64 build are incompatible with jetson iGPU systems. Unfortunately swapping them at runtime isn't viable given the way nvcc compilation/linking works, so we need to actually build and link against those specific cuda libraries, and bundle them.
File sizes are too large to try to combine into a single unified tgz bundle, so this splits things out into a main bundle which contains all the runners, and then two auxiliary bundles, one for jetpack 5 and 6 which contain all the libraries specific to those versions.
Fixes https://github.com/ollama/ollama/issues/2408
Fixes https://github.com/ollama/ollama/issues/4693
Fixes https://github.com/ollama/ollama/issues/5100
Fixes https://github.com/ollama/ollama/issues/4861
Fixes #6999
Fixes #7293
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7217/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6378
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6378/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6378/comments
|
https://api.github.com/repos/ollama/ollama/issues/6378/events
|
https://github.com/ollama/ollama/pull/6378
| 2,468,874,640
|
PR_kwDOJ0Z1Ps54gY5Q
| 6,378
|
Add confichat to README.md
|
{
"login": "1runeberg",
"id": 17371351,
"node_id": "MDQ6VXNlcjE3MzcxMzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/17371351?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1runeberg",
"html_url": "https://github.com/1runeberg",
"followers_url": "https://api.github.com/users/1runeberg/followers",
"following_url": "https://api.github.com/users/1runeberg/following{/other_user}",
"gists_url": "https://api.github.com/users/1runeberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1runeberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1runeberg/subscriptions",
"organizations_url": "https://api.github.com/users/1runeberg/orgs",
"repos_url": "https://api.github.com/users/1runeberg/repos",
"events_url": "https://api.github.com/users/1runeberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/1runeberg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-08-15T20:16:19
| 2024-09-04T21:26:03
| 2024-09-04T21:26:02
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6378",
"html_url": "https://github.com/ollama/ollama/pull/6378",
"diff_url": "https://github.com/ollama/ollama/pull/6378.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6378.patch",
"merged_at": "2024-09-04T21:26:02"
}
|
- Added ConfiChat to Community Integration > Web & Desktop
- Added ConfiChat to Community Integration > Mobile
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6378/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3803
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3803/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3803/comments
|
https://api.github.com/repos/ollama/ollama/issues/3803/events
|
https://github.com/ollama/ollama/issues/3803
| 2,255,195,626
|
I_kwDOJ0Z1Ps6Ga43q
| 3,803
|
Does not produce same results via curl API for the same model.
|
{
"login": "MathematicianOnGithub",
"id": 138249377,
"node_id": "U_kgDOCD2EoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/138249377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MathematicianOnGithub",
"html_url": "https://github.com/MathematicianOnGithub",
"followers_url": "https://api.github.com/users/MathematicianOnGithub/followers",
"following_url": "https://api.github.com/users/MathematicianOnGithub/following{/other_user}",
"gists_url": "https://api.github.com/users/MathematicianOnGithub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MathematicianOnGithub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MathematicianOnGithub/subscriptions",
"organizations_url": "https://api.github.com/users/MathematicianOnGithub/orgs",
"repos_url": "https://api.github.com/users/MathematicianOnGithub/repos",
"events_url": "https://api.github.com/users/MathematicianOnGithub/events{/privacy}",
"received_events_url": "https://api.github.com/users/MathematicianOnGithub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-04-21T17:59:25
| 2024-04-26T12:01:19
| 2024-04-26T12:01:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
`export OLLAMA_HOST="127.0.0.1:3336"; ollama create tagger -f tagger_Modelfile && ollama run tagger`
Running the request below does NOT produce the same results as the terminal CLI started by the command above.
```
curl http://localhost:3336/api/generate -d '{
"model": "tagger",
"prompt": "hello"
}'
```
with or without streaming.
It seems like it is using a different model somehow; it just is not giving the same responses at all.
tagger_Modelfile:
```
FROM gemma:7b
PARAMETER temperature 0.5
PARAMETER num_ctx 8192
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>User
{{ .Prompt }}<|im_end|>
<|im_start|>Assistant
"""
SYSTEM """
Categorize user text using hierarchical tags.
You are to respond in one sentence. Example responses:
- #mathematics #linear-algebra #matrices #vectors
- #philosophy #epistemology
- #computer-science #algorithms #memoization
- #statistics #probability-theory
- #language #grammar
- #statistics #sample-theory #confidence-intervals
Especially important to get down whether it is calculus, analysis, combinatorics, etc.
"""
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
MESSAGE user """
Matrix, a set of numbers arranged in rows and columns so as to form a rectangular array. The numbers are called the elements, or entries, of the matrix. Matrices have wide applications in engineering, physics, economics, and statistics as well as in various branches of mathematics.
"""
MESSAGE assistant #mathematics #linear-algebra #matrices
MESSAGE user """
the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers n ≥ k ≥ 0
"""
MESSAGE assistant #mathematics #combinatorics #binomial-coefficient
```
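One possible explanation (an assumption on my part, not confirmed here): `ollama run` opens a chat session, so the few-shot `MESSAGE` pairs above are sent as conversation history, while `/api/generate` only templates the single prompt and skips them. A quick way to test that theory is to hit the chat endpoint instead, which should carry the `MESSAGE` history:
```
curl http://localhost:3336/api/chat -d '{
  "model": "tagger",
  "messages": [{ "role": "user", "content": "hello" }]
}'
```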
Help much appreciated!
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "MathematicianOnGithub",
"id": 138249377,
"node_id": "U_kgDOCD2EoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/138249377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MathematicianOnGithub",
"html_url": "https://github.com/MathematicianOnGithub",
"followers_url": "https://api.github.com/users/MathematicianOnGithub/followers",
"following_url": "https://api.github.com/users/MathematicianOnGithub/following{/other_user}",
"gists_url": "https://api.github.com/users/MathematicianOnGithub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MathematicianOnGithub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MathematicianOnGithub/subscriptions",
"organizations_url": "https://api.github.com/users/MathematicianOnGithub/orgs",
"repos_url": "https://api.github.com/users/MathematicianOnGithub/repos",
"events_url": "https://api.github.com/users/MathematicianOnGithub/events{/privacy}",
"received_events_url": "https://api.github.com/users/MathematicianOnGithub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3803/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2728
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2728/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2728/comments
|
https://api.github.com/repos/ollama/ollama/issues/2728/events
|
https://github.com/ollama/ollama/pull/2728
| 2,152,283,257
|
PR_kwDOJ0Z1Ps5n0ZrB
| 2,728
|
feat: implement OpenAI model listing
|
{
"login": "da-z",
"id": 3681019,
"node_id": "MDQ6VXNlcjM2ODEwMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3681019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/da-z",
"html_url": "https://github.com/da-z",
"followers_url": "https://api.github.com/users/da-z/followers",
"following_url": "https://api.github.com/users/da-z/following{/other_user}",
"gists_url": "https://api.github.com/users/da-z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/da-z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/da-z/subscriptions",
"organizations_url": "https://api.github.com/users/da-z/orgs",
"repos_url": "https://api.github.com/users/da-z/repos",
"events_url": "https://api.github.com/users/da-z/events{/privacy}",
"received_events_url": "https://api.github.com/users/da-z/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-24T12:14:43
| 2024-09-05T02:57:05
| 2024-09-05T02:57:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2728",
"html_url": "https://github.com/ollama/ollama/pull/2728",
"diff_url": "https://github.com/ollama/ollama/pull/2728.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2728.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2728/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/897
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/897/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/897/comments
|
https://api.github.com/repos/ollama/ollama/issues/897/events
|
https://github.com/ollama/ollama/pull/897
| 1,960,140,405
|
PR_kwDOJ0Z1Ps5dr_NY
| 897
|
allow for a configurable ollama model storage directory
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 27
| 2023-10-24T21:53:27
| 2024-06-19T04:14:47
| 2023-10-27T14:19:59
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/897",
"html_url": "https://github.com/ollama/ollama/pull/897",
"diff_url": "https://github.com/ollama/ollama/pull/897.diff",
"patch_url": "https://github.com/ollama/ollama/pull/897.patch",
"merged_at": "2023-10-27T14:19:59"
}
|
- set `OLLAMA_MODELS` in the environment that ollama is running in to change where models are stored
- update docs
```bash
$ OLLAMA_MODELS=/Users/bruce/ollama_models ollama serve
# store models in /Users/bruce/ollama_models
```
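For a systemd-managed Linux install (not part of this PR, just a hedged sketch of using the same variable in that context), the equivalent would be a unit override:
```bash
# hypothetical systemd override; unit name assumes the standard Linux install
sudo systemctl edit ollama.service
# add under [Service]:
#   Environment="OLLAMA_MODELS=/data/ollama_models"
sudo systemctl restart ollama
```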
Resolves ~#228~ #153
I'll hold off on merging this until #847 is in to avoid causing that PR pain.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/897/reactions",
"total_count": 21,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 12,
"rocket": 6,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/897/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8461
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8461/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8461/comments
|
https://api.github.com/repos/ollama/ollama/issues/8461/events
|
https://github.com/ollama/ollama/issues/8461
| 2,793,832,386
|
I_kwDOJ0Z1Ps6mhn_C
| 8,461
|
Maintain Object Key Order in JSON Schema Outputs
|
{
"login": "ElliottStorey",
"id": 70775866,
"node_id": "MDQ6VXNlcjcwNzc1ODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/70775866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ElliottStorey",
"html_url": "https://github.com/ElliottStorey",
"followers_url": "https://api.github.com/users/ElliottStorey/followers",
"following_url": "https://api.github.com/users/ElliottStorey/following{/other_user}",
"gists_url": "https://api.github.com/users/ElliottStorey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ElliottStorey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ElliottStorey/subscriptions",
"organizations_url": "https://api.github.com/users/ElliottStorey/orgs",
"repos_url": "https://api.github.com/users/ElliottStorey/repos",
"events_url": "https://api.github.com/users/ElliottStorey/events{/privacy}",
"received_events_url": "https://api.github.com/users/ElliottStorey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-16T21:42:16
| 2025-01-16T21:42:16
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, when generating JSON outputs based on a provided schema, the keys in objects do not retain the order specified in the schema. This behavior differs from OpenAI's implementation, where the order of keys is preserved as defined. Maintaining the specified key order is crucial for applications that rely on consistent JSON structures. Implementing this feature would enhance compatibility and predictability of the generated outputs.
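For context, a minimal repro sketch (assuming the `format` field of `/api/chat` accepts a JSON schema object; the model and field names here are illustrative):
```python
import requests  # minimal repro sketch

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "email": {"type": "string"},
    },
    "required": ["name", "age", "email"],
}

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "Describe a fictional person."}],
        "format": schema,
        "stream": False,
    },
)
# keys in the returned object may not follow the schema's declared order
print(resp.json()["message"]["content"])
```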
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8461/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2591
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2591/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2591/comments
|
https://api.github.com/repos/ollama/ollama/issues/2591/events
|
https://github.com/ollama/ollama/issues/2591
| 2,142,264,088
|
I_kwDOJ0Z1Ps5_sFsY
| 2,591
|
Failure after download via curl
|
{
"login": "krenax",
"id": 127540387,
"node_id": "U_kgDOB5ocow",
"avatar_url": "https://avatars.githubusercontent.com/u/127540387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krenax",
"html_url": "https://github.com/krenax",
"followers_url": "https://api.github.com/users/krenax/followers",
"following_url": "https://api.github.com/users/krenax/following{/other_user}",
"gists_url": "https://api.github.com/users/krenax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krenax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krenax/subscriptions",
"organizations_url": "https://api.github.com/users/krenax/orgs",
"repos_url": "https://api.github.com/users/krenax/repos",
"events_url": "https://api.github.com/users/krenax/events{/privacy}",
"received_events_url": "https://api.github.com/users/krenax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-19T12:30:36
| 2024-02-19T13:40:13
| 2024-02-19T13:40:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ollama cannot be started after downloading it via curl. I received the following message:
```
Warning: Failed to open the file /tmp/tmp.T4lmv4bro6/ollama: No such file or
Warning: directory
curl: (23) Failure writing output to destination
```
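The warning suggests the script's temp directory disappeared mid-download. A hedged retry sketch (assumes the install script's `mktemp` honors `TMPDIR`, which it normally does):
```bash
# point the installer at a temp dir you know is writable and persistent
mkdir -p "$HOME/tmp"
TMPDIR="$HOME/tmp" sh -c 'curl -fsSL https://ollama.com/install.sh | sh'
```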
|
{
"login": "krenax",
"id": 127540387,
"node_id": "U_kgDOB5ocow",
"avatar_url": "https://avatars.githubusercontent.com/u/127540387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krenax",
"html_url": "https://github.com/krenax",
"followers_url": "https://api.github.com/users/krenax/followers",
"following_url": "https://api.github.com/users/krenax/following{/other_user}",
"gists_url": "https://api.github.com/users/krenax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krenax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krenax/subscriptions",
"organizations_url": "https://api.github.com/users/krenax/orgs",
"repos_url": "https://api.github.com/users/krenax/repos",
"events_url": "https://api.github.com/users/krenax/events{/privacy}",
"received_events_url": "https://api.github.com/users/krenax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2591/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5705
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5705/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5705/comments
|
https://api.github.com/repos/ollama/ollama/issues/5705/events
|
https://github.com/ollama/ollama/pull/5705
| 2,409,155,752
|
PR_kwDOJ0Z1Ps51aNFf
| 5,705
|
Enable windows error dialog for subprocess
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-15T16:31:31
| 2024-07-26T21:49:37
| 2024-07-26T21:49:34
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5705",
"html_url": "https://github.com/ollama/ollama/pull/5705",
"diff_url": "https://github.com/ollama/ollama/pull/5705.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5705.patch",
"merged_at": "2024-07-26T21:49:34"
}
|
Make sure that if something goes wrong spawning the process, the user gets
enough info to be able to try to self-correct, or at least file a bug
with details so we can fix it. Once the process starts, we immediately
change back to the recommended setting to prevent the blocking dialog.
This ensures if the model fails to load (OOM, unsupported model type,
etc.) the process will exit quickly and we can scan the stdout/stderr
of the subprocess for the reason to report via API.
Example when I remove the `ggml.dll` side-band and try to run a model

Note: ollama will be ~stuck until the user clicks the OK button as the subprocess is paused but doesn't exit until they acknowledge, and depending on the nature of the problem, they'll likely get the dialog ~2 times as we attempt fallback runners. If they didn't click the button within our startup timeout (currently 5min) we'd eventually timeout the model load.
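For readers skimming later, a minimal sketch of the mechanism described above (illustrative names, not the PR's exact code; assumes `golang.org/x/sys/windows`):
```go
//go:build windows

package main

import (
	"os/exec"

	"golang.org/x/sys/windows"
)

// startWithErrorDialog clears the process error mode so Windows can show
// its loader-failure dialog while the child starts, then restores the
// recommended non-blocking mode immediately afterwards.
func startWithErrorDialog(cmd *exec.Cmd) error {
	const quiet = windows.SEM_FAILCRITICALERRORS | windows.SEM_NOGPFAULTERRORBOX
	windows.SetErrorMode(0) // allow the dialog during spawn
	err := cmd.Start()
	windows.SetErrorMode(quiet) // back to quiet, non-blocking behavior
	return err
}

func main() {
	_ = startWithErrorDialog(exec.Command("runner.exe")) // hypothetical child binary
}
```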
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5705/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4620
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4620/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4620/comments
|
https://api.github.com/repos/ollama/ollama/issues/4620/events
|
https://github.com/ollama/ollama/issues/4620
| 2,316,133,442
|
I_kwDOJ0Z1Ps6KDWRC
| 4,620
|
Llava 1.6 34B fp16: refuses to answer questions on forms or hallucinates, when official Llava 1.6 34B demo does answer them perfectly
|
{
"login": "ChristianWeyer",
"id": 888718,
"node_id": "MDQ6VXNlcjg4ODcxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/888718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChristianWeyer",
"html_url": "https://github.com/ChristianWeyer",
"followers_url": "https://api.github.com/users/ChristianWeyer/followers",
"following_url": "https://api.github.com/users/ChristianWeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/ChristianWeyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChristianWeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChristianWeyer/subscriptions",
"organizations_url": "https://api.github.com/users/ChristianWeyer/orgs",
"repos_url": "https://api.github.com/users/ChristianWeyer/repos",
"events_url": "https://api.github.com/users/ChristianWeyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChristianWeyer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2024-05-24T19:32:32
| 2024-05-24T19:38:17
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hey all,
I am using https://ollama.com/library/llava:34b-v1.6-fp16.
When asking questions about data in a form (see attached, it is a public sample), the model refuses to answer them or it hallucinates.
```
ollama run llava:34b-v1.6-fp16
>>> How is the ending balance? ./demo.png
Added image '.demo.png'
The ending balance on the credit card statement shown in the image is €7,531.25. This is calculated by subtracting
all charges from your total amount available to spend (or "Amount Available") and adding any payments you've made or
credits received during that time period.
```
However, when using the exact same form image file here: https://llava.hliu.cc - everything works perfectly and it answers the question with the correct data from the field in the form: € 4.728,33.
There has to be something very different in the quant (and maybe prompt) provided by Ollama.
Any ideas?

### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.38
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4620/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7959
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7959/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7959/comments
|
https://api.github.com/repos/ollama/ollama/issues/7959/events
|
https://github.com/ollama/ollama/issues/7959
| 2,721,519,206
|
I_kwDOJ0Z1Ps6iNxZm
| 7,959
|
FROM ./vicuna-33b.Q4_0.gguf
|
{
"login": "enzoxic",
"id": 157711992,
"node_id": "U_kgDOCWZ-eA",
"avatar_url": "https://avatars.githubusercontent.com/u/157711992?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoxic",
"html_url": "https://github.com/enzoxic",
"followers_url": "https://api.github.com/users/enzoxic/followers",
"following_url": "https://api.github.com/users/enzoxic/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoxic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoxic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoxic/subscriptions",
"organizations_url": "https://api.github.com/users/enzoxic/orgs",
"repos_url": "https://api.github.com/users/enzoxic/repos",
"events_url": "https://api.github.com/users/enzoxic/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoxic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-12-05T21:58:22
| 2024-12-06T09:04:06
| 2024-12-05T22:07:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/ollama/ollama/tree/main/llama
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7959/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6931
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6931/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6931/comments
|
https://api.github.com/repos/ollama/ollama/issues/6931/events
|
https://github.com/ollama/ollama/pull/6931
| 2,545,165,398
|
PR_kwDOJ0Z1Ps58gacN
| 6,931
|
Added Local Multimodal AI Chat link to README.md
|
{
"login": "Leon-Sander",
"id": 72946124,
"node_id": "MDQ6VXNlcjcyOTQ2MTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/72946124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leon-Sander",
"html_url": "https://github.com/Leon-Sander",
"followers_url": "https://api.github.com/users/Leon-Sander/followers",
"following_url": "https://api.github.com/users/Leon-Sander/following{/other_user}",
"gists_url": "https://api.github.com/users/Leon-Sander/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Leon-Sander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Leon-Sander/subscriptions",
"organizations_url": "https://api.github.com/users/Leon-Sander/orgs",
"repos_url": "https://api.github.com/users/Leon-Sander/repos",
"events_url": "https://api.github.com/users/Leon-Sander/events{/privacy}",
"received_events_url": "https://api.github.com/users/Leon-Sander/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-24T11:44:38
| 2024-11-22T04:39:38
| 2024-11-22T04:39:38
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6931",
"html_url": "https://github.com/ollama/ollama/pull/6931",
"diff_url": "https://github.com/ollama/ollama/pull/6931.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6931.patch",
"merged_at": "2024-11-22T04:39:38"
}
|
Ollama-based LLM Chat with support for multiple features, including PDF RAG, voice chat, image-based interactions, and integration with OpenAI.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6931/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2029
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2029/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2029/comments
|
https://api.github.com/repos/ollama/ollama/issues/2029/events
|
https://github.com/ollama/ollama/issues/2029
| 2,085,882,942
|
I_kwDOJ0Z1Ps58VAw-
| 2,029
|
ggml-cuda.cu: "8792: !" CUDA error
|
{
"login": "hsiehgeorge",
"id": 45024980,
"node_id": "MDQ6VXNlcjQ1MDI0OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/45024980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hsiehgeorge",
"html_url": "https://github.com/hsiehgeorge",
"followers_url": "https://api.github.com/users/hsiehgeorge/followers",
"following_url": "https://api.github.com/users/hsiehgeorge/following{/other_user}",
"gists_url": "https://api.github.com/users/hsiehgeorge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hsiehgeorge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hsiehgeorge/subscriptions",
"organizations_url": "https://api.github.com/users/hsiehgeorge/orgs",
"repos_url": "https://api.github.com/users/hsiehgeorge/repos",
"events_url": "https://api.github.com/users/hsiehgeorge/events{/privacy}",
"received_events_url": "https://api.github.com/users/hsiehgeorge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-01-17T10:28:56
| 2024-03-24T02:13:26
| 2024-03-11T18:56:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
We have a Dell XE8545 server with 4 * A100 GPU cards. When running "ollama run mixtral", it was fine at first, but a few minutes later it halted. I got multiple errors from the log:
1. ggml-cuda.cu: "8792: !" CUDA error
2. ollama.service: State 'stop-sigterm' timed out. Killing.
I tried to kill the ollama process but couldn't (ollama.service: Processes still around after SIGKILL. Ignoring.); the only solution is to reboot, but the same situation happens again.
Please advise how to make it work smoothly.
Thank you.
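Not a fix, but a hedged isolation step: pin the server to a single GPU and see whether the hang still reproduces, to separate multi-GPU splitting from everything else:
```bash
# if one A100 is stable, suspicion shifts to the multi-GPU tensor-split path
CUDA_VISIBLE_DEVICES=0 ollama serve
```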
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2029/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2029/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2690
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2690/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2690/comments
|
https://api.github.com/repos/ollama/ollama/issues/2690/events
|
https://github.com/ollama/ollama/issues/2690
| 2,149,656,857
|
I_kwDOJ0Z1Ps6AISkZ
| 2,690
|
default windows install folder
|
{
"login": "goldelio",
"id": 98236877,
"node_id": "U_kgDOBdr5zQ",
"avatar_url": "https://avatars.githubusercontent.com/u/98236877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goldelio",
"html_url": "https://github.com/goldelio",
"followers_url": "https://api.github.com/users/goldelio/followers",
"following_url": "https://api.github.com/users/goldelio/following{/other_user}",
"gists_url": "https://api.github.com/users/goldelio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goldelio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goldelio/subscriptions",
"organizations_url": "https://api.github.com/users/goldelio/orgs",
"repos_url": "https://api.github.com/users/goldelio/repos",
"events_url": "https://api.github.com/users/goldelio/events{/privacy}",
"received_events_url": "https://api.github.com/users/goldelio/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-22T18:15:09
| 2024-02-22T20:24:08
| 2024-02-22T20:24:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please, it would be nice if we could choose where to install the software. Right now it installs to C:\ on Windows by default, which is not ideal for multiple reasons.
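For anyone landing here later: the Windows installer is Inno Setup based, so newer builds accept the standard `/DIR` switch (hedged, verify against your installer version):
```
OllamaSetup.exe /DIR="D:\Ollama"
```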
|
{
"login": "goldelio",
"id": 98236877,
"node_id": "U_kgDOBdr5zQ",
"avatar_url": "https://avatars.githubusercontent.com/u/98236877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goldelio",
"html_url": "https://github.com/goldelio",
"followers_url": "https://api.github.com/users/goldelio/followers",
"following_url": "https://api.github.com/users/goldelio/following{/other_user}",
"gists_url": "https://api.github.com/users/goldelio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goldelio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goldelio/subscriptions",
"organizations_url": "https://api.github.com/users/goldelio/orgs",
"repos_url": "https://api.github.com/users/goldelio/repos",
"events_url": "https://api.github.com/users/goldelio/events{/privacy}",
"received_events_url": "https://api.github.com/users/goldelio/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2690/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2115
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2115/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2115/comments
|
https://api.github.com/repos/ollama/ollama/issues/2115/events
|
https://github.com/ollama/ollama/pull/2115
| 2,092,259,293
|
PR_kwDOJ0Z1Ps5kor4e
| 2,115
|
Update submodule to `6f9939d119b2d004c264952eb510bd106455531e`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-20T22:19:52
| 2024-01-22T19:56:41
| 2024-01-22T19:56:40
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2115",
"html_url": "https://github.com/ollama/ollama/pull/2115",
"diff_url": "https://github.com/ollama/ollama/pull/2115.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2115.patch",
"merged_at": "2024-01-22T19:56:40"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2115/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7184
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7184/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7184/comments
|
https://api.github.com/repos/ollama/ollama/issues/7184/events
|
https://github.com/ollama/ollama/issues/7184
| 2,582,965,788
|
I_kwDOJ0Z1Ps6Z9O4c
| 7,184
|
create minimal cpu-only smaller docker image
|
{
"login": "ozbillwang",
"id": 8954908,
"node_id": "MDQ6VXNlcjg5NTQ5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8954908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ozbillwang",
"html_url": "https://github.com/ozbillwang",
"followers_url": "https://api.github.com/users/ozbillwang/followers",
"following_url": "https://api.github.com/users/ozbillwang/following{/other_user}",
"gists_url": "https://api.github.com/users/ozbillwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ozbillwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ozbillwang/subscriptions",
"organizations_url": "https://api.github.com/users/ozbillwang/orgs",
"repos_url": "https://api.github.com/users/ozbillwang/repos",
"events_url": "https://api.github.com/users/ozbillwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ozbillwang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 14
| 2024-10-12T12:50:37
| 2024-10-16T02:40:47
| 2024-10-16T02:40:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The image `ollama/ollama` is already 4.87GB, and I plan to run it on my MacBook or on Ubuntu (Linux) without any GPUs.
```
ollama/ollama latest e458178cf2c1 2 weeks ago 4.87GB
```
Are there any ways to reduce the size as much as possible, since I don't care about GPU support, CUDA drivers, etc.?
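A hedged sketch of one way to trim it yourself (assumptions: the released Linux tarball layout, and that you accept pruning the bundled GPU runtimes under `/usr/lib/ollama`, whose directory names vary by release):
```dockerfile
FROM ubuntu:24.04
# fetch the release tarball and unpack bin/ and lib/ into /usr;
# you could additionally prune GPU runner dirs under /usr/lib/ollama
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates curl \
 && curl -fsSL https://ollama.com/download/ollama-linux-amd64.tgz | tar -xzf - -C /usr \
 && rm -rf /var/lib/apt/lists/*
ENV OLLAMA_HOST=0.0.0.0
EXPOSE 11434
ENTRYPOINT ["/usr/bin/ollama"]
CMD ["serve"]
```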
|
{
"login": "ozbillwang",
"id": 8954908,
"node_id": "MDQ6VXNlcjg5NTQ5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8954908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ozbillwang",
"html_url": "https://github.com/ozbillwang",
"followers_url": "https://api.github.com/users/ozbillwang/followers",
"following_url": "https://api.github.com/users/ozbillwang/following{/other_user}",
"gists_url": "https://api.github.com/users/ozbillwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ozbillwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ozbillwang/subscriptions",
"organizations_url": "https://api.github.com/users/ozbillwang/orgs",
"repos_url": "https://api.github.com/users/ozbillwang/repos",
"events_url": "https://api.github.com/users/ozbillwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ozbillwang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7184/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/37
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/37/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/37/comments
|
https://api.github.com/repos/ollama/ollama/issues/37/events
|
https://github.com/ollama/ollama/pull/37
| 1,790,053,259
|
PR_kwDOJ0Z1Ps5UuxHc
| 37
|
upgrade fuzzy search library
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-07-05T18:16:14
| 2023-07-05T20:41:50
| 2023-07-05T19:16:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/37",
"html_url": "https://github.com/ollama/ollama/pull/37",
"diff_url": "https://github.com/ollama/ollama/pull/37.diff",
"patch_url": "https://github.com/ollama/ollama/pull/37.patch",
"merged_at": "2023-07-05T19:16:19"
}
|
fuzzywuzzy was renamed starting with 0.19, so use the renamed package instead.
Use process.extract to produce a list of fuzzy matches instead of the single best hit from process.extractOne.
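For reference, a minimal sketch of the renamed API (assuming the rename target is `thefuzz`, which picked up fuzzywuzzy's versioning at 0.19; the model names are illustrative):
```python
from thefuzz import process  # renamed fuzzywuzzy, >= 0.19

choices = ["llama", "vicuna", "orca", "alpaca"]
# extract() returns a ranked list of (match, score) pairs,
# where extractOne() returned only the single best hit
print(process.extract("lama", choices, limit=3))
```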
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/37/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/37/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1469
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1469/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1469/comments
|
https://api.github.com/repos/ollama/ollama/issues/1469/events
|
https://github.com/ollama/ollama/pull/1469
| 2,036,192,736
|
PR_kwDOJ0Z1Ps5htcUp
| 1,469
|
remove per-model types
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-11T17:40:49
| 2023-12-12T20:27:04
| 2023-12-12T20:27:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1469",
"html_url": "https://github.com/ollama/ollama/pull/1469",
"diff_url": "https://github.com/ollama/ollama/pull/1469.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1469.patch",
"merged_at": "2023-12-12T20:27:03"
}
|
Mostly replaced by decoding tensors, except for ggml models, which only support llama.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1469/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7881
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7881/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7881/comments
|
https://api.github.com/repos/ollama/ollama/issues/7881/events
|
https://github.com/ollama/ollama/issues/7881
| 2,704,483,328
|
I_kwDOJ0Z1Ps6hMyQA
| 7,881
|
OpenAI-compatible API tool calls have no index
|
{
"login": "jackmpcollins",
"id": 6640905,
"node_id": "MDQ6VXNlcjY2NDA5MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6640905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackmpcollins",
"html_url": "https://github.com/jackmpcollins",
"followers_url": "https://api.github.com/users/jackmpcollins/followers",
"following_url": "https://api.github.com/users/jackmpcollins/following{/other_user}",
"gists_url": "https://api.github.com/users/jackmpcollins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackmpcollins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackmpcollins/subscriptions",
"organizations_url": "https://api.github.com/users/jackmpcollins/orgs",
"repos_url": "https://api.github.com/users/jackmpcollins/repos",
"events_url": "https://api.github.com/users/jackmpcollins/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackmpcollins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-11-29T09:25:52
| 2024-12-02T03:50:48
| 2024-11-30T04:00:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The streamed chat-completion response from Ollama's OpenAI-compatible API does not populate the `.choices[].delta.tool_calls[].index` field. This differs from OpenAI's API, where the field is populated on all tool-call chunks and enumerates the tool calls. The omission breaks compatibility with the `client.beta.chat.completions.stream` helper from the openai package, and with https://github.com/pydantic/logfire, which uses the same underlying code from openai. Please add the index field to the tool calls to match OpenAI.
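Until that is fixed, a hedged client-side workaround (only applicable when iterating the raw stream yourself, and assuming each chunk lists its tool calls in order) is to backfill the index before accumulating:
```python
for chunk in response:
    for choice in chunk.choices:
        for i, tool_call in enumerate(choice.delta.tool_calls or []):
            if tool_call.index is None:
                tool_call.index = i  # restore the enumeration OpenAI would provide
```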
---
OpenAI chunks: tool call `index` is present starting at 0
```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is the weather like in Boston?"}],
    stream=True,
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        },
    ],
)
for chunk in response:
    print(chunk.model_dump_json(exclude_none=True))
```
```
{"id":"chatcmpl-AYrJepetKGmoEh6pqPHCwPydZRPI3","choices":[{"delta":{"role":"assistant","tool_calls":[{"index":0,"id":"call_2dlDZrcl0VQMiDtzJzwjIN5j","function":{"arguments":"","name":"get_current_weather"},"type":"function"}]},"index":0}],"created":1732871462,"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk","system_fingerprint":"fp_7f6be3efb0"}
{"id":"chatcmpl-AYrJepetKGmoEh6pqPHCwPydZRPI3","choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":"{\""}}]},"index":0}],"created":1732871462,"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk","system_fingerprint":"fp_7f6be3efb0"}
{"id":"chatcmpl-AYrJepetKGmoEh6pqPHCwPydZRPI3","choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":"location"}}]},"index":0}],"created":1732871462,"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk","system_fingerprint":"fp_7f6be3efb0"}
{"id":"chatcmpl-AYrJepetKGmoEh6pqPHCwPydZRPI3","choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":"\":\""}}]},"index":0}],"created":1732871462,"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk","system_fingerprint":"fp_7f6be3efb0"}
{"id":"chatcmpl-AYrJepetKGmoEh6pqPHCwPydZRPI3","choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":"Boston"}}]},"index":0}],"created":1732871462,"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk","system_fingerprint":"fp_7f6be3efb0"}
{"id":"chatcmpl-AYrJepetKGmoEh6pqPHCwPydZRPI3","choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":","}}]},"index":0}],"created":1732871462,"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk","system_fingerprint":"fp_7f6be3efb0"}
{"id":"chatcmpl-AYrJepetKGmoEh6pqPHCwPydZRPI3","choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":" MA"}}]},"index":0}],"created":1732871462,"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk","system_fingerprint":"fp_7f6be3efb0"}
{"id":"chatcmpl-AYrJepetKGmoEh6pqPHCwPydZRPI3","choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":"\"}"}}]},"index":0}],"created":1732871462,"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk","system_fingerprint":"fp_7f6be3efb0"}
{"id":"chatcmpl-AYrJepetKGmoEh6pqPHCwPydZRPI3","choices":[{"delta":{},"finish_reason":"tool_calls","index":0}],"created":1732871462,"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk","system_fingerprint":"fp_7f6be3efb0"}
```
Ollama chunks: tool call `index` is not present
```python
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:11434/v1",
api_key="ollama",
)
response = client.chat.completions.create(
model="llama3.1",
# model="gpt-4o",
messages=[{"role": "user", "content": "What is the weather like in Boston?"}],
stream=True,
# stream_options={"include_usage": True},
tools=[
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location"],
},
},
},
],
)
for chunk in response:
print(chunk.model_dump_json(exclude_none=True))
{"id":"chatcmpl-914","choices":[{"delta":{"content":"","role":"assistant","tool_calls":[{"id":"call_rn5g1z57","function":{"arguments":"{\"location\":\"Boston, MA\",\"unit\":\"fahrenheit\"}","name":"get_current_weather"},"type":"function"}]},"index":0}],"created":1732871553,"model":"llama3.1","object":"chat.completion.chunk","system_fingerprint":"fp_ollama"}
{"id":"chatcmpl-914","choices":[{"delta":{"content":"","role":"assistant"},"finish_reason":"stop","index":0}],"created":1732871553,"model":"llama3.1","object":"chat.completion.chunk","system_fingerprint":"fp_ollama"}
```
Using `client.beta.chat.completions.stream` with Ollama results in an exception due to a `None` value for the tool call index.
openai docs for this helper: https://github.com/openai/openai-python/blob/646a579cdb305a9d3fba6c5f9a96011c5e2c2882/helpers.md#chat-completions-api
```python
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:11434/v1",
api_key="ollama",
)
with client.beta.chat.completions.stream(
model="llama3.1",
messages=[{"role": "user", "content": "What is the weather like in Boston?"}],
tools=[
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location"],
},
},
},
],
) as stream:
for event in stream:
pass
print(stream.get_final_completion().model_dump_json(indent=2))
...
openai/lib/streaming/chat/_completions.py:505, in ChatCompletionStreamState._build_events(self, chunk, completion_snapshot)
502 assert tool_calls is not None
504 for tool_call_delta in choice.delta.tool_calls:
--> 505 tool_call = tool_calls[tool_call_delta.index]
507 if tool_call.type == "function":
508 assert tool_call_delta.function is not None
TypeError: list indices must be integers or slices, not NoneType
```
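Until the server populates `index`, one possible client-side workaround (a sketch, assuming each Ollama chunk carries a complete tool call, as in the output above) is to accumulate tool calls manually instead of using the beta stream helper, defaulting a missing index to the running count:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}

response = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "What is the weather like in Boston?"}],
    stream=True,
    tools=[weather_tool],
)

tool_calls: list[dict] = []
for chunk in response:
    if not chunk.choices:
        continue
    for tc in chunk.choices[0].delta.tool_calls or []:
        # Fall back to the running count when the server omits `index`.
        idx = tc.index if tc.index is not None else len(tool_calls)
        if idx >= len(tool_calls):
            tool_calls.append({"name": "", "arguments": ""})
        if tc.function:
            if tc.function.name:
                tool_calls[idx]["name"] = tc.function.name
            if tc.function.arguments:
                # OpenAI streams arguments as fragments; concatenate per call.
                tool_calls[idx]["arguments"] += tc.function.arguments

print(tool_calls)
```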
### OS
macOS
### GPU
_No response_
### CPU
Apple
### Ollama version
0.4.6
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7881/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7881/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1728
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1728/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1728/comments
|
https://api.github.com/repos/ollama/ollama/issues/1728/events
|
https://github.com/ollama/ollama/issues/1728
| 2,057,323,728
|
I_kwDOJ0Z1Ps56oETQ
| 1,728
|
Streaming multiple json objects at the same time
|
{
"login": "pepperoni21",
"id": 29759371,
"node_id": "MDQ6VXNlcjI5NzU5Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/29759371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepperoni21",
"html_url": "https://github.com/pepperoni21",
"followers_url": "https://api.github.com/users/pepperoni21/followers",
"following_url": "https://api.github.com/users/pepperoni21/following{/other_user}",
"gists_url": "https://api.github.com/users/pepperoni21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pepperoni21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pepperoni21/subscriptions",
"organizations_url": "https://api.github.com/users/pepperoni21/orgs",
"repos_url": "https://api.github.com/users/pepperoni21/repos",
"events_url": "https://api.github.com/users/pepperoni21/events{/privacy}",
"received_events_url": "https://api.github.com/users/pepperoni21/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-12-27T12:35:50
| 2023-12-27T15:32:34
| 2023-12-27T15:32:33
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It seems that Ollama sometimes streams multiple JSON objects back to back in the same response chunk, and such a chunk cannot be deserialized as a single JSON document.
Here's an example of one streamed chunk from the /generate endpoint:
```json
{"model":"dolphin-mixtral:latest","created_at":"2023-12-25T01:12:45.58944567Z","response":" you","done":false}\n
{"model":"dolphin-mixtral:latest","created_at":"2023-12-25T01:12:45.607384298Z","response":" today","done":false}\n
{"model":"dolphin-mixtral:latest","created_at":"2023-12-25T01:12:45.625372937Z","response":"?","done":false}\n
{"model":"dolphin-mixtral:latest","created_at":"2023-12-25T01:12:45.643531751Z","response":"","done":true,"context":[32001,6574,13,24205,574,8570,6817,28723,32000,13,32001,1838,13,21558,28801,13,32000,13,32001,489,11143,13,22557,28808,1602,541,315,6031,368,3154,28804],"total_duration":376468647,"load_duration":758387,"prompt_eval_count":23,"prompt_eval_duration":226302000,"eval_count":9,"eval_duration":147877000}
```
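The stream is newline-delimited JSON (note the `\n` after each object above), so a client should split on newlines and parse each line as its own JSON object rather than deserializing an entire read at once. A minimal Python sketch of that approach (the requests usage and model name are illustrative, not part of Ollama's client library):
```python
import json

import requests

# Stream /api/generate and parse the body as NDJSON: one JSON object per line.
with requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "dolphin-mixtral", "prompt": "Hello!"},
    stream=True,
) as resp:
    for line in resp.iter_lines():
        if not line:  # skip blank keep-alive lines
            continue
        obj = json.loads(line)
        print(obj.get("response", ""), end="", flush=True)
        if obj.get("done"):
            break
```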
|
{
"login": "pepperoni21",
"id": 29759371,
"node_id": "MDQ6VXNlcjI5NzU5Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/29759371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepperoni21",
"html_url": "https://github.com/pepperoni21",
"followers_url": "https://api.github.com/users/pepperoni21/followers",
"following_url": "https://api.github.com/users/pepperoni21/following{/other_user}",
"gists_url": "https://api.github.com/users/pepperoni21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pepperoni21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pepperoni21/subscriptions",
"organizations_url": "https://api.github.com/users/pepperoni21/orgs",
"repos_url": "https://api.github.com/users/pepperoni21/repos",
"events_url": "https://api.github.com/users/pepperoni21/events{/privacy}",
"received_events_url": "https://api.github.com/users/pepperoni21/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1728/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/347
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/347/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/347/comments
|
https://api.github.com/repos/ollama/ollama/issues/347/events
|
https://github.com/ollama/ollama/issues/347
| 1,850,569,328
|
I_kwDOJ0Z1Ps5uTXJw
| 347
|
Support for GPT-NeoX GGML models - e.g. Stablecode
|
{
"login": "njarecki",
"id": 94956985,
"node_id": "U_kgDOBajtuQ",
"avatar_url": "https://avatars.githubusercontent.com/u/94956985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njarecki",
"html_url": "https://github.com/njarecki",
"followers_url": "https://api.github.com/users/njarecki/followers",
"following_url": "https://api.github.com/users/njarecki/following{/other_user}",
"gists_url": "https://api.github.com/users/njarecki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/njarecki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njarecki/subscriptions",
"organizations_url": "https://api.github.com/users/njarecki/orgs",
"repos_url": "https://api.github.com/users/njarecki/repos",
"events_url": "https://api.github.com/users/njarecki/events{/privacy}",
"received_events_url": "https://api.github.com/users/njarecki/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 7
| 2023-08-14T21:29:24
| 2024-02-20T00:52:41
| 2024-02-20T00:52:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/347/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2248
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2248/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2248/comments
|
https://api.github.com/repos/ollama/ollama/issues/2248/events
|
https://github.com/ollama/ollama/pull/2248
| 2,104,522,311
|
PR_kwDOJ0Z1Ps5lRhzi
| 2,248
|
Add requirements
|
{
"login": "Yuan-ManX",
"id": 68322456,
"node_id": "MDQ6VXNlcjY4MzIyNDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/68322456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yuan-ManX",
"html_url": "https://github.com/Yuan-ManX",
"followers_url": "https://api.github.com/users/Yuan-ManX/followers",
"following_url": "https://api.github.com/users/Yuan-ManX/following{/other_user}",
"gists_url": "https://api.github.com/users/Yuan-ManX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yuan-ManX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yuan-ManX/subscriptions",
"organizations_url": "https://api.github.com/users/Yuan-ManX/orgs",
"repos_url": "https://api.github.com/users/Yuan-ManX/repos",
"events_url": "https://api.github.com/users/Yuan-ManX/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yuan-ManX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-29T02:37:07
| 2024-11-21T08:57:22
| 2024-11-21T08:57:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2248",
"html_url": "https://github.com/ollama/ollama/pull/2248",
"diff_url": "https://github.com/ollama/ollama/pull/2248.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2248.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2248/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4805
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4805/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4805/comments
|
https://api.github.com/repos/ollama/ollama/issues/4805/events
|
https://github.com/ollama/ollama/issues/4805
| 2,332,545,712
|
I_kwDOJ0Z1Ps6LB9Kw
| 4,805
|
can not serve VL models
|
{
"login": "techResearcher2021",
"id": 90097102,
"node_id": "MDQ6VXNlcjkwMDk3MTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/90097102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/techResearcher2021",
"html_url": "https://github.com/techResearcher2021",
"followers_url": "https://api.github.com/users/techResearcher2021/followers",
"following_url": "https://api.github.com/users/techResearcher2021/following{/other_user}",
"gists_url": "https://api.github.com/users/techResearcher2021/gists{/gist_id}",
"starred_url": "https://api.github.com/users/techResearcher2021/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/techResearcher2021/subscriptions",
"organizations_url": "https://api.github.com/users/techResearcher2021/orgs",
"repos_url": "https://api.github.com/users/techResearcher2021/repos",
"events_url": "https://api.github.com/users/techResearcher2021/events{/privacy}",
"received_events_url": "https://api.github.com/users/techResearcher2021/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-04T04:53:22
| 2024-06-09T17:12:55
| 2024-06-09T17:12:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I serve VL (vision-language) models, they do not work correctly.
Here I tried MiniCPM-Llama3-V-2.5, converted to GGUF format following the instructions from the official repository: https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md.
I then access the service through open-webui.
The running log is shown below:
```
{"log":"2024/06/04 04:30:04 routes.go:1007: INFO server config env=\"map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:3 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]\"\n","stream":"stderr","time":"2024-06-04T04:30:04.87310074Z"}
{"log":"time=2024-06-04T04:30:04.874Z level=INFO source=images.go:729 msg=\"total blobs: 17\"\n","stream":"stderr","time":"2024-06-04T04:30:04.874631858Z"}
{"log":"time=2024-06-04T04:30:04.876Z level=INFO source=images.go:736 msg=\"total unused blobs removed: 0\"\n","stream":"stderr","time":"2024-06-04T04:30:04.876140538Z"}
{"log":"time=2024-06-04T04:30:04.877Z level=INFO source=routes.go:1053 msg=\"Listening on [::]:11434 (version 0.1.41)\"\n","stream":"stderr","time":"2024-06-04T04:30:04.87753063Z"}
{"log":"time=2024-06-04T04:30:04.877Z level=INFO source=payload.go:30 msg=\"extracting embedded files\" dir=/tmp/ollama1378338659/runners\n","stream":"stderr","time":"2024-06-04T04:30:04.878027708Z"}
{"log":"time=2024-06-04T04:30:08.540Z level=INFO source=payload.go:44 msg=\"Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]\"\n","stream":"stderr","time":"2024-06-04T04:30:08.540929839Z"}
{"log":"time=2024-06-04T04:30:08.772Z level=INFO source=types.go:71 msg=\"inference compute\" id=GPU-70127701-8921-747f-9194-ce6a8699d820 library=cuda compute=8.9 driver=12.4 name=\"NVIDIA GeForce RTX 4090\" total=\"23.6 GiB\" available=\"21.6 GiB\"\n","stream":"stderr","time":"2024-06-04T04:30:08.772983612Z"}
{"log":"time=2024-06-04T04:30:08.772Z level=INFO source=types.go:71 msg=\"inference compute\" id=GPU-61837e28-1bfe-a560-ddd2-0a14a55cf642 library=cuda compute=8.9 driver=12.4 name=\"NVIDIA GeForce RTX 4090\" total=\"23.6 GiB\" available=\"22.7 GiB\"\n","stream":"stderr","time":"2024-06-04T04:30:08.773014492Z"}
{"log":"time=2024-06-04T04:31:24.775Z level=INFO source=memory.go:133 msg=\"offload to gpu\" layers.requested=-1 layers.real=33 memory.available=\"22.7 GiB\" memory.required.full=\"18.9 GiB\" memory.required.partial=\"18.9 GiB\" memory.required.kv=\"2.0 GiB\" memory.weights.total=\"14.0 GiB\" memory.weights.repeating=\"13.0 GiB\" memory.weights.nonrepeating=\"1002.0 MiB\" memory.graph.full=\"1.1 GiB\" memory.graph.partial=\"1.1 GiB\"\n","stream":"stderr","time":"2024-06-04T04:31:24.776105102Z"}
{"log":"time=2024-06-04T04:31:24.782Z level=INFO source=memory.go:133 msg=\"offload to gpu\" layers.requested=-1 layers.real=33 memory.available=\"22.7 GiB\" memory.required.full=\"18.9 GiB\" memory.required.partial=\"18.9 GiB\" memory.required.kv=\"2.0 GiB\" memory.weights.total=\"14.0 GiB\" memory.weights.repeating=\"13.0 GiB\" memory.weights.nonrepeating=\"1002.0 MiB\" memory.graph.full=\"1.1 GiB\" memory.graph.partial=\"1.1 GiB\"\n","stream":"stderr","time":"2024-06-04T04:31:24.782196984Z"}
{"log":"time=2024-06-04T04:31:24.782Z level=WARN source=server.go:230 msg=\"multimodal models don't support parallel requests yet\"\n","stream":"stderr","time":"2024-06-04T04:31:24.782266894Z"}
{"log":"time=2024-06-04T04:31:24.783Z level=INFO source=server.go:341 msg=\"starting llama server\" cmd=\"/tmp/ollama1378338659/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a7a6ce348ebc060ceb8aa973f3b0bad5d3007b7ced23228c0e1aeba59c1fb72f --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --mmproj /root/.ollama/models/blobs/sha256-64fdb7da947f450c745dae303caae7e186d84531cf4acdcddb791fb4503535b6 --flash-attn --parallel 1 --port 46139\"\n","stream":"stderr","time":"2024-06-04T04:31:24.783576905Z"}
{"log":"time=2024-06-04T04:31:24.784Z level=INFO source=sched.go:338 msg=\"loaded runners\" count=1\n","stream":"stderr","time":"2024-06-04T04:31:24.784619881Z"}
{"log":"time=2024-06-04T04:31:24.784Z level=INFO source=server.go:529 msg=\"waiting for llama runner to start responding\"\n","stream":"stderr","time":"2024-06-04T04:31:24.784803776Z"}
{"log":"time=2024-06-04T04:31:24.785Z level=INFO source=server.go:567 msg=\"waiting for server to become available\" status=\"llm server error\"\n","stream":"stderr","time":"2024-06-04T04:31:24.785306271Z"}
{"log":"INFO [main] build info | build=1 commit=\"5921b8f\" tid=\"139656081428480\" timestamp=1717475484\n","stream":"stdout","time":"2024-06-04T04:31:24.804439262Z"}
{"log":"INFO [main] system info | n_threads=40 n_threads_batch=-1 system_info=\"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | \" tid=\"139656081428480\" timestamp=1717475484 total_threads=80\n","stream":"stdout","time":"2024-06-04T04:31:24.804461692Z"}
{"log":"INFO [main] HTTP server listening | hostname=\"127.0.0.1\" n_threads_http=\"79\" port=\"46139\" tid=\"139656081428480\" timestamp=1717475484\n","stream":"stdout","time":"2024-06-04T04:31:24.804468913Z"}
{"log":"ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes\n","stream":"stderr","time":"2024-06-04T04:31:24.817467652Z"}
{"log":"ggml_cuda_init: CUDA_USE_TENSOR_CORES: no\n","stream":"stderr","time":"2024-06-04T04:31:24.817475275Z"}
{"log":"ggml_cuda_init: found 1 CUDA devices:\n","stream":"stderr","time":"2024-06-04T04:31:24.817478125Z"}
{"log":" Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes\n","stream":"stderr","time":"2024-06-04T04:31:24.817480583Z"}
{"log":"GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/examples/llava/clip.cpp:1024: new_clip-\u003ehas_llava_projector\n","stream":"stderr","time":"2024-06-04T04:31:24.817483029Z"}
{"log":"time=2024-06-04T04:31:25.036Z level=INFO source=server.go:567 msg=\"waiting for server to become available\" status=\"llm server loading model\"\n","stream":"stderr","time":"2024-06-04T04:31:25.036981546Z"}
{"log":"time=2024-06-04T04:31:26.492Z level=INFO source=server.go:567 msg=\"waiting for server to become available\" status=\"llm server not responding\"\n","stream":"stderr","time":"2024-06-04T04:31:26.492654151Z"}
{"log":"time=2024-06-04T04:31:27.230Z level=INFO source=server.go:567 msg=\"waiting for server to become available\" status=\"llm server error\"\n","stream":"stderr","time":"2024-06-04T04:31:27.2308139Z"}
{"log":"time=2024-06-04T04:31:27.481Z level=ERROR source=sched.go:344 msg=\"error loading llama server\" error=\"llama runner process has terminated: signal: aborted (core dumped) \"\n","stream":"stderr","time":"2024-06-04T04:31:27.481522057Z"}
{"log":"[GIN] 2024/06/04 - 04:31:27 | 500 | 4.771330699s | 172.17.0.1 | POST \"/api/chat\"\n","stream":"stdout","time":"2024-06-04T04:31:27.481601072Z"}
{"log":"time=2024-06-04T04:31:31.627Z level=INFO source=memory.go:133 msg=\"offload to gpu\" layers.requested=-1 layers.real=33 memory.available=\"22.7 GiB\" memory.required.full=\"16.6 GiB\" memory.required.partial=\"16.6 GiB\" memory.required.kv=\"512.0 MiB\" memory.weights.total=\"14.0 GiB\" memory.weights.repeating=\"13.0 GiB\" memory.weights.nonrepeating=\"1002.0 MiB\" memory.graph.full=\"296.0 MiB\" memory.graph.partial=\"677.5 MiB\"\n","stream":"stderr","time":"2024-06-04T04:31:31.627507969Z"}
{"log":"time=2024-06-04T04:31:31.633Z level=INFO source=memory.go:133 msg=\"offload to gpu\" layers.requested=-1 layers.real=33 memory.available=\"22.7 GiB\" memory.required.full=\"16.6 GiB\" memory.required.partial=\"16.6 GiB\" memory.required.kv=\"512.0 MiB\" memory.weights.total=\"14.0 GiB\" memory.weights.repeating=\"13.0 GiB\" memory.weights.nonrepeating=\"1002.0 MiB\" memory.graph.full=\"296.0 MiB\" memory.graph.partial=\"677.5 MiB\"\n","stream":"stderr","time":"2024-06-04T04:31:31.633800635Z"}
{"log":"time=2024-06-04T04:31:31.633Z level=WARN source=server.go:230 msg=\"multimodal models don't support parallel requests yet\"\n","stream":"stderr","time":"2024-06-04T04:31:31.633876464Z"}
{"log":"time=2024-06-04T04:31:31.634Z level=INFO source=server.go:341 msg=\"starting llama server\" cmd=\"/tmp/ollama1378338659/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a7a6ce348ebc060ceb8aa973f3b0bad5d3007b7ced23228c0e1aeba59c1fb72f --ctx-size 4096 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --mmproj /root/.ollama/models/blobs/sha256-64fdb7da947f450c745dae303caae7e186d84531cf4acdcddb791fb4503535b6 --flash-attn --parallel 1 --port 40265\"\n","stream":"stderr","time":"2024-06-04T04:31:31.634136257Z"}
{"log":"time=2024-06-04T04:31:31.634Z level=INFO source=sched.go:338 msg=\"loaded runners\" count=1\n","stream":"stderr","time":"2024-06-04T04:31:31.63453042Z"}
{"log":"time=2024-06-04T04:31:31.634Z level=INFO source=server.go:529 msg=\"waiting for llama runner to start responding\"\n","stream":"stderr","time":"2024-06-04T04:31:31.634547574Z"}
{"log":"time=2024-06-04T04:31:31.634Z level=INFO source=server.go:567 msg=\"waiting for server to become available\" status=\"llm server error\"\n","stream":"stderr","time":"2024-06-04T04:31:31.634736673Z"}
{"log":"INFO [main] build info | build=1 commit=\"5921b8f\" tid=\"140615737364480\" timestamp=1717475491\n","stream":"stdout","time":"2024-06-04T04:31:31.652889203Z"}
{"log":"INFO [main] system info | n_threads=40 n_threads_batch=-1 system_info=\"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | \" tid=\"140615737364480\" timestamp=1717475491 total_threads=80\n","stream":"stdout","time":"2024-06-04T04:31:31.652907647Z"}
{"log":"INFO [main] HTTP server listening | hostname=\"127.0.0.1\" n_threads_http=\"79\" port=\"40265\" tid=\"140615737364480\" timestamp=1717475491\n","stream":"stdout","time":"2024-06-04T04:31:31.652991725Z"}
{"log":"ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes\n","stream":"stderr","time":"2024-06-04T04:31:31.66543453Z"}
{"log":"ggml_cuda_init: CUDA_USE_TENSOR_CORES: no\n","stream":"stderr","time":"2024-06-04T04:31:31.66545219Z"}
{"log":"ggml_cuda_init: found 1 CUDA devices:\n","stream":"stderr","time":"2024-06-04T04:31:31.665455804Z"}
{"log":" Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes\n","stream":"stderr","time":"2024-06-04T04:31:31.666761164Z"}
{"log":"GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/examples/llava/clip.cpp:1024: new_clip-\u003ehas_llava_projector\n","stream":"stderr","time":"2024-06-04T04:31:31.666775789Z"}
{"log":"time=2024-06-04T04:31:31.887Z level=INFO source=server.go:567 msg=\"waiting for server to become available\" status=\"llm server loading model\"\n","stream":"stderr","time":"2024-06-04T04:31:31.887765132Z"}
{"log":"time=2024-06-04T04:31:32.525Z level=WARN source=sched.go:512 msg=\"gpu VRAM usage didn't recover within timeout\" seconds=5.043738778\n","stream":"stderr","time":"2024-06-04T04:31:32.525213094Z"}
{"log":"time=2024-06-04T04:31:32.739Z level=WARN source=sched.go:512 msg=\"gpu VRAM usage didn't recover within timeout\" seconds=5.257773327\n","stream":"stderr","time":"2024-06-04T04:31:32.739367248Z"}
{"log":"time=2024-06-04T04:31:32.990Z level=WARN source=sched.go:512 msg=\"gpu VRAM usage didn't recover within timeout\" seconds=5.50864897\n","stream":"stderr","time":"2024-06-04T04:31:32.990175753Z"}
{"log":"time=2024-06-04T04:31:33.092Z level=INFO source=server.go:567 msg=\"waiting for server to become available\" status=\"llm server not responding\"\n","stream":"stderr","time":"2024-06-04T04:31:33.092506363Z"}
{"log":"time=2024-06-04T04:31:33.875Z level=INFO source=server.go:567 msg=\"waiting for server to become available\" status=\"llm server error\"\n","stream":"stderr","time":"2024-06-04T04:31:33.875501116Z"}
{"log":"time=2024-06-04T04:31:34.126Z level=ERROR source=sched.go:344 msg=\"error loading llama server\" error=\"llama runner process has terminated: signal: aborted (core dumped) \"\n","stream":"stderr","time":"2024-06-04T04:31:34.126784317Z"}
{"log":"[GIN] 2024/06/04 - 04:31:34 | 500 | 4.781078255s | 172.17.0.1 | POST \"/v1/chat/completions\"\n","stream":"stdout","time":"2024-06-04T04:31:34.127032625Z"}
{"log":"time=2024-06-04T04:31:39.349Z level=WARN source=sched.go:512 msg=\"gpu VRAM usage didn't recover within timeout\" seconds=5.223073684\n","stream":"stderr","time":"2024-06-04T04:31:39.350100964Z"}
{"log":"time=2024-06-04T04:31:39.600Z level=WARN source=sched.go:512 msg=\"gpu VRAM usage didn't recover within timeout\" seconds=5.47392563\n","stream":"stderr","time":"2024-06-04T04:31:39.60084398Z"}
{"log":"time=2024-06-04T04:31:39.850Z level=WARN source=sched.go:512 msg=\"gpu VRAM usage didn't recover within timeout\" seconds=5.723634591\n","stream":"stderr","time":"2024-06-04T04:31:39.850455842Z"}
```
### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.41
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4805/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4360
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4360/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4360/comments
|
https://api.github.com/repos/ollama/ollama/issues/4360/events
|
https://github.com/ollama/ollama/issues/4360
| 2,290,889,479
|
I_kwDOJ0Z1Ps6IjDMH
| 4,360
|
bge-reranker-v2-m3、mxbai-rerank-large-v1 and other rerank models
|
{
"login": "Feng-YiJing-Dao",
"id": 18107069,
"node_id": "MDQ6VXNlcjE4MTA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/18107069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Feng-YiJing-Dao",
"html_url": "https://github.com/Feng-YiJing-Dao",
"followers_url": "https://api.github.com/users/Feng-YiJing-Dao/followers",
"following_url": "https://api.github.com/users/Feng-YiJing-Dao/following{/other_user}",
"gists_url": "https://api.github.com/users/Feng-YiJing-Dao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Feng-YiJing-Dao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Feng-YiJing-Dao/subscriptions",
"organizations_url": "https://api.github.com/users/Feng-YiJing-Dao/orgs",
"repos_url": "https://api.github.com/users/Feng-YiJing-Dao/repos",
"events_url": "https://api.github.com/users/Feng-YiJing-Dao/events{/privacy}",
"received_events_url": "https://api.github.com/users/Feng-YiJing-Dao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 17
| 2024-05-11T13:02:06
| 2024-09-19T03:18:01
| 2024-09-02T20:57:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Rerank models cannot be converted to an Ollama-supported format through llama.cpp, but for RAG I would like to run a rerank model to improve recall accuracy.
I tried to use bge-reranker-v2-m3 and mxbai-rerank-large-v1 in model.safetensors format:


|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4360/reactions",
"total_count": 27,
"+1": 27,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4360/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/647
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/647/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/647/comments
|
https://api.github.com/repos/ollama/ollama/issues/647/events
|
https://github.com/ollama/ollama/issues/647
| 1,919,515,498
|
I_kwDOJ0Z1Ps5yaXtq
| 647
|
Read-only filesystem support
|
{
"login": "swthorn",
"id": 62764299,
"node_id": "MDQ6VXNlcjYyNzY0Mjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/62764299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swthorn",
"html_url": "https://github.com/swthorn",
"followers_url": "https://api.github.com/users/swthorn/followers",
"following_url": "https://api.github.com/users/swthorn/following{/other_user}",
"gists_url": "https://api.github.com/users/swthorn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swthorn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swthorn/subscriptions",
"organizations_url": "https://api.github.com/users/swthorn/orgs",
"repos_url": "https://api.github.com/users/swthorn/repos",
"events_url": "https://api.github.com/users/swthorn/events{/privacy}",
"received_events_url": "https://api.github.com/users/swthorn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2023-09-29T15:41:25
| 2024-01-16T22:17:01
| 2024-01-16T22:17:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I ran the installation command described in the README:
`curl https://ollama.ai/install.sh | sh`
on NixOS.
However, the binaries and systemd service are not installed correctly. Is it possible to install this on a read-only file system? Or, can we install this in a local directory rather than /usr/bin?
This is the entire output:
```
❯ curl https://ollama.ai/install.sh | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7391 0 7391 0 0 28304 0 --:--:-- --:--:-- --:--:-- 28209
>>> Downloading ollama...
######################################################################## 100.0%##O=# #
>>> Installing ollama to /usr/bin...
[sudo] password for swthorn:
>>> Creating ollama user...
useradd: Warning: missing or non-executable shell '/bin/false'
>>> Creating ollama systemd service...
tee: /etc/systemd/system/ollama.service: Read-only file system
>>> Install complete. Run "ollama" from the command line.
❯ ollama
ollama: command not found
❯ zsh
❯ ollama
ollama: command not found
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/647/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/4745
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4745/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4745/comments
|
https://api.github.com/repos/ollama/ollama/issues/4745/events
|
https://github.com/ollama/ollama/issues/4745
| 2,327,113,895
|
I_kwDOJ0Z1Ps6KtPCn
| 4,745
|
CMake Error at CMakeLists.txt:2 (project): Generator System.Management.Automation.RemoteException Ninja System.Management.Automation.RemoteException does not support platform specification, but platform
|
{
"login": "chaoqunxie",
"id": 44899524,
"node_id": "MDQ6VXNlcjQ0ODk5NTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/44899524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chaoqunxie",
"html_url": "https://github.com/chaoqunxie",
"followers_url": "https://api.github.com/users/chaoqunxie/followers",
"following_url": "https://api.github.com/users/chaoqunxie/following{/other_user}",
"gists_url": "https://api.github.com/users/chaoqunxie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chaoqunxie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chaoqunxie/subscriptions",
"organizations_url": "https://api.github.com/users/chaoqunxie/orgs",
"repos_url": "https://api.github.com/users/chaoqunxie/repos",
"events_url": "https://api.github.com/users/chaoqunxie/events{/privacy}",
"received_events_url": "https://api.github.com/users/chaoqunxie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-05-31T07:07:46
| 2024-10-23T21:33:12
| 2024-10-23T21:33:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
Your branch is up to date with 'origin/minicpm-v2.5'.
Already on 'minicpm-v2.5'
Submodule path '../llama.cpp': checked out 'd8974b8ea61e1268a4cad27f4f6e2cde3c5d1370'
Checking for MinGW...
CommandType Name Version Source
----------- ---- ------- ------
Application gcc.exe 0.0.0.0 C:\soft\develop\msys2\mingw64\bin\gcc.exe
Application mingw32-make.exe 0.0.0.0 C:\soft\develop\msys2\mingw64\bin\mingw32-make.exe
Building static library
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64_static -G MinGW Makefiles -DCMAKE_C_COMPILER=gcc.exe -DCMAKE_CXX_COMPILER=g++.exe -DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_F16C=off -DLLAMA_FMA=off
cmake version 3.29.2
CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- x86 detected
-- Configuring done (0.8s)
-- Generating done (3.1s)
-- Build files have been written to: D:/project/my/ollama/llm/build/windows/amd64_static
building with: cmake --build ../build/windows/amd64_static --config RelWithDebInfo --target llama --target ggml
[ 16%] Building C object CMakeFiles/ggml.dir/ggml.c.obj
D:\project\my\ollama\llm\llama.cpp\ggml.c: In function 'ggml_vec_mad_f16':
D:\project\my\ollama\llm\llama.cpp\ggml.c:2040:45: warning: passing argument 1 of '__sse_f16x4_load' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
2040 | ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j);
| ^
D:\project\my\ollama\llm\llama.cpp\ggml.c:1501:50: note: in definition of macro 'GGML_F32Cx4_LOAD'
1501 | #define GGML_F32Cx4_LOAD(x) __sse_f16x4_load(x)
| ^
D:\project\my\ollama\llm\llama.cpp\ggml.c:2040:21: note: in expansion of macro 'GGML_F16_VEC_LOAD'
2040 | ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j);
| ^~~~~~~~~~~~~~~~~
D:\project\my\ollama\llm\llama.cpp\ggml.c:1476:52: note: expected 'ggml_fp16_t *' {aka 'short unsigned int *'} but argument is of type 'const ggml_fp16_t *' {aka 'const short unsigned int *'}
1476 | static inline __m128 __sse_f16x4_load(ggml_fp16_t *x) {
| ~~~~~~~~~~~~~^
[ 16%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.obj
[ 33%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.obj
[ 50%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.obj
[ 50%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.obj
[ 50%] Built target ggml
[ 66%] Building CXX object CMakeFiles/llama.dir/llama.cpp.obj
D:\project\my\ollama\llm\llama.cpp\llama.cpp: In constructor 'llama_mmap::llama_mmap(llama_file*, size_t, bool)':
D:\project\my\ollama\llm\llama.cpp\llama.cpp:1428:38: warning: cast between incompatible function types from 'FARPROC' {aka 'long long int (*)()'} to 'BOOL (*)(HANDLE, ULONG_PTR, PWIN32_MEMORY_RANGE_ENTRY, ULONG)' {aka 'int (*)(void*, long long unsigned int, _WIN32_MEMORY_RANGE_ENTRY*, long unsigned int)'} [-Wcast-function-type]
1428 | pPrefetchVirtualMemory = reinterpret_cast<decltype(pPrefetchVirtualMemory)> (GetProcAddress(hKernel32, "PrefetchVirtualMemory"));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
D:\project\my\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_logits_ith(llama_context*, int32_t)':
D:\project\my\ollama\llm\llama.cpp\llama.cpp:17331:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector<int>::size_type' {aka 'long long unsigned int'} [-Wformat=]
17331 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector<int>::size_type {aka long long unsigned int}
| %llu
D:\project\my\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_embeddings_ith(llama_context*, int32_t)':
D:\project\my\ollama\llm\llama.cpp\llama.cpp:17376:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector<int>::size_type' {aka 'long long unsigned int'} [-Wformat=]
17376 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector<int>::size_type {aka long long unsigned int}
| %llu
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.obj
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.obj
[100%] Linking CXX static library libllama.a
[100%] Built target llama
[100%] Built target ggml
Building LCD CPU
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -A x64 -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -DCMAKE_VERBOSE_MAKEFILE=on -DLLAMA_SERVER_VERBOSE=on -DCMAKE_BUILD_TYPE=RelWithDebInfo
cmake version 3.29.2
CMake suite maintained and supported by Kitware (kitware.com/cmake).
CMake Error at CMakeLists.txt:2 (project):
Generator
System.Management.Automation.RemoteException
Ninja
System.Management.Automation.RemoteException
does not support platform specification, but platform
System.Management.Automation.RemoteException
x64
System.Management.Automation.RemoteException
was specified.
System.Management.Automation.RemoteException
System.Management.Automation.RemoteException
CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!
llm\generate\generate_windows.go:3: running "powershell": exit status 1
```
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4745/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4758
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4758/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4758/comments
|
https://api.github.com/repos/ollama/ollama/issues/4758/events
|
https://github.com/ollama/ollama/issues/4758
| 2,328,648,910
|
I_kwDOJ0Z1Ps6KzFzO
| 4,758
|
Add this web app to the list of apps in the README
|
{
"login": "greenido",
"id": 61472,
"node_id": "MDQ6VXNlcjYxNDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/61472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/greenido",
"html_url": "https://github.com/greenido",
"followers_url": "https://api.github.com/users/greenido/followers",
"following_url": "https://api.github.com/users/greenido/following{/other_user}",
"gists_url": "https://api.github.com/users/greenido/gists{/gist_id}",
"starred_url": "https://api.github.com/users/greenido/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/greenido/subscriptions",
"organizations_url": "https://api.github.com/users/greenido/orgs",
"repos_url": "https://api.github.com/users/greenido/repos",
"events_url": "https://api.github.com/users/greenido/events{/privacy}",
"received_events_url": "https://api.github.com/users/greenido/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-31T22:09:12
| 2024-09-14T17:16:53
| 2024-09-14T17:16:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I created https://github.com/greenido/multi-LLM-at-once based on your wonderful project,
and it would be cool if you could add it to the list of web apps in the README.
More info on the 'why' is here: https://greenido.wordpress.com/2024/04/08/the-power-of-many-why-you-should-consider-using-multiple-large-language-models/
Keep rocking 🙌🏾
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4758/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7648
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7648/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7648/comments
|
https://api.github.com/repos/ollama/ollama/issues/7648/events
|
https://github.com/ollama/ollama/issues/7648
| 2,654,967,829
|
I_kwDOJ0Z1Ps6eP5gV
| 7,648
|
Performance Impact of Scaling a 70B Model Across Multiple A100 GPUs and Further Speed Optimization
|
{
"login": "gslin1224",
"id": 151395340,
"node_id": "U_kgDOCQYcDA",
"avatar_url": "https://avatars.githubusercontent.com/u/151395340?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gslin1224",
"html_url": "https://github.com/gslin1224",
"followers_url": "https://api.github.com/users/gslin1224/followers",
"following_url": "https://api.github.com/users/gslin1224/following{/other_user}",
"gists_url": "https://api.github.com/users/gslin1224/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gslin1224/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gslin1224/subscriptions",
"organizations_url": "https://api.github.com/users/gslin1224/orgs",
"repos_url": "https://api.github.com/users/gslin1224/repos",
"events_url": "https://api.github.com/users/gslin1224/events{/privacy}",
"received_events_url": "https://api.github.com/users/gslin1224/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-11-13T10:24:23
| 2024-11-20T02:27:00
| 2024-11-17T12:23:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi guys,
I have a question regarding the performance impact and potential optimizations for distributing a large model across multiple GPUs. Specifically:
1. When running a 70B parameter model, how does the speed compare when distributed across two A100 GPUs versus four A100 GPUs?
2. In general, does adding more GPUs consistently result in faster performance for such a large model, or are there potential diminishing returns due to factors like communication overhead?
3. Are there additional techniques or configurations within the Ollama framework (or recommended practices) that can further optimize or increase the speed when using multiple GPUs?
Thank you for your guidance and any insights you can provide to help enhance model performance!
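For what it's worth, one way to get concrete numbers is to restrict which GPUs the server can see and time the same prompt under each layout. A minimal sketch, assuming an NVIDIA setup and the standard `CUDA_VISIBLE_DEVICES` variable (the model tag and prompt are just examples):
```
# GPU visibility is decided by the *server* process, so set it there.
CUDA_VISIBLE_DEVICES=0,1 ollama serve &                    # two A100s visible
ollama run llama3.1:70b --verbose "Explain KV caching."    # --verbose prints eval-rate stats

# Stop the server, then repeat with four GPUs visible and compare the stats.
CUDA_VISIBLE_DEVICES=0,1,2,3 ollama serve &
ollama run llama3.1:70b --verbose "Explain KV caching."
```
Note that by default the model's layers are split across the visible GPUs and executed in sequence, so extra cards mainly add memory capacity rather than token throughput.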
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7648/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2738
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2738/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2738/comments
|
https://api.github.com/repos/ollama/ollama/issues/2738/events
|
https://github.com/ollama/ollama/pull/2738
| 2,152,584,894
|
PR_kwDOJ0Z1Ps5n1S0_
| 2,738
|
Update routes.go
|
{
"login": "ohko",
"id": 4863673,
"node_id": "MDQ6VXNlcjQ4NjM2NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4863673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohko",
"html_url": "https://github.com/ohko",
"followers_url": "https://api.github.com/users/ohko/followers",
"following_url": "https://api.github.com/users/ohko/following{/other_user}",
"gists_url": "https://api.github.com/users/ohko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohko/subscriptions",
"organizations_url": "https://api.github.com/users/ohko/orgs",
"repos_url": "https://api.github.com/users/ohko/repos",
"events_url": "https://api.github.com/users/ohko/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-25T03:47:12
| 2024-04-26T13:41:57
| 2024-04-26T13:41:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2738",
"html_url": "https://github.com/ollama/ollama/pull/2738",
"diff_url": "https://github.com/ollama/ollama/pull/2738.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2738.patch",
"merged_at": null
}
|
Fixes this browser error: `CORS policy: Request header field x-requested-with is not allowed by Access-Control-Allow-Headers in preflight response.`
The fix adds `X-Requested-With` to `Access-Control-Allow-Headers`.
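For reference, the failing preflight can be reproduced against a local server with a hand-rolled OPTIONS request (the origin and endpoint below are arbitrary examples):
```
curl -i -X OPTIONS http://localhost:11434/api/generate \
  -H "Origin: http://example.com" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: x-requested-with"
```
The browser only proceeds when `x-requested-with` appears in the `Access-Control-Allow-Headers` response header, which is exactly what this change adds.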
|
{
"login": "ohko",
"id": 4863673,
"node_id": "MDQ6VXNlcjQ4NjM2NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4863673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohko",
"html_url": "https://github.com/ohko",
"followers_url": "https://api.github.com/users/ohko/followers",
"following_url": "https://api.github.com/users/ohko/following{/other_user}",
"gists_url": "https://api.github.com/users/ohko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohko/subscriptions",
"organizations_url": "https://api.github.com/users/ohko/orgs",
"repos_url": "https://api.github.com/users/ohko/repos",
"events_url": "https://api.github.com/users/ohko/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2738/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5766
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5766/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5766/comments
|
https://api.github.com/repos/ollama/ollama/issues/5766/events
|
https://github.com/ollama/ollama/issues/5766
| 2,415,886,028
|
I_kwDOJ0Z1Ps6P_37M
| 5,766
|
specify a single GPU (id=1) using Docker, Error!
|
{
"login": "catsled",
"id": 18079717,
"node_id": "MDQ6VXNlcjE4MDc5NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/18079717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/catsled",
"html_url": "https://github.com/catsled",
"followers_url": "https://api.github.com/users/catsled/followers",
"following_url": "https://api.github.com/users/catsled/following{/other_user}",
"gists_url": "https://api.github.com/users/catsled/gists{/gist_id}",
"starred_url": "https://api.github.com/users/catsled/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/catsled/subscriptions",
"organizations_url": "https://api.github.com/users/catsled/orgs",
"repos_url": "https://api.github.com/users/catsled/repos",
"events_url": "https://api.github.com/users/catsled/events{/privacy}",
"received_events_url": "https://api.github.com/users/catsled/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-07-18T09:46:15
| 2024-07-25T06:06:22
| 2024-07-25T06:06:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have 8 GPUs and I want to expose only the GPU with id=1 to a Docker container:
`docker run -it ... --device=/dev/dri/card1 --device=/dev/dri/renderD129 ....`
An error occurred!

When I set `HIP_VISIBLE_DEVICES=1`:
<img width="892" alt="image" src="https://github.com/user-attachments/assets/269de8c9-7d14-4b9b-a57a-75bc4f9d61f5">

### OS
Linux
### GPU
AMD
### CPU
Intel
### Ollama version
0.21
|
{
"login": "catsled",
"id": 18079717,
"node_id": "MDQ6VXNlcjE4MDc5NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/18079717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/catsled",
"html_url": "https://github.com/catsled",
"followers_url": "https://api.github.com/users/catsled/followers",
"following_url": "https://api.github.com/users/catsled/following{/other_user}",
"gists_url": "https://api.github.com/users/catsled/gists{/gist_id}",
"starred_url": "https://api.github.com/users/catsled/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/catsled/subscriptions",
"organizations_url": "https://api.github.com/users/catsled/orgs",
"repos_url": "https://api.github.com/users/catsled/repos",
"events_url": "https://api.github.com/users/catsled/events{/privacy}",
"received_events_url": "https://api.github.com/users/catsled/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5766/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7651
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7651/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7651/comments
|
https://api.github.com/repos/ollama/ollama/issues/7651/events
|
https://github.com/ollama/ollama/issues/7651
| 2,655,369,362
|
I_kwDOJ0Z1Ps6eRbiS
| 7,651
|
Unable to download model llama3.2-vision:11b; asked to update Ollama.
|
{
"login": "Luckyjjjjjjj",
"id": 145416388,
"node_id": "U_kgDOCKrgxA",
"avatar_url": "https://avatars.githubusercontent.com/u/145416388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Luckyjjjjjjj",
"html_url": "https://github.com/Luckyjjjjjjj",
"followers_url": "https://api.github.com/users/Luckyjjjjjjj/followers",
"following_url": "https://api.github.com/users/Luckyjjjjjjj/following{/other_user}",
"gists_url": "https://api.github.com/users/Luckyjjjjjjj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Luckyjjjjjjj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luckyjjjjjjj/subscriptions",
"organizations_url": "https://api.github.com/users/Luckyjjjjjjj/orgs",
"repos_url": "https://api.github.com/users/Luckyjjjjjjj/repos",
"events_url": "https://api.github.com/users/Luckyjjjjjjj/events{/privacy}",
"received_events_url": "https://api.github.com/users/Luckyjjjjjjj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 13
| 2024-11-13T12:49:05
| 2024-11-18T18:24:27
| 2024-11-18T18:24:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I downloaded the latest version from https://ollama.com/download and installed it, but I cannot download the model llama3.2-vision:11b: it tells me to update Ollama, even though I already have the latest version. How do I solve this?
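For reference, the two commands involved; `ollama -v` also warns when the CLI and a still-running server are on different versions, which is one way a "please update" message can appear even right after installing the latest build (a guess, not confirmed here):
```
ollama -v                          # prints the version; warns on client/server mismatch
ollama pull llama3.2-vision:11b    # the vision models require 0.4.0 or newer
```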
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.1
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7651/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5630
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5630/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5630/comments
|
https://api.github.com/repos/ollama/ollama/issues/5630/events
|
https://github.com/ollama/ollama/pull/5630
| 2,403,272,029
|
PR_kwDOJ0Z1Ps51Gdku
| 5,630
|
Update README.md to Brazilian Portuguese and optimize the project's image files
|
{
"login": "ItaloGustavoS",
"id": 42496107,
"node_id": "MDQ6VXNlcjQyNDk2MTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/42496107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ItaloGustavoS",
"html_url": "https://github.com/ItaloGustavoS",
"followers_url": "https://api.github.com/users/ItaloGustavoS/followers",
"following_url": "https://api.github.com/users/ItaloGustavoS/following{/other_user}",
"gists_url": "https://api.github.com/users/ItaloGustavoS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ItaloGustavoS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ItaloGustavoS/subscriptions",
"organizations_url": "https://api.github.com/users/ItaloGustavoS/orgs",
"repos_url": "https://api.github.com/users/ItaloGustavoS/repos",
"events_url": "https://api.github.com/users/ItaloGustavoS/events{/privacy}",
"received_events_url": "https://api.github.com/users/ItaloGustavoS/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-11T14:01:31
| 2024-07-30T23:00:35
| 2024-07-30T23:00:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5630",
"html_url": "https://github.com/ollama/ollama/pull/5630",
"diff_url": "https://github.com/ollama/ollama/pull/5630.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5630.patch",
"merged_at": null
}
|
Updated the branch with the latest updates.
Optimize images:
* Total -- 463.96kb -> 380.99kb (17.88%)
* /examples/modelfile-mario/logo.png -- 445.60kb -> 362.69kb (18.61%)
* /macapp/assets/iconTemplate@2x.png -- 0.87kb -> 0.84kb (3.82%)
* /macapp/assets/iconUpdateTemplate@2x.png -- 0.82kb -> 0.81kb (1.42%)
* /macapp/src/ollama.svg -- 16.66kb -> 16.65kb (0.05%)
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5630/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7260
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7260/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7260/comments
|
https://api.github.com/repos/ollama/ollama/issues/7260/events
|
https://github.com/ollama/ollama/issues/7260
| 2,598,220,850
|
I_kwDOJ0Z1Ps6a3bQy
| 7,260
|
Migrate off centos 7 for intermediate build layers in container image builds
|
{
"login": "cazlo",
"id": 3895350,
"node_id": "MDQ6VXNlcjM4OTUzNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3895350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cazlo",
"html_url": "https://github.com/cazlo",
"followers_url": "https://api.github.com/users/cazlo/followers",
"following_url": "https://api.github.com/users/cazlo/following{/other_user}",
"gists_url": "https://api.github.com/users/cazlo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cazlo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cazlo/subscriptions",
"organizations_url": "https://api.github.com/users/cazlo/orgs",
"repos_url": "https://api.github.com/users/cazlo/repos",
"events_url": "https://api.github.com/users/cazlo/events{/privacy}",
"received_events_url": "https://api.github.com/users/cazlo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7700262114,
"node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g",
"url": "https://api.github.com/repos/ollama/ollama/labels/build",
"name": "build",
"color": "006b75",
"default": false,
"description": "Issues relating to building ollama from source"
}
] |
open
| false
| null |
[] | null | 2
| 2024-10-18T19:17:21
| 2024-11-04T19:19:08
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# What
[CentOS is dead](https://endoflife.date/centos); long live [CentOS Stream 9](https://endoflife.date/centos-stream).
Ollama should probably not be using CentOS 7 now that it is unsupported and past EOL.
# Why
AMD and Nvidia are no longer publishing updates for their CentOS 7 flavors of dependencies.
See also https://rocm.docs.amd.com/en/docs-6.2.0/about/release-notes.html
> ROCm 6.2.0 marks the end of support (EoS) for:
> ...
> CentOS 7.9
See also https://docs.nvidia.com/cuda/cuda-installation-guide-linux/, which no longer lists CentOS anywhere.
See also: the last CentOS 7 image Nvidia published is ~6 months old: https://hub.docker.com/r/nvidia/cuda/tags?name=centos
# More info
Currently there are several intermediate build layers in the container image build that use CentOS 7:
- [cuda-11-build-amd64](https://github.com/ollama/ollama/blob/bf4018b9ecd56a5deff0c22ca2fba242a8f0101b/Dockerfile#L15)
- [cuda-12-build-amd64](https://github.com/ollama/ollama/blob/bf4018b9ecd56a5deff0c22ca2fba242a8f0101b/Dockerfile#L32)
- [rocm-build-amd64](https://github.com/ollama/ollama/blob/bf4018b9ecd56a5deff0c22ca2fba242a8f0101b/Dockerfile#L85)
- [cpu-builder-amd64](https://github.com/ollama/ollama/blob/bf4018b9ecd56a5deff0c22ca2fba242a8f0101b/Dockerfile#L101)
- this one also has transitive layers that depend on it: `container-build-amd64`, `cpu-build-amd64`, `cpu_avx-build-amd64`, and `cpu_avx2-build-amd64`
Looking at the various Nvidia and AMD docs, it seems like both support the latest EL9 release, so I would try to migrate to EL9 (Rocky Linux 9) to get the latest compatible versions of core dependencies like gcc, and also to avoid needing another migration for a long time (EL9's EOL is still several years away).
As a quick POC, I was able to migrate the ROCm build to Rocky 8 with very little effort. This build benchmarked the same as the current HEAD of ollama, though I did not run it through the full suite of unit tests.
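A minimal sketch of how such a migration could be spot-checked (the stage name is one of those listed above; the replacement base image and tag are assumptions):
```
# Audit which build stages still pull a CentOS 7 base image
grep -n -i "centos" Dockerfile

# After pointing a stage's FROM line at an EL8/EL9 base, rebuild just that stage
docker build --target rocm-build-amd64 -t ollama-rocm-el8-poc .
```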
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7260/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5338
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5338/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5338/comments
|
https://api.github.com/repos/ollama/ollama/issues/5338/events
|
https://github.com/ollama/ollama/issues/5338
| 2,378,857,221
|
I_kwDOJ0Z1Ps6NynsF
| 5,338
|
The main shell script runner for the ollama downloader doesn't check a hash
|
{
"login": "Ahmed",
"id": 537483,
"node_id": "MDQ6VXNlcjUzNzQ4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/537483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ahmed",
"html_url": "https://github.com/Ahmed",
"followers_url": "https://api.github.com/users/Ahmed/followers",
"following_url": "https://api.github.com/users/Ahmed/following{/other_user}",
"gists_url": "https://api.github.com/users/Ahmed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ahmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ahmed/subscriptions",
"organizations_url": "https://api.github.com/users/Ahmed/orgs",
"repos_url": "https://api.github.com/users/Ahmed/repos",
"events_url": "https://api.github.com/users/Ahmed/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ahmed/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-27T18:37:18
| 2024-06-27T18:37:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi team:
You have this ollama installer on the main website.
```
curl -fsSL https://ollama.com/install.sh | sh
```
If someone hacked into the website and changed the script, the end user would never know. You should bake the checksum into the downloader to make sure the installer has not been modified.
Something like this would work:
```
tmpfile=$(mktemp) \
  && curl -fsSL -o "$tmpfile" https://ollama.com/install.sh \
  && echo "2ecc4a9a5afd2f43ac474b59d3be5d81189cc79ca87bd29ed91e0e089a18c765  $tmpfile" | sha256sum -c - \
  && sh "$tmpfile" \
  && rm -f "$tmpfile"
```
Also, you would need to publish the installer's checksum on GitHub too. If you think this issue is not a big deal, take a look at what happened to Codecov: https://www.howtogeek.com/devops/codecov-hacked-what-to-do-now-if-you-use-codecov/
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5338/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5099
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5099/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5099/comments
|
https://api.github.com/repos/ollama/ollama/issues/5099/events
|
https://github.com/ollama/ollama/issues/5099
| 2,357,452,145
|
I_kwDOJ0Z1Ps6Mg91x
| 5,099
|
Add `upgrade` command to upgrade the version
|
{
"login": "chyok",
"id": 32629225,
"node_id": "MDQ6VXNlcjMyNjI5MjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/32629225?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chyok",
"html_url": "https://github.com/chyok",
"followers_url": "https://api.github.com/users/chyok/followers",
"following_url": "https://api.github.com/users/chyok/following{/other_user}",
"gists_url": "https://api.github.com/users/chyok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chyok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chyok/subscriptions",
"organizations_url": "https://api.github.com/users/chyok/orgs",
"repos_url": "https://api.github.com/users/chyok/repos",
"events_url": "https://api.github.com/users/chyok/events{/privacy}",
"received_events_url": "https://api.github.com/users/chyok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-17T14:08:54
| 2024-06-18T11:30:50
| 2024-06-18T11:30:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
This is an excellent project!
Is there any plan to add an upgrade command-line feature?
So that we can run `ollama --upgrade` (or something similar) to update to the latest version instead of manually downloading and installing it again.
Thanks!
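For what it's worth, on Linux re-running the install script already upgrades in place, so a hypothetical `ollama upgrade` could be approximated today with:
```
curl -fsSL https://ollama.com/install.sh | sh
```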
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5099/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5099/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4269
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4269/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4269/comments
|
https://api.github.com/repos/ollama/ollama/issues/4269/events
|
https://github.com/ollama/ollama/pull/4269
| 2,286,666,718
|
PR_kwDOJ0Z1Ps5u7-33
| 4,269
|
update pull handler to use model.Name
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-05-09T00:06:40
| 2024-10-01T17:47:46
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4269",
"html_url": "https://github.com/ollama/ollama/pull/4269",
"diff_url": "https://github.com/ollama/ollama/pull/4269.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4269.patch",
"merged_at": null
}
|
Follow-up to #3737.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4269/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3479
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3479/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3479/comments
|
https://api.github.com/repos/ollama/ollama/issues/3479/events
|
https://github.com/ollama/ollama/pull/3479
| 2,224,126,418
|
PR_kwDOJ0Z1Ps5rogKu
| 3,479
|
Fix CI release glitches
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-03T23:42:27
| 2024-04-04T01:42:30
| 2024-04-04T01:42:28
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3479",
"html_url": "https://github.com/ollama/ollama/pull/3479",
"diff_url": "https://github.com/ollama/ollama/pull/3479.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3479.patch",
"merged_at": "2024-04-04T01:42:28"
}
|
- The subprocess change moved the build directory.
- arm64 builds weren't setting cross-compilation flags when building on x86.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3479/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1588
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1588/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1588/comments
|
https://api.github.com/repos/ollama/ollama/issues/1588/events
|
https://github.com/ollama/ollama/issues/1588
| 2,047,602,326
|
I_kwDOJ0Z1Ps56C-6W
| 1,588
|
[mistral][docker][linuxWSL] Infinite tags
|
{
"login": "wildcat7534",
"id": 38839946,
"node_id": "MDQ6VXNlcjM4ODM5OTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/38839946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wildcat7534",
"html_url": "https://github.com/wildcat7534",
"followers_url": "https://api.github.com/users/wildcat7534/followers",
"following_url": "https://api.github.com/users/wildcat7534/following{/other_user}",
"gists_url": "https://api.github.com/users/wildcat7534/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wildcat7534/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wildcat7534/subscriptions",
"organizations_url": "https://api.github.com/users/wildcat7534/orgs",
"repos_url": "https://api.github.com/users/wildcat7534/repos",
"events_url": "https://api.github.com/users/wildcat7534/events{/privacy}",
"received_events_url": "https://api.github.com/users/wildcat7534/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-12-18T22:59:28
| 2024-03-11T18:23:31
| 2024-03-11T18:23:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi!
I just said "Hello, how are you?" in French to Mistral and...
I got an infinite stream of tags in the response:
sudo docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
pulling manifest
>>> /set verbose
>>> Bonjour ! ça va ?
Hello! I'm just a text-based AI, so I don't have the ability to feel or have a physical state. But I'm here and ready to help answer any questions you might have in English or
French! How can I assist you today? 😊☕️ #AI #Chatbot #LanguageAssistant #French #Bonjour #Hello #HelpfulAI #Assistant #QuestionAnswer #Multilingual #SupportiveAI #FriendlyAI
#VirtualAssistant #Technology #Innovation #IntelligenceArtificielle #AssistanceEnLigne #AideRésolvoieProblemes #AssistanteVirtuale #ApprendreFrançais #ApprendreAnglais
[... the hashtags continue like this for hundreds of lines, never terminating, repeating the same phrases with ever-longer "Plus" suffixes ...]
#IntelligentesRéalisationsPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus #BrainyRealizationsPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus
#BrainyGrowthHacksPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus #CleverGrowthStrategiesPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus
#SmartScalingPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus #IntelligenteCroissancesPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus
#BrainyAvenancePlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus #CleverAméliorationsPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus
#SmartAméliorationsPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus #IntelligentesMéliorationsPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus
#BrainyNouveautésPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus #CleverInnovationsPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus
#SmartNouveautésPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus #IntelligentesConceptsPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus
#BrainyConceptsPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus #CleverDesignsPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus
#SmartDesignsPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlusPlus #IntelligentesRéalisationsPlusPlusPlusPlusPlus^C
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1588/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/691
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/691/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/691/comments
|
https://api.github.com/repos/ollama/ollama/issues/691/events
|
https://github.com/ollama/ollama/issues/691
| 1,924,698,003
|
I_kwDOJ0Z1Ps5yuI-T
| 691
|
Expose the API as ProtocolBuffer
|
{
"login": "Solido",
"id": 1295961,
"node_id": "MDQ6VXNlcjEyOTU5NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1295961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Solido",
"html_url": "https://github.com/Solido",
"followers_url": "https://api.github.com/users/Solido/followers",
"following_url": "https://api.github.com/users/Solido/following{/other_user}",
"gists_url": "https://api.github.com/users/Solido/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Solido/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Solido/subscriptions",
"organizations_url": "https://api.github.com/users/Solido/orgs",
"repos_url": "https://api.github.com/users/Solido/repos",
"events_url": "https://api.github.com/users/Solido/events{/privacy}",
"received_events_url": "https://api.github.com/users/Solido/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-03T18:27:53
| 2023-10-04T01:24:51
| 2023-10-04T01:23:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Streaming, but also model configuration, would benefit from being exposed this way, with auto-generated API clients per language.
Thank you for the Ollama initiative.
Cheers.
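For illustration, here is a minimal sketch of what a generated Go client for such a gRPC surface could look like. The `OllamaService`, `GenerateRequest`, and server-streaming `Generate` RPC are hypothetical, not an existing Ollama API:
```
// Hypothetical usage of a generated client for a server-streaming RPC:
//   service OllamaService { rpc Generate(GenerateRequest) returns (stream GenerateResponse); }
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/ollama/proto" // hypothetical generated package
)

func main() {
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := pb.NewOllamaServiceClient(conn)
	stream, err := client.Generate(context.Background(),
		&pb.GenerateRequest{Model: "llama2", Prompt: "Why is the sky blue?"})
	if err != nil {
		log.Fatal(err)
	}
	for {
		resp, err := stream.Recv() // each message is one streamed chunk
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(resp.GetResponse())
	}
}
```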
|
{
"login": "Solido",
"id": 1295961,
"node_id": "MDQ6VXNlcjEyOTU5NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1295961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Solido",
"html_url": "https://github.com/Solido",
"followers_url": "https://api.github.com/users/Solido/followers",
"following_url": "https://api.github.com/users/Solido/following{/other_user}",
"gists_url": "https://api.github.com/users/Solido/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Solido/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Solido/subscriptions",
"organizations_url": "https://api.github.com/users/Solido/orgs",
"repos_url": "https://api.github.com/users/Solido/repos",
"events_url": "https://api.github.com/users/Solido/events{/privacy}",
"received_events_url": "https://api.github.com/users/Solido/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/691/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/7144
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7144/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7144/comments
|
https://api.github.com/repos/ollama/ollama/issues/7144/events
|
https://github.com/ollama/ollama/pull/7144
| 2,574,368,265
|
PR_kwDOJ0Z1Ps5-AYcG
| 7,144
|
Better handle small models in scheduler
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-10-08T22:52:51
| 2025-01-23T20:07:33
| 2025-01-19T19:26:01
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7144",
"html_url": "https://github.com/ollama/ollama/pull/7144",
"diff_url": "https://github.com/ollama/ollama/pull/7144.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7144.patch",
"merged_at": null
}
|
Our memory prediction for small models tends to over-estimate the actual VRAM usage, which causes the scheduler to incorrectly wait too long for recovery.
Fixes #7130
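To illustrate the failure mode only (the names below are assumptions, not the actual scheduler code): when the prediction is inflated, a fits-check never turns true, so the wait loop burns its whole timeout even though the GPU has had room all along.
```
// Sketch of the scheduling decision, assuming hypothetical estimate and
// free-VRAM helpers; an overestimated footprint makes fits() stay false
// long after the GPU actually has room, so waitForRoom waits needlessly.
package sched

import "time"

type gpu interface{ FreeVRAM() uint64 }

// fits reports whether a model with the given predicted VRAM footprint
// can be placed on g right now.
func fits(g gpu, predictedBytes uint64) bool {
	return g.FreeVRAM() >= predictedBytes
}

// waitForRoom polls until the prediction fits or the deadline passes.
// If predictedBytes overestimates a small model's real usage, this loop
// can exhaust the timeout even though the load would have succeeded.
func waitForRoom(g gpu, predictedBytes uint64, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fits(g, predictedBytes) {
			return true
		}
		time.Sleep(100 * time.Millisecond)
	}
	return false
}
```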
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7144/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5204
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5204/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5204/comments
|
https://api.github.com/repos/ollama/ollama/issues/5204/events
|
https://github.com/ollama/ollama/issues/5204
| 2,367,058,873
|
I_kwDOJ0Z1Ps6NFnO5
| 5,204
|
Can't even attempt to load Deepseek-Coder-v2:236B due to arbitrary timeout
|
{
"login": "Nantris",
"id": 6835891,
"node_id": "MDQ6VXNlcjY4MzU4OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6835891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nantris",
"html_url": "https://github.com/Nantris",
"followers_url": "https://api.github.com/users/Nantris/followers",
"following_url": "https://api.github.com/users/Nantris/following{/other_user}",
"gists_url": "https://api.github.com/users/Nantris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nantris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nantris/subscriptions",
"organizations_url": "https://api.github.com/users/Nantris/orgs",
"repos_url": "https://api.github.com/users/Nantris/repos",
"events_url": "https://api.github.com/users/Nantris/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nantris/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-06-21T18:31:57
| 2024-07-16T01:51:21
| 2024-06-21T22:21:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This issue thread describes the overarching problem, and this specific comment a potential workaround: https://github.com/ollama/ollama/issues/630#issuecomment-2182371780
My understanding is that the 236B model should be feasible to load into less RAM than the model actually takes up, since not all parameters need to be loaded simultaneously, but I can't verify whether that's true because Ollama gives up after an arbitrary amount of time.
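For illustration only (the naming and structure below are assumptions, not Ollama's code): a fixed deadline gives up on large models that are still loading, whereas resetting the deadline whenever loading makes progress tolerates arbitrarily large models while still catching a genuinely hung load.
```
// Sketch of a progress-aware load timeout, with a hypothetical helper:
// loadedBytes() reports how much of the model has been read so far.
package load

import (
	"errors"
	"time"
)

func waitForLoad(loadedBytes func() uint64, total uint64, stall time.Duration) error {
	last, lastChange := uint64(0), time.Now()
	for {
		n := loadedBytes()
		if n >= total {
			return nil // fully loaded
		}
		if n > last {
			last, lastChange = n, time.Now() // progress: reset the clock
		} else if time.Since(lastChange) > stall {
			return errors.New("model load stalled") // no progress for too long
		}
		time.Sleep(time.Second)
	}
}
```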
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.44
|
{
"login": "Nantris",
"id": 6835891,
"node_id": "MDQ6VXNlcjY4MzU4OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6835891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nantris",
"html_url": "https://github.com/Nantris",
"followers_url": "https://api.github.com/users/Nantris/followers",
"following_url": "https://api.github.com/users/Nantris/following{/other_user}",
"gists_url": "https://api.github.com/users/Nantris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nantris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nantris/subscriptions",
"organizations_url": "https://api.github.com/users/Nantris/orgs",
"repos_url": "https://api.github.com/users/Nantris/repos",
"events_url": "https://api.github.com/users/Nantris/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nantris/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5204/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3823
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3823/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3823/comments
|
https://api.github.com/repos/ollama/ollama/issues/3823/events
|
https://github.com/ollama/ollama/issues/3823
| 2,256,616,467
|
I_kwDOJ0Z1Ps6GgTwT
| 3,823
|
Can we add support for LLaVA-Llama-3-8B?
|
{
"login": "octavioccl",
"id": 6987693,
"node_id": "MDQ6VXNlcjY5ODc2OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6987693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/octavioccl",
"html_url": "https://github.com/octavioccl",
"followers_url": "https://api.github.com/users/octavioccl/followers",
"following_url": "https://api.github.com/users/octavioccl/following{/other_user}",
"gists_url": "https://api.github.com/users/octavioccl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/octavioccl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/octavioccl/subscriptions",
"organizations_url": "https://api.github.com/users/octavioccl/orgs",
"repos_url": "https://api.github.com/users/octavioccl/repos",
"events_url": "https://api.github.com/users/octavioccl/events{/privacy}",
"received_events_url": "https://api.github.com/users/octavioccl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 13
| 2024-04-22T13:59:16
| 2024-05-09T14:55:47
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I just saw on Reddit that there is a LLaVA model based on Llama 3. Can it be added to the library? Thanks.
Source: https://www.reddit.com/r/LocalLLaMA/comments/1ca8uxo/llavallama38b_is_released/
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3823/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3823/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3547
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3547/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3547/comments
|
https://api.github.com/repos/ollama/ollama/issues/3547/events
|
https://github.com/ollama/ollama/issues/3547
| 2,232,578,120
|
I_kwDOJ0Z1Ps6FEnBI
| 3,547
|
Support for all NAVI GPUs
|
{
"login": "swapduzoo",
"id": 87898144,
"node_id": "MDQ6VXNlcjg3ODk4MTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/87898144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swapduzoo",
"html_url": "https://github.com/swapduzoo",
"followers_url": "https://api.github.com/users/swapduzoo/followers",
"following_url": "https://api.github.com/users/swapduzoo/following{/other_user}",
"gists_url": "https://api.github.com/users/swapduzoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swapduzoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swapduzoo/subscriptions",
"organizations_url": "https://api.github.com/users/swapduzoo/orgs",
"repos_url": "https://api.github.com/users/swapduzoo/repos",
"events_url": "https://api.github.com/users/swapduzoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/swapduzoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-04-09T04:28:01
| 2024-07-03T22:37:53
| 2024-07-03T22:37:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When will support for all NAVI GPUs arrive, and in particular for the RX 6700?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3547/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3547/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2019
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2019/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2019/comments
|
https://api.github.com/repos/ollama/ollama/issues/2019/events
|
https://github.com/ollama/ollama/issues/2019
| 2,084,719,153
|
I_kwDOJ0Z1Ps58Qkox
| 2,019
|
Model Path Arch - AUR
|
{
"login": "DerRehberg",
"id": 20538874,
"node_id": "MDQ6VXNlcjIwNTM4ODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/20538874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DerRehberg",
"html_url": "https://github.com/DerRehberg",
"followers_url": "https://api.github.com/users/DerRehberg/followers",
"following_url": "https://api.github.com/users/DerRehberg/following{/other_user}",
"gists_url": "https://api.github.com/users/DerRehberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DerRehberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DerRehberg/subscriptions",
"organizations_url": "https://api.github.com/users/DerRehberg/orgs",
"repos_url": "https://api.github.com/users/DerRehberg/repos",
"events_url": "https://api.github.com/users/DerRehberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/DerRehberg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 3
| 2024-01-16T19:43:38
| 2024-03-11T18:43:07
| 2024-03-11T18:43:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I installed Ollama from the AUR, but the model path you specified doesn't exist. Does anyone know where it is? I see this as a big problem for running custom models.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2019/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2306
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2306/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2306/comments
|
https://api.github.com/repos/ollama/ollama/issues/2306/events
|
https://github.com/ollama/ollama/issues/2306
| 2,111,993,477
|
I_kwDOJ0Z1Ps594naF
| 2,306
|
Show file sizes on the models page on the ollama website
|
{
"login": "mika76",
"id": 229311,
"node_id": "MDQ6VXNlcjIyOTMxMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/229311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mika76",
"html_url": "https://github.com/mika76",
"followers_url": "https://api.github.com/users/mika76/followers",
"following_url": "https://api.github.com/users/mika76/following{/other_user}",
"gists_url": "https://api.github.com/users/mika76/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mika76/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mika76/subscriptions",
"organizations_url": "https://api.github.com/users/mika76/orgs",
"repos_url": "https://api.github.com/users/mika76/repos",
"events_url": "https://api.github.com/users/mika76/events{/privacy}",
"received_events_url": "https://api.github.com/users/mika76/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-01T09:24:02
| 2024-02-01T20:18:33
| 2024-02-01T19:46:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I would like to try different models, but the site does not really show how much space each one will take up, and on my desktop machine disk space is at a premium. Please show the size in the search list as well as on the model detail page.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2306/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3086
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3086/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3086/comments
|
https://api.github.com/repos/ollama/ollama/issues/3086/events
|
https://github.com/ollama/ollama/pull/3086
| 2,182,694,322
|
PR_kwDOJ0Z1Ps5pb505
| 3,086
|
Import server.cpp to retain llava support
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-12T21:21:05
| 2024-03-15T23:10:38
| 2024-03-15T23:10:35
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3086",
"html_url": "https://github.com/ollama/ollama/pull/3086",
"diff_url": "https://github.com/ollama/ollama/pull/3086.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3086.patch",
"merged_at": "2024-03-15T23:10:35"
}
|
Recent refactoring upstream has temporarily(?) removed llava support from the server.cpp code, which we rely on. This pulls the server just before that change into our repo so we can keep current with the base llama.cpp code updates until llava support is added back.
Verified on Mac, Linux and Windows.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3086/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6969
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6969/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6969/comments
|
https://api.github.com/repos/ollama/ollama/issues/6969/events
|
https://github.com/ollama/ollama/pull/6969
| 2,549,158,578
|
PR_kwDOJ0Z1Ps58uF_p
| 6,969
|
Bump ROCm on linux to 6.2
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2024-09-25T23:23:12
| 2025-01-24T21:25:33
| null |
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6969",
"html_url": "https://github.com/ollama/ollama/pull/6969",
"diff_url": "https://github.com/ollama/ollama/pull/6969.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6969.patch",
"merged_at": null
}
|
Fixes #6773
According to the compat matrix, no GPUs are dropped compared to 6.1.
No regressions detected across gfx1034, gfx1035, gfx1030, gfx900, gfx906, gfx1100, or gfx1103.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6969/reactions",
"total_count": 10,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6969/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8540
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8540/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8540/comments
|
https://api.github.com/repos/ollama/ollama/issues/8540/events
|
https://github.com/ollama/ollama/pull/8540
| 2,805,211,246
|
PR_kwDOJ0Z1Ps6IrB2D
| 8,540
|
Update README.md added deepseek-r1
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2025-01-22T19:45:52
| 2025-01-30T05:58:08
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8540",
"html_url": "https://github.com/ollama/ollama/pull/8540",
"diff_url": "https://github.com/ollama/ollama/pull/8540.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8540.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8540/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8540/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6666
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6666/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6666/comments
|
https://api.github.com/repos/ollama/ollama/issues/6666/events
|
https://github.com/ollama/ollama/pull/6666
| 2,509,120,961
|
PR_kwDOJ0Z1Ps56mObP
| 6,666
|
Improve logging on GPU too small
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-06T00:22:17
| 2024-09-06T15:29:40
| 2024-09-06T15:29:37
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6666",
"html_url": "https://github.com/ollama/ollama/pull/6666",
"diff_url": "https://github.com/ollama/ollama/pull/6666.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6666.patch",
"merged_at": "2024-09-06T15:29:37"
}
|
When we determine a GPU is too small for any layers, it's not always clear why. This will help troubleshoot those scenarios.
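A minimal sketch of the kind of diagnostic this adds (the function and field names are assumptions for illustration, not the actual change): when zero layers fit, log the numbers behind the decision rather than just the outcome.
```
// Illustrative only: log why no layers fit, using log/slog structured
// logging; the field names here are assumptions.
package memory

import "log/slog"

func logNoLayersFit(gpuID string, freeVRAM, overhead, graph, smallestLayer uint64) {
	slog.Info("insufficient VRAM to load any model layers",
		"gpu", gpuID,
		"available", freeVRAM,
		"overhead", overhead, // runtime / driver reservation
		"compute_graph", graph, // compute graph buffer requirement
		"smallest_layer", smallestLayer,
	)
}
```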
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6666/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3620
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3620/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3620/comments
|
https://api.github.com/repos/ollama/ollama/issues/3620/events
|
https://github.com/ollama/ollama/issues/3620
| 2,241,126,705
|
I_kwDOJ0Z1Ps6FlOEx
| 3,620
|
Mixtral 8x22b - v0.1
|
{
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users/igorschlum/followers",
"following_url": "https://api.github.com/users/igorschlum/following{/other_user}",
"gists_url": "https://api.github.com/users/igorschlum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/igorschlum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/igorschlum/subscriptions",
"organizations_url": "https://api.github.com/users/igorschlum/orgs",
"repos_url": "https://api.github.com/users/igorschlum/repos",
"events_url": "https://api.github.com/users/igorschlum/events{/privacy}",
"received_events_url": "https://api.github.com/users/igorschlum/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-12T23:51:44
| 2024-04-16T23:32:11
| 2024-04-16T23:32:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
BTW, Mistral released a new model: https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1
The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3620/reactions",
"total_count": 18,
"+1": 17,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3620/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/73
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/73/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/73/comments
|
https://api.github.com/repos/ollama/ollama/issues/73/events
|
https://github.com/ollama/ollama/pull/73
| 1,801,359,638
|
PR_kwDOJ0Z1Ps5VVOZZ
| 73
|
fix eof error in generate
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-12T16:36:33
| 2023-07-12T18:09:27
| 2023-07-12T18:09:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/73",
"html_url": "https://github.com/ollama/ollama/pull/73",
"diff_url": "https://github.com/ollama/ollama/pull/73.diff",
"patch_url": "https://github.com/ollama/ollama/pull/73.patch",
"merged_at": "2023-07-12T18:09:23"
}
|
maybe related to #72
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/73/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/73/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7933
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7933/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7933/comments
|
https://api.github.com/repos/ollama/ollama/issues/7933/events
|
https://github.com/ollama/ollama/pull/7933
| 2,718,511,426
|
PR_kwDOJ0Z1Ps6EFNxR
| 7,933
|
Added logging for generated responses
|
{
"login": "NicholasPaulick",
"id": 76536219,
"node_id": "MDQ6VXNlcjc2NTM2MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/76536219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NicholasPaulick",
"html_url": "https://github.com/NicholasPaulick",
"followers_url": "https://api.github.com/users/NicholasPaulick/followers",
"following_url": "https://api.github.com/users/NicholasPaulick/following{/other_user}",
"gists_url": "https://api.github.com/users/NicholasPaulick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NicholasPaulick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NicholasPaulick/subscriptions",
"organizations_url": "https://api.github.com/users/NicholasPaulick/orgs",
"repos_url": "https://api.github.com/users/NicholasPaulick/repos",
"events_url": "https://api.github.com/users/NicholasPaulick/events{/privacy}",
"received_events_url": "https://api.github.com/users/NicholasPaulick/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-12-04T18:31:15
| 2024-12-04T18:31:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7933",
"html_url": "https://github.com/ollama/ollama/pull/7933",
"diff_url": "https://github.com/ollama/ollama/pull/7933.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7933.patch",
"merged_at": null
}
|
https://github.com/ollama/ollama/issues/4669
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7933/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1905
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1905/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1905/comments
|
https://api.github.com/repos/ollama/ollama/issues/1905/events
|
https://github.com/ollama/ollama/pull/1905
| 2,074,879,748
|
PR_kwDOJ0Z1Ps5jtkMG
| 1,905
|
docs: add `ollero.nvim` to community applications
|
{
"login": "marco-souza",
"id": 4452113,
"node_id": "MDQ6VXNlcjQ0NTIxMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4452113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marco-souza",
"html_url": "https://github.com/marco-souza",
"followers_url": "https://api.github.com/users/marco-souza/followers",
"following_url": "https://api.github.com/users/marco-souza/following{/other_user}",
"gists_url": "https://api.github.com/users/marco-souza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marco-souza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marco-souza/subscriptions",
"organizations_url": "https://api.github.com/users/marco-souza/orgs",
"repos_url": "https://api.github.com/users/marco-souza/repos",
"events_url": "https://api.github.com/users/marco-souza/events{/privacy}",
"received_events_url": "https://api.github.com/users/marco-souza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-10T17:49:04
| 2024-03-25T19:06:09
| 2024-03-25T19:06:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1905",
"html_url": "https://github.com/ollama/ollama/pull/1905",
"diff_url": "https://github.com/ollama/ollama/pull/1905.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1905.patch",
"merged_at": "2024-03-25T19:06:08"
}
|
- adding [ollero.nvim](https://github.com/marco-souza/ollero.nvim) to the terminal applications section
> Ollero (ollero.nvim) is a Neovim Plugin that unleashes Ollama super powers to your beloved text editor.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1905/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3870
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3870/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3870/comments
|
https://api.github.com/repos/ollama/ollama/issues/3870/events
|
https://github.com/ollama/ollama/issues/3870
| 2,260,555,441
|
I_kwDOJ0Z1Ps6GvVax
| 3,870
|
Failure to Load Llava in Ollama Windows Ver.
|
{
"login": "PasserDreamer",
"id": 30385417,
"node_id": "MDQ6VXNlcjMwMzg1NDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/30385417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PasserDreamer",
"html_url": "https://github.com/PasserDreamer",
"followers_url": "https://api.github.com/users/PasserDreamer/followers",
"following_url": "https://api.github.com/users/PasserDreamer/following{/other_user}",
"gists_url": "https://api.github.com/users/PasserDreamer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PasserDreamer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PasserDreamer/subscriptions",
"organizations_url": "https://api.github.com/users/PasserDreamer/orgs",
"repos_url": "https://api.github.com/users/PasserDreamer/repos",
"events_url": "https://api.github.com/users/PasserDreamer/events{/privacy}",
"received_events_url": "https://api.github.com/users/PasserDreamer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-04-24T07:15:20
| 2024-10-23T18:45:23
| 2024-10-23T18:45:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I encountered an issue when attempting to load the 'llava' AI model; however, others such as 'Llama3' or 'Phi3' load without any problem. Here are the details:
```
>>ollama run llava
Error: llama runner process no longer running: 1
```
server.log
```
...
clip_model_load: CLIP using CUDA backend
clip_model_load: text_encoder: 0
clip_model_load: vision_encoder: 1
clip_model_load: llava_projector: 1
clip_model_load: model size: 595.49 MB
clip_model_load: metadata size: 0.14 MB
clip_model_load: params backend buffer size = 595.49 MB (377 tensors)
cannot open model file for loading tensors
{"function":"load_model","level":"ERR","line":398,"model":"D:\\Llama\\models\\blobs\\sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539","msg":"unable to load clip model","tid":"162240","timestamp":1713942368}
time=2024-04-24T15:06:08.293+08:00 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 1 "
```
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3870/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4654
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4654/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4654/comments
|
https://api.github.com/repos/ollama/ollama/issues/4654/events
|
https://github.com/ollama/ollama/issues/4654
| 2,318,052,299
|
I_kwDOJ0Z1Ps6KKqvL
| 4,654
|
Can the model download page add a new ranking?
|
{
"login": "despairTK",
"id": 111871110,
"node_id": "U_kgDOBqsEhg",
"avatar_url": "https://avatars.githubusercontent.com/u/111871110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/despairTK",
"html_url": "https://github.com/despairTK",
"followers_url": "https://api.github.com/users/despairTK/followers",
"following_url": "https://api.github.com/users/despairTK/following{/other_user}",
"gists_url": "https://api.github.com/users/despairTK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/despairTK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/despairTK/subscriptions",
"organizations_url": "https://api.github.com/users/despairTK/orgs",
"repos_url": "https://api.github.com/users/despairTK/repos",
"events_url": "https://api.github.com/users/despairTK/events{/privacy}",
"received_events_url": "https://api.github.com/users/despairTK/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-27T01:30:29
| 2024-07-08T17:21:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://ollama.com/library
At present, the model download page offers only three sort options. Could a sort based primarily on model update time be added? That would make it easy to find models that were recently updated, rather than searching for them slowly.

| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4654/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1597
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1597/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1597/comments
|
https://api.github.com/repos/ollama/ollama/issues/1597/events
|
https://github.com/ollama/ollama/issues/1597
| 2,047,967,249
|
I_kwDOJ0Z1Ps56EYAR
| 1,597
|
PowerPC
|
{
"login": "RealMrCactus",
"id": 36554881,
"node_id": "MDQ6VXNlcjM2NTU0ODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/36554881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RealMrCactus",
"html_url": "https://github.com/RealMrCactus",
"followers_url": "https://api.github.com/users/RealMrCactus/followers",
"following_url": "https://api.github.com/users/RealMrCactus/following{/other_user}",
"gists_url": "https://api.github.com/users/RealMrCactus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RealMrCactus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RealMrCactus/subscriptions",
"organizations_url": "https://api.github.com/users/RealMrCactus/orgs",
"repos_url": "https://api.github.com/users/RealMrCactus/repos",
"events_url": "https://api.github.com/users/RealMrCactus/events{/privacy}",
"received_events_url": "https://api.github.com/users/RealMrCactus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2023-12-19T05:48:29
| 2024-03-12T16:55:14
| 2024-03-12T16:55:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is it possible to run this on PowerPC?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1597/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1597/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/7189
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7189/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7189/comments
|
https://api.github.com/repos/ollama/ollama/issues/7189/events
|
https://github.com/ollama/ollama/pull/7189
| 2,583,778,540
|
PR_kwDOJ0Z1Ps5-cVg7
| 7,189
|
Update RAM requirements for model sizes in README.md
|
{
"login": "sezer-muhammed",
"id": 74321576,
"node_id": "MDQ6VXNlcjc0MzIxNTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/74321576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sezer-muhammed",
"html_url": "https://github.com/sezer-muhammed",
"followers_url": "https://api.github.com/users/sezer-muhammed/followers",
"following_url": "https://api.github.com/users/sezer-muhammed/following{/other_user}",
"gists_url": "https://api.github.com/users/sezer-muhammed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sezer-muhammed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sezer-muhammed/subscriptions",
"organizations_url": "https://api.github.com/users/sezer-muhammed/orgs",
"repos_url": "https://api.github.com/users/sezer-muhammed/repos",
"events_url": "https://api.github.com/users/sezer-muhammed/events{/privacy}",
"received_events_url": "https://api.github.com/users/sezer-muhammed/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-10-13T08:26:22
| 2024-10-13T08:28:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7189",
"html_url": "https://github.com/ollama/ollama/pull/7189",
"diff_url": "https://github.com/ollama/ollama/pull/7189.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7189.patch",
"merged_at": null
}
|
I have used an RTX 3070 with 8 GB of VRAM and can run gemma2 9B fully on GPU, and on an RTX 4070 with 12 GB of VRAM I can run qwen2.5 14B fully on GPU. So I revised the requirements so that people with those GPUs know they can run these models.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7189/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7719
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7719/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7719/comments
|
https://api.github.com/repos/ollama/ollama/issues/7719/events
|
https://github.com/ollama/ollama/pull/7719
| 2,667,009,264
|
PR_kwDOJ0Z1Ps6CL8Va
| 7,719
|
Update README.md
|
{
"login": "adarshM84",
"id": 95633830,
"node_id": "U_kgDOBbNBpg",
"avatar_url": "https://avatars.githubusercontent.com/u/95633830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adarshM84",
"html_url": "https://github.com/adarshM84",
"followers_url": "https://api.github.com/users/adarshM84/followers",
"following_url": "https://api.github.com/users/adarshM84/following{/other_user}",
"gists_url": "https://api.github.com/users/adarshM84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adarshM84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adarshM84/subscriptions",
"organizations_url": "https://api.github.com/users/adarshM84/orgs",
"repos_url": "https://api.github.com/users/adarshM84/repos",
"events_url": "https://api.github.com/users/adarshM84/events{/privacy}",
"received_events_url": "https://api.github.com/users/adarshM84/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-18T03:44:03
| 2024-11-18T12:31:13
| 2024-11-18T12:31:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7719",
"html_url": "https://github.com/ollama/ollama/pull/7719",
"diff_url": "https://github.com/ollama/ollama/pull/7719.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7719.patch",
"merged_at": null
}
|
This Chrome extension helps users interact with Ollama through a UI. Users can download and delete models from the UI, among many other features.
|
{
"login": "adarshM84",
"id": 95633830,
"node_id": "U_kgDOBbNBpg",
"avatar_url": "https://avatars.githubusercontent.com/u/95633830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adarshM84",
"html_url": "https://github.com/adarshM84",
"followers_url": "https://api.github.com/users/adarshM84/followers",
"following_url": "https://api.github.com/users/adarshM84/following{/other_user}",
"gists_url": "https://api.github.com/users/adarshM84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adarshM84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adarshM84/subscriptions",
"organizations_url": "https://api.github.com/users/adarshM84/orgs",
"repos_url": "https://api.github.com/users/adarshM84/repos",
"events_url": "https://api.github.com/users/adarshM84/events{/privacy}",
"received_events_url": "https://api.github.com/users/adarshM84/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7719/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3459
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3459/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3459/comments
|
https://api.github.com/repos/ollama/ollama/issues/3459/events
|
https://github.com/ollama/ollama/issues/3459
| 2,220,950,575
|
I_kwDOJ0Z1Ps6EYQQv
| 3,459
|
Add Auto-Update switch to Windows client
|
{
"login": "sebastianlau",
"id": 5213667,
"node_id": "MDQ6VXNlcjUyMTM2Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5213667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebastianlau",
"html_url": "https://github.com/sebastianlau",
"followers_url": "https://api.github.com/users/sebastianlau/followers",
"following_url": "https://api.github.com/users/sebastianlau/following{/other_user}",
"gists_url": "https://api.github.com/users/sebastianlau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sebastianlau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebastianlau/subscriptions",
"organizations_url": "https://api.github.com/users/sebastianlau/orgs",
"repos_url": "https://api.github.com/users/sebastianlau/repos",
"events_url": "https://api.github.com/users/sebastianlau/events{/privacy}",
"received_events_url": "https://api.github.com/users/sebastianlau/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-04-02T16:32:46
| 2025-01-14T11:44:26
| 2024-05-21T18:28:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
It would be good to be able to disable auto-updates for the Windows client
### How should we solve this?
An environment variable, or an empty file for persistence -- a checkbox in the taskbar menu as a stretch goal
### What is the impact of not solving this?
Currently I am renaming **ollama app.exe** to **ollama app.exe.disable** and manually starting Ollama from cmd with **ollama serve** (not great across restarts)
### Anything else?
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3459/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1168
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1168/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1168/comments
|
https://api.github.com/repos/ollama/ollama/issues/1168/events
|
https://github.com/ollama/ollama/issues/1168
| 1,998,399,912
|
I_kwDOJ0Z1Ps53HSmo
| 1,168
|
Support WhisperForConditionalGeneration
|
{
"login": "OpenWaygate",
"id": 27287694,
"node_id": "MDQ6VXNlcjI3Mjg3Njk0",
"avatar_url": "https://avatars.githubusercontent.com/u/27287694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OpenWaygate",
"html_url": "https://github.com/OpenWaygate",
"followers_url": "https://api.github.com/users/OpenWaygate/followers",
"following_url": "https://api.github.com/users/OpenWaygate/following{/other_user}",
"gists_url": "https://api.github.com/users/OpenWaygate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OpenWaygate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OpenWaygate/subscriptions",
"organizations_url": "https://api.github.com/users/OpenWaygate/orgs",
"repos_url": "https://api.github.com/users/OpenWaygate/repos",
"events_url": "https://api.github.com/users/OpenWaygate/events{/privacy}",
"received_events_url": "https://api.github.com/users/OpenWaygate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 16
| 2023-11-17T06:52:34
| 2025-01-27T03:26:43
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, it would be great if Ollama could run openai/whisper; then we could chain voice and text. Is there a roadmap for this?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1168/reactions",
"total_count": 45,
"+1": 45,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1168/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/880
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/880/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/880/comments
|
https://api.github.com/repos/ollama/ollama/issues/880/events
|
https://github.com/ollama/ollama/pull/880
| 1,957,514,411
|
PR_kwDOJ0Z1Ps5djIrq
| 880
|
Temporary Workaround for GGUF v3 Support
|
{
"login": "deichbewohner",
"id": 54838329,
"node_id": "MDQ6VXNlcjU0ODM4MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/54838329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deichbewohner",
"html_url": "https://github.com/deichbewohner",
"followers_url": "https://api.github.com/users/deichbewohner/followers",
"following_url": "https://api.github.com/users/deichbewohner/following{/other_user}",
"gists_url": "https://api.github.com/users/deichbewohner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deichbewohner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deichbewohner/subscriptions",
"organizations_url": "https://api.github.com/users/deichbewohner/orgs",
"repos_url": "https://api.github.com/users/deichbewohner/repos",
"events_url": "https://api.github.com/users/deichbewohner/events{/privacy}",
"received_events_url": "https://api.github.com/users/deichbewohner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-23T16:07:29
| 2023-10-23T17:32:20
| 2023-10-23T17:32:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/880",
"html_url": "https://github.com/ollama/ollama/pull/880",
"diff_url": "https://github.com/ollama/ollama/pull/880.diff",
"patch_url": "https://github.com/ollama/ollama/pull/880.patch",
"merged_at": null
}
|
Addresses the problem raised in Issue #877.
This pull request introduces a temporary workaround to support the GGUF container specification version 3 by treating it as version 2 within the switch case block in `llm/gguf.go`. This change ensures that the new models utilizing version 3 can be processed correctly in the interim.
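For illustration, here is a minimal sketch of the idea (the type and function names below are placeholders, not the actual `llm/gguf.go` code): the version switch simply routes a v3 container through the existing v2 decoding path.
```go
package main

import "fmt"

// model is a stand-in for the decoded GGUF container (illustrative only).
type model struct{ version uint32 }

// decodeV2 is a placeholder for the existing version 2 decoding path.
func decodeV2(version uint32) (*model, error) { return &model{version: version}, nil }

// decode sketches the workaround: a GGUF container reporting version 3
// is routed through the version 2 decoder instead of being rejected.
func decode(version uint32) (*model, error) {
	switch version {
	case 2, 3: // temporary: treat v3 as v2 until proper v3 support lands
		return decodeV2(version)
	default:
		return nil, fmt.Errorf("unsupported GGUF container version: %d", version)
	}
}

func main() {
	if m, err := decode(3); err == nil {
		fmt.Println("loaded container, version", m.version)
	}
}
```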
I am hesitant to even suggest that such workarounds be merged. However, this branch could serve as a temporary solution for others until a more robust fix is deployed. I intend to use this branch to work with models quantized by TheBloke in the meantime.
I have tested it against new v3 models and v2 models:
```bash
$ ollama run agentlm-7b:Q4_K_M "Hi"
Hello! How can I assist you today?
$ ollama run samantha-1.2-mistral-7b:Q4_K_M "Hi"
Hello! I'm glad you decided to say hello. What would you like to talk about today? I'm here for a friendly conversation and to provide support whenever you need it.
$ ollama run zephyr-7b-alpha:Q4_K_M "Hi"
Hello! How can I assist you?
```
|
{
"login": "deichbewohner",
"id": 54838329,
"node_id": "MDQ6VXNlcjU0ODM4MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/54838329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deichbewohner",
"html_url": "https://github.com/deichbewohner",
"followers_url": "https://api.github.com/users/deichbewohner/followers",
"following_url": "https://api.github.com/users/deichbewohner/following{/other_user}",
"gists_url": "https://api.github.com/users/deichbewohner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deichbewohner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deichbewohner/subscriptions",
"organizations_url": "https://api.github.com/users/deichbewohner/orgs",
"repos_url": "https://api.github.com/users/deichbewohner/repos",
"events_url": "https://api.github.com/users/deichbewohner/events{/privacy}",
"received_events_url": "https://api.github.com/users/deichbewohner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/880/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8641
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8641/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8641/comments
|
https://api.github.com/repos/ollama/ollama/issues/8641/events
|
https://github.com/ollama/ollama/issues/8641
| 2,816,461,135
|
I_kwDOJ0Z1Ps6n38lP
| 8,641
|
ollama list command not listing installed models
|
{
"login": "Straykinich",
"id": 196836971,
"node_id": "U_kgDOC7t-aw",
"avatar_url": "https://avatars.githubusercontent.com/u/196836971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Straykinich",
"html_url": "https://github.com/Straykinich",
"followers_url": "https://api.github.com/users/Straykinich/followers",
"following_url": "https://api.github.com/users/Straykinich/following{/other_user}",
"gists_url": "https://api.github.com/users/Straykinich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Straykinich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Straykinich/subscriptions",
"organizations_url": "https://api.github.com/users/Straykinich/orgs",
"repos_url": "https://api.github.com/users/Straykinich/repos",
"events_url": "https://api.github.com/users/Straykinich/events{/privacy}",
"received_events_url": "https://api.github.com/users/Straykinich/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 17
| 2025-01-28T18:41:28
| 2025-01-29T10:05:46
| 2025-01-29T10:05:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I installed Ollama, then changed the drive where models are stored by following this article -
"https://medium.com/@dpn.majumder/how-to-deploy-and-experiment-with-ollama-models-on-your-local-machine-windows-34c967a7ab0e"
Then I installed deepseek-r1:7b. I have restarted and also tried reinstalling everything, but the `ollama list` command still doesn't show the installed models.
The models themselves run correctly.
Any solution?
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.7
|
{
"login": "Straykinich",
"id": 196836971,
"node_id": "U_kgDOC7t-aw",
"avatar_url": "https://avatars.githubusercontent.com/u/196836971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Straykinich",
"html_url": "https://github.com/Straykinich",
"followers_url": "https://api.github.com/users/Straykinich/followers",
"following_url": "https://api.github.com/users/Straykinich/following{/other_user}",
"gists_url": "https://api.github.com/users/Straykinich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Straykinich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Straykinich/subscriptions",
"organizations_url": "https://api.github.com/users/Straykinich/orgs",
"repos_url": "https://api.github.com/users/Straykinich/repos",
"events_url": "https://api.github.com/users/Straykinich/events{/privacy}",
"received_events_url": "https://api.github.com/users/Straykinich/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8641/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8637
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8637/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8637/comments
|
https://api.github.com/repos/ollama/ollama/issues/8637/events
|
https://github.com/ollama/ollama/issues/8637
| 2,815,815,322
|
I_kwDOJ0Z1Ps6n1e6a
| 8,637
|
can't install deepseek 1.7b on windows
|
{
"login": "passtock",
"id": 39671643,
"node_id": "MDQ6VXNlcjM5NjcxNjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/39671643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/passtock",
"html_url": "https://github.com/passtock",
"followers_url": "https://api.github.com/users/passtock/followers",
"following_url": "https://api.github.com/users/passtock/following{/other_user}",
"gists_url": "https://api.github.com/users/passtock/gists{/gist_id}",
"starred_url": "https://api.github.com/users/passtock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/passtock/subscriptions",
"organizations_url": "https://api.github.com/users/passtock/orgs",
"repos_url": "https://api.github.com/users/passtock/repos",
"events_url": "https://api.github.com/users/passtock/events{/privacy}",
"received_events_url": "https://api.github.com/users/passtock/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
| null |
[] | null | 6
| 2025-01-28T14:18:45
| 2025-01-29T23:26:51
| 2025-01-29T23:26:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
While trying to run `ollama run deepseek-r1:7b`, the download repeatedly fails at about 6%. Every time I try to run deepseek,
I get an error saying: max retries exceeded: EOF
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
latest
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8637/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2576
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2576/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2576/comments
|
https://api.github.com/repos/ollama/ollama/issues/2576/events
|
https://github.com/ollama/ollama/pull/2576
| 2,141,050,755
|
PR_kwDOJ0Z1Ps5nN_3T
| 2,576
|
Vulkan support: WIP, do not merge
|
{
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/followers",
"following_url": "https://api.github.com/users/ddpasa/following{/other_user}",
"gists_url": "https://api.github.com/users/ddpasa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddpasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddpasa/subscriptions",
"organizations_url": "https://api.github.com/users/ddpasa/orgs",
"repos_url": "https://api.github.com/users/ddpasa/repos",
"events_url": "https://api.github.com/users/ddpasa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddpasa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-18T15:46:29
| 2024-02-18T15:47:36
| 2024-02-18T15:47:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2576",
"html_url": "https://github.com/ollama/ollama/pull/2576",
"diff_url": "https://github.com/ollama/ollama/pull/2576.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2576.patch",
"merged_at": null
}
|
This is a very preliminary ~~implementation~~ hack of Vulkan support, which llama.cpp recently added.
This is not intended to be merged; the code is far from ready. I just want to get feedback from the ollama devs and some pointers.
I tested this on an Intel Iris Plus G7 GPU on Linux. Phi-2 works fine with 20%-50% speedup compared to CPU with VNNI enabled. It behaves incorrectly for multimodal models such as Bakllava, which I'm still debugging.
I think I need to pull the latest llama.cpp commits to make it work properly, but updating the submodule is throwing bizarre compile time errors.
|
{
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/followers",
"following_url": "https://api.github.com/users/ddpasa/following{/other_user}",
"gists_url": "https://api.github.com/users/ddpasa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddpasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddpasa/subscriptions",
"organizations_url": "https://api.github.com/users/ddpasa/orgs",
"repos_url": "https://api.github.com/users/ddpasa/repos",
"events_url": "https://api.github.com/users/ddpasa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddpasa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2576/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5807
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5807/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5807/comments
|
https://api.github.com/repos/ollama/ollama/issues/5807/events
|
https://github.com/ollama/ollama/pull/5807
| 2,420,582,011
|
PR_kwDOJ0Z1Ps51-IzE
| 5,807
|
Add temporary patch for the new Mistral Tekken tokenizer
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-20T05:25:27
| 2024-07-20T17:41:23
| 2024-07-20T17:41:22
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5807",
"html_url": "https://github.com/ollama/ollama/pull/5807",
"diff_url": "https://github.com/ollama/ollama/pull/5807.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5807.patch",
"merged_at": "2024-07-20T17:41:22"
}
|
Adds a temporary patch with the Tekken pre-tokenizer (added in https://github.com/ggerganov/llama.cpp/pull/8579) until the submodule is updated
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5807/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3222
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3222/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3222/comments
|
https://api.github.com/repos/ollama/ollama/issues/3222/events
|
https://github.com/ollama/ollama/issues/3222
| 2,191,953,757
|
I_kwDOJ0Z1Ps6Cpo9d
| 3,222
|
Support Grok
|
{
"login": "FloLecoeuche",
"id": 2616127,
"node_id": "MDQ6VXNlcjI2MTYxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2616127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FloLecoeuche",
"html_url": "https://github.com/FloLecoeuche",
"followers_url": "https://api.github.com/users/FloLecoeuche/followers",
"following_url": "https://api.github.com/users/FloLecoeuche/following{/other_user}",
"gists_url": "https://api.github.com/users/FloLecoeuche/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FloLecoeuche/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FloLecoeuche/subscriptions",
"organizations_url": "https://api.github.com/users/FloLecoeuche/orgs",
"repos_url": "https://api.github.com/users/FloLecoeuche/repos",
"events_url": "https://api.github.com/users/FloLecoeuche/events{/privacy}",
"received_events_url": "https://api.github.com/users/FloLecoeuche/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 21
| 2024-03-18T11:31:44
| 2024-09-05T14:44:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
Please add the [xai-org/grok-1](https://github.com/xai-org/grok-1) model to Ollama.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3222/reactions",
"total_count": 82,
"+1": 67,
"-1": 0,
"laugh": 15,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3222/timeline
| null | null | false
|