| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/3186
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3186/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3186/comments
|
https://api.github.com/repos/ollama/ollama/issues/3186/events
|
https://github.com/ollama/ollama/issues/3186
| 2,190,222,208
|
I_kwDOJ0Z1Ps6CjCOA
| 3,186
|
Support alternate symlink path for ARM Mac
|
{
"login": "vassilmladenov",
"id": 5396637,
"node_id": "MDQ6VXNlcjUzOTY2Mzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5396637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vassilmladenov",
"html_url": "https://github.com/vassilmladenov",
"followers_url": "https://api.github.com/users/vassilmladenov/followers",
"following_url": "https://api.github.com/users/vassilmladenov/following{/other_user}",
"gists_url": "https://api.github.com/users/vassilmladenov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vassilmladenov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vassilmladenov/subscriptions",
"organizations_url": "https://api.github.com/users/vassilmladenov/orgs",
"repos_url": "https://api.github.com/users/vassilmladenov/repos",
"events_url": "https://api.github.com/users/vassilmladenov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vassilmladenov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-03-16T19:54:29
| 2024-03-18T09:10:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
If you install Ollama with `brew install --cask ollama` on ARM, it creates a symlink at `/opt/homebrew/bin/ollama`, but the app still wants to run its install script to put a symlink at `/usr/local/bin/ollama`, I think because of this line:
https://github.com/ollama/ollama/blob/main/macapp/src/install.ts#L9
This causes it to pop up this window
<img width="512" alt="Screenshot 2024-03-16 at 12 52 24 PM" src="https://github.com/ollama/ollama/assets/5396637/23a169bb-1664-4b03-8b59-a2a993b7bc99">
even though `ollama run` is working fine in the Terminal.
### How should we solve this?
Some kind of plist that lets you specify the Ollama symlink path, or even just hardcoding `/opt/homebrew/bin/ollama` as a possibility, since that's the default Homebrew path on ARM.
### What is the impact of not solving this?
Users have to close the "Welcome to Ollama" window every time it pops up.
### Anything else?
_No response_
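The proposed fix — accepting `/opt/homebrew/bin/ollama` as a valid install location — could be sketched as follows. This is a minimal illustration in Python, not the actual `macapp/src/install.ts` code; the function name and path list are assumptions taken from the issue text.

```python
import os

# Candidate symlink locations: /usr/local/bin is the Intel-era
# default, /opt/homebrew/bin is Homebrew's prefix on ARM Macs.
CANDIDATE_PATHS = [
    "/usr/local/bin/ollama",
    "/opt/homebrew/bin/ollama",
]

def find_existing_symlink(paths=CANDIDATE_PATHS):
    """Return the first path that already resolves to an ollama
    binary (or symlink), or None if the install prompt is needed."""
    for path in paths:
        if os.path.islink(path) or os.path.isfile(path):
            return path
    return None
```

With a check like this, the "Welcome to Ollama" prompt would be skipped whenever either location already exists.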
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3186/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3186/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7158
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7158/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7158/comments
|
https://api.github.com/repos/ollama/ollama/issues/7158/events
|
https://github.com/ollama/ollama/pull/7158
| 2,577,269,982
|
PR_kwDOJ0Z1Ps5-JVDz
| 7,158
|
runner.go: Handle truncation of tokens for stop sequences
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-10T01:03:55
| 2024-10-10T03:39:05
| 2024-10-10T03:39:04
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7158",
"html_url": "https://github.com/ollama/ollama/pull/7158",
"diff_url": "https://github.com/ollama/ollama/pull/7158.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7158.patch",
"merged_at": "2024-10-10T03:39:04"
}
|
When a single token contains both text to be returned and a stop sequence, this causes an out-of-bounds error when we update the cache to match our text. This is because we currently assume that removing the stop sequence will consume at least one token.
This also inverts the logic to deal with positive numbers, rather than a value to be subtracted, which is easier to reason about.
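The failure mode and the inverted (count-what-to-keep) logic can be illustrated with a small sketch. This is a toy Python model, not the runner's actual Go code; the function and variable names are illustrative.

```python
def truncate_at_stop(pieces, stop):
    """pieces: decoded text of each generated token, in order.
    Returns (kept_text, tokens_to_keep). Handles the case where
    the stop sequence sits wholly inside one token, i.e. trimming
    the stop text removes zero complete tokens."""
    text = "".join(pieces)
    idx = text.find(stop)
    if idx == -1:
        return text, len(pieces)
    kept = text[:idx]
    # Count how many whole tokens fit inside the kept text --
    # a positive count, rather than a number to subtract.
    consumed, tokens_to_keep = 0, 0
    for piece in pieces:
        if consumed + len(piece) <= idx:
            consumed += len(piece)
            tokens_to_keep += 1
        else:
            break
    return kept, tokens_to_keep
```

In the problematic case, the final token carries both returnable text and the stop sequence, so the number of fully-kept tokens is smaller than the naive "length minus stop tokens" calculation would assume.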
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7158/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2779
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2779/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2779/comments
|
https://api.github.com/repos/ollama/ollama/issues/2779/events
|
https://github.com/ollama/ollama/issues/2779
| 2,156,552,252
|
I_kwDOJ0Z1Ps6AimA8
| 2,779
|
Feature request: Additional Console Outputs for more efficient logging and debugging
|
{
"login": "LumiWasTaken",
"id": 49376128,
"node_id": "MDQ6VXNlcjQ5Mzc2MTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/49376128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LumiWasTaken",
"html_url": "https://github.com/LumiWasTaken",
"followers_url": "https://api.github.com/users/LumiWasTaken/followers",
"following_url": "https://api.github.com/users/LumiWasTaken/following{/other_user}",
"gists_url": "https://api.github.com/users/LumiWasTaken/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LumiWasTaken/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LumiWasTaken/subscriptions",
"organizations_url": "https://api.github.com/users/LumiWasTaken/orgs",
"repos_url": "https://api.github.com/users/LumiWasTaken/repos",
"events_url": "https://api.github.com/users/LumiWasTaken/events{/privacy}",
"received_events_url": "https://api.github.com/users/LumiWasTaken/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-02-27T13:08:48
| 2024-07-25T10:15:16
| 2024-07-25T10:15:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Heya, I have the common issue that, for example, when using LLaVA 34B on a small-ish GPU with CPU offloading, it sometimes gets stuck.
I can't really trace the issue anywhere: is it the BLAS batch processing, is it an OOM error, what is it?
```
key clip.vision.image_grid_pinpoints not found in file
key clip.vision.mm_patch_merge_type not found in file
key clip.vision.image_crop_resolution not found in file
```
That's the most I can get out of a log, but that's about it... no metrics, no nothing :/
|
{
"login": "LumiWasTaken",
"id": 49376128,
"node_id": "MDQ6VXNlcjQ5Mzc2MTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/49376128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LumiWasTaken",
"html_url": "https://github.com/LumiWasTaken",
"followers_url": "https://api.github.com/users/LumiWasTaken/followers",
"following_url": "https://api.github.com/users/LumiWasTaken/following{/other_user}",
"gists_url": "https://api.github.com/users/LumiWasTaken/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LumiWasTaken/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LumiWasTaken/subscriptions",
"organizations_url": "https://api.github.com/users/LumiWasTaken/orgs",
"repos_url": "https://api.github.com/users/LumiWasTaken/repos",
"events_url": "https://api.github.com/users/LumiWasTaken/events{/privacy}",
"received_events_url": "https://api.github.com/users/LumiWasTaken/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2779/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/500
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/500/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/500/comments
|
https://api.github.com/repos/ollama/ollama/issues/500/events
|
https://github.com/ollama/ollama/pull/500
| 1,888,511,084
|
PR_kwDOJ0Z1Ps5Z7A9e
| 500
|
use cmake toolchain to configure build
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-09-09T00:36:39
| 2023-09-11T16:39:42
| 2023-09-11T16:39:37
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/500",
"html_url": "https://github.com/ollama/ollama/pull/500",
"diff_url": "https://github.com/ollama/ollama/pull/500.diff",
"patch_url": "https://github.com/ollama/ollama/pull/500.patch",
"merged_at": null
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/500/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5921
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5921/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5921/comments
|
https://api.github.com/repos/ollama/ollama/issues/5921/events
|
https://github.com/ollama/ollama/issues/5921
| 2,428,198,576
|
I_kwDOJ0Z1Ps6Qu16w
| 5,921
|
failed installation script on ubuntu 24
|
{
"login": "vikyw89",
"id": 112059651,
"node_id": "U_kgDOBq3lAw",
"avatar_url": "https://avatars.githubusercontent.com/u/112059651?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikyw89",
"html_url": "https://github.com/vikyw89",
"followers_url": "https://api.github.com/users/vikyw89/followers",
"following_url": "https://api.github.com/users/vikyw89/following{/other_user}",
"gists_url": "https://api.github.com/users/vikyw89/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikyw89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikyw89/subscriptions",
"organizations_url": "https://api.github.com/users/vikyw89/orgs",
"repos_url": "https://api.github.com/users/vikyw89/repos",
"events_url": "https://api.github.com/users/vikyw89/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikyw89/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6678628138,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjhPHKg",
"url": "https://api.github.com/repos/ollama/ollama/labels/install",
"name": "install",
"color": "E0B88D",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-07-24T18:15:14
| 2024-07-26T16:49:01
| 2024-07-26T16:49:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When running the installation script, this error occurred:
```
$ curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%######################################################################### 100.0%
>>> Installing ollama to /usr/local/bin...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> Downloading AMD GPU dependencies...
chmod: cannot access '/usr/share/ollama': No such file or directory
```
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
_No response_
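The log suggests `chmod` ran on a directory that was never created. A defensive pattern for a fix is to create the directory before changing its mode — sketched here in Python; the path comes from the log above, and this is not the actual `install.sh` code.

```python
import os

def ensure_dir_with_mode(path, mode=0o755):
    """Create the directory (and any missing parents) first, then
    set its mode -- avoids "cannot access ...: No such file or
    directory" when an earlier install step skipped creating it."""
    os.makedirs(path, exist_ok=True)
    os.chmod(path, mode)
    return path
```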
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5921/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2329
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2329/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2329/comments
|
https://api.github.com/repos/ollama/ollama/issues/2329/events
|
https://github.com/ollama/ollama/pull/2329
| 2,115,134,062
|
PR_kwDOJ0Z1Ps5l17_y
| 2,329
|
docs: add tenere to terminal clients
|
{
"login": "pythops",
"id": 57548585,
"node_id": "MDQ6VXNlcjU3NTQ4NTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/57548585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pythops",
"html_url": "https://github.com/pythops",
"followers_url": "https://api.github.com/users/pythops/followers",
"following_url": "https://api.github.com/users/pythops/following{/other_user}",
"gists_url": "https://api.github.com/users/pythops/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pythops/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pythops/subscriptions",
"organizations_url": "https://api.github.com/users/pythops/orgs",
"repos_url": "https://api.github.com/users/pythops/repos",
"events_url": "https://api.github.com/users/pythops/events{/privacy}",
"received_events_url": "https://api.github.com/users/pythops/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-02T15:02:06
| 2024-02-20T04:13:03
| 2024-02-20T04:13:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2329",
"html_url": "https://github.com/ollama/ollama/pull/2329",
"diff_url": "https://github.com/ollama/ollama/pull/2329.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2329.patch",
"merged_at": "2024-02-20T04:13:03"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2329/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2535
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2535/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2535/comments
|
https://api.github.com/repos/ollama/ollama/issues/2535/events
|
https://github.com/ollama/ollama/issues/2535
| 2,137,859,838
|
I_kwDOJ0Z1Ps5_bSb-
| 2,535
|
how to set up an ollama model storage directory
|
{
"login": "bangundwir",
"id": 17474376,
"node_id": "MDQ6VXNlcjE3NDc0Mzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/17474376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bangundwir",
"html_url": "https://github.com/bangundwir",
"followers_url": "https://api.github.com/users/bangundwir/followers",
"following_url": "https://api.github.com/users/bangundwir/following{/other_user}",
"gists_url": "https://api.github.com/users/bangundwir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bangundwir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bangundwir/subscriptions",
"organizations_url": "https://api.github.com/users/bangundwir/orgs",
"repos_url": "https://api.github.com/users/bangundwir/repos",
"events_url": "https://api.github.com/users/bangundwir/events{/privacy}",
"received_events_url": "https://api.github.com/users/bangundwir/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-02-16T04:40:48
| 2024-02-18T21:58:49
| 2024-02-18T06:14:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Make it so that you can move the model storage directory in Ollama on Windows.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2535/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2535/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4345
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4345/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4345/comments
|
https://api.github.com/repos/ollama/ollama/issues/4345/events
|
https://github.com/ollama/ollama/issues/4345
| 2,290,694,794
|
I_kwDOJ0Z1Ps6IiTqK
| 4,345
|
Feature Request: Support asynchronous pull API endpoint
|
{
"login": "moracca",
"id": 7213746,
"node_id": "MDQ6VXNlcjcyMTM3NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7213746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moracca",
"html_url": "https://github.com/moracca",
"followers_url": "https://api.github.com/users/moracca/followers",
"following_url": "https://api.github.com/users/moracca/following{/other_user}",
"gists_url": "https://api.github.com/users/moracca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moracca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moracca/subscriptions",
"organizations_url": "https://api.github.com/users/moracca/orgs",
"repos_url": "https://api.github.com/users/moracca/repos",
"events_url": "https://api.github.com/users/moracca/events{/privacy}",
"received_events_url": "https://api.github.com/users/moracca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-11T05:58:40
| 2024-11-06T17:38:04
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be helpful if we could instruct ollama to download a model without having to wait for it to complete, since models can be quite large. Ideally, subsequent requests to pull the same model would avoid doing anything (maybe return the current status message?). Eventually, once it finishes downloading, it shows up in the /api/tags output.
Thanks!
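The behavior requested above can be approximated client-side today. A minimal sketch (not part of ollama): run the blocking pull in a background thread and let callers poll its status; `pull_fn` is assumed to wrap a blocking call such as `POST /api/pull`.

```python
import threading

class AsyncPuller:
    """Sketch: fire-and-forget model pulls with deduplication and polling."""

    def __init__(self, pull_fn):
        self._pull_fn = pull_fn      # blocking pull, e.g. wraps POST /api/pull
        self._threads = {}
        self._status = {}
        self._lock = threading.Lock()

    def pull(self, model):
        """Start a pull if one is not already running; return current status."""
        with self._lock:
            t = self._threads.get(model)
            if t is not None and t.is_alive():
                return self._status[model]   # dedupe: pull already in flight
            self._status[model] = "pulling"
            t = threading.Thread(target=self._run, args=(model,), daemon=True)
            self._threads[model] = t
            t.start()
            return "pulling"

    def _run(self, model):
        try:
            self._pull_fn(model)
            status = "success"
        except Exception as exc:
            status = f"error: {exc}"
        with self._lock:
            self._status[model] = status

    def status(self, model):
        with self._lock:
            return self._status.get(model, "unknown")
```

A server-side implementation would follow the same shape: a second request for an in-flight model returns its status instead of starting a new download.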
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4345/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1341
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1341/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1341/comments
|
https://api.github.com/repos/ollama/ollama/issues/1341/events
|
https://github.com/ollama/ollama/issues/1341
| 2,020,342,004
|
I_kwDOJ0Z1Ps54a_j0
| 1,341
|
MultiGPU: not splitting model to multiple GPUs - CUDA out of memory
|
{
"login": "chymian",
"id": 1899961,
"node_id": "MDQ6VXNlcjE4OTk5NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1899961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chymian",
"html_url": "https://github.com/chymian",
"followers_url": "https://api.github.com/users/chymian/followers",
"following_url": "https://api.github.com/users/chymian/following{/other_user}",
"gists_url": "https://api.github.com/users/chymian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chymian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chymian/subscriptions",
"organizations_url": "https://api.github.com/users/chymian/orgs",
"repos_url": "https://api.github.com/users/chymian/repos",
"events_url": "https://api.github.com/users/chymian/events{/privacy}",
"received_events_url": "https://api.github.com/users/chymian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2023-12-01T08:12:03
| 2024-05-09T22:25:10
| 2024-05-09T22:25:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Trying to load a model (deepseek-coder) onto 2 GPUs fails with an OOM error.
__the setup:__
Linux: Ubuntu 22.04
HW: i5-7400 (AVX, AVX2), 32 GB RAM
GPU: 4 x 3070 8GB
ollama: 0.1.12, running in Docker
nvidia-smi from within the container shows 2 x 3070.
Because of the large context size, I want to split the model across 2 GPUs, but it never uses the second one and fails after hitting OOM on the first GPU.
__modelfile:__
```
ollama show --modelfile coder-16k
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM coder-16k:latest
FROM deepseek-coder:6.7b-base-q5_0
TEMPLATE """{{ .Prompt }}"""
PARAMETER num_ctx 16384
PARAMETER num_gpu 128
PARAMETER num_predict 756
PARAMETER seed 42
PARAMETER temperature 0.1
PARAMETER top_k 22
PARAMETER top_p 0.5
```
__AVX:__
It does not recognize/report AVX2, as you can see in the log.
__HINT:__
`num_gpu`, describing "layers to offload", is very misleading: in most other loaders (fastchat, ooba's, vLLM, etc.) a parameter of that name describes the number of GPUs to use. IMHO, parameter names like these would be more telling:
- tensor_split: number of GPUs to use
- offload_layers: number of layers to offload
- gpus: which GPUs to use, like `CUDA_VISIBLE_DEVICES`
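For reference, the distinction drawn above maps onto existing llama.cpp options. A sketch (the left-hand names are this issue's proposals, not real parameters; the right-hand flags and env var are real llama.cpp/CUDA knobs):

```python
# Hypothetical proposed names -> the llama.cpp / CUDA mechanisms they echo.
SUGGESTED_TO_LLAMACPP = {
    "tensor_split":   "--tensor-split",        # how the model is split across GPUs
    "offload_layers": "--n-gpu-layers",        # how many layers are offloaded
    "gpus":           "CUDA_VISIBLE_DEVICES",  # which devices are usable at all
}
```

Keeping these three concerns separate would avoid `num_gpu` reading as "number of GPUs" when it actually means "number of offloaded layers".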
Here is the log of the failure; this is the part where it OOM-errors on GPU 0 and starts loading to CPU:
```log
...
ollama-GPU23 | llm_load_print_meta: LF token = 126 'Ä'
ollama-GPU23 | llm_load_tensors: ggml ctx size = 0.11 MiB
ollama-GPU23 | llm_load_tensors: using CUDA for GPU acceleration
ollama-GPU23 | ggml_cuda_set_main_device: using device 0 (NVIDIA GeForce RTX 3070) as main device
ollama-GPU23 | llm_load_tensors: mem required = 86.73 MiB
ollama-GPU23 | llm_load_tensors: offloading 32 repeating layers to GPU
ollama-GPU23 | llm_load_tensors: offloading non-repeating layers to GPU
ollama-GPU23 | llm_load_tensors: offloaded 35/35 layers to GPU
ollama-GPU23 | llm_load_tensors: VRAM used: 4350.38 MiB
ollama-GPU23 | ..................................................................................................
ollama-GPU23 | llama_new_context_with_model: n_ctx = 16384
ollama-GPU23 | llama_new_context_with_model: freq_base = 100000.0
ollama-GPU23 | llama_new_context_with_model: freq_scale = 0.25
ollama-GPU23 | llama_kv_cache_init: offloading v cache to GPU
ollama-GPU23 |
ollama-GPU23 | CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7957: out of memory
ollama-GPU23 | current device: 0
ollama-GPU23 | 2023/12/01 07:27:54 llama.go:436: 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7957: out
of memory
ollama-GPU23 | current device: 0
ollama-GPU23 | 2023/12/01 07:27:54 llama.go:444: error starting llama runner: llama runner process has terminated
ollama-GPU23 | 2023/12/01 07:27:54 llama.go:510: llama runner stopped successfully
ollama-GPU23 | 2023/12/01 07:27:54 llama.go:421: starting llama runner
ollama-GPU23 | 2023/12/01 07:27:54 llama.go:479: waiting for llama runner to start responding
ollama-GPU23 | {"timestamp":1701415674,"level":"WARNING","function":"server_params_parse","line":2035,"message":"Not compiled with
GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_
layers":-1}
ollama-GPU23 | {"timestamp":1701415674,"level":"INFO","function":"main","line":2534,"message":"build info","build":375,"commit":"96
56026"}
ollama-GPU23 | {"timestamp":1701415674,"level":"INFO","function":"main","line":2537,"message":"system info","n_threads":4,"n_thread
s_batch":-1,"total_threads":4,"system_info":"AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON =
0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
ollama-GPU23 | llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256:
5d80d0c539a5c90b360fbb2bc49261f3e28fae0e937452aea3948788c40cbba7 (version GGUF V2)
ollama-GPU23 |
...
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1341/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/1341/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1865
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1865/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1865/comments
|
https://api.github.com/repos/ollama/ollama/issues/1865/events
|
https://github.com/ollama/ollama/issues/1865
| 2,072,299,939
|
I_kwDOJ0Z1Ps57hMmj
| 1,865
|
Add GPU support for CUDA Compute Capability 5.0 and 5.2 cards
|
{
"login": "Subie1",
"id": 133152722,
"node_id": "U_kgDOB--_0g",
"avatar_url": "https://avatars.githubusercontent.com/u/133152722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Subie1",
"html_url": "https://github.com/Subie1",
"followers_url": "https://api.github.com/users/Subie1/followers",
"following_url": "https://api.github.com/users/Subie1/following{/other_user}",
"gists_url": "https://api.github.com/users/Subie1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Subie1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Subie1/subscriptions",
"organizations_url": "https://api.github.com/users/Subie1/orgs",
"repos_url": "https://api.github.com/users/Subie1/repos",
"events_url": "https://api.github.com/users/Subie1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Subie1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2024-01-09T12:39:57
| 2024-12-10T19:30:15
| 2024-01-27T18:28:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The `ollama serve` command runs normally and detects my GPU:
```
2024/01/09 14:37:45 gpu.go:34: Detecting GPU type
ama 2024/01/09 14:37:45 gpu.go:53: Nvidia GPU detected
ggml_init_cublas: found 1 CUDA devices:
Device 0: Quadro M1000M, compute capability 5.0
```
Lines that lead me to believe it's loading CUDA:
```
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 35.23 MiB
llm_load_tensors: offloading 22 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 23/23 layers to GPU
llm_load_tensors: VRAM used: 571.37 MiB
⠴ .
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: VRAM kv self = 44.00 MB
llama_new_context_with_model: KV self size = 44.00 MiB, K (f16): 22.00 MiB, V (f16): 22.00 MiB
⠦ llama_build_graph: non-view tensors processed: 466/466
llama_new_context_with_model: compute buffer total size = 147.19 MiB
⠧ llama_new_context_with_model: VRAM scratch buffer: 144.00 MiB
llama_new_context_with_model: total VRAM used: 759.38 MiB (model: 571.37 MiB, context: 188.00 MiB)
```
Then, once I run a model, it starts normally but crashes before finishing with this error:
```
CUDA error 209 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:7801: no kernel image is available for execution on the device
current device: 0
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:7801: !"CUDA error"
⠼ SIGABRT: abort
PC=0x7f7fa7b7b9fc m=11 sigcode=18446744073709551610
signal arrived during cgo execution
```
It then continues with this huge stack trace:
```
CUDA error 209 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:7801: no kernel image is available for execution on the device
current device: 0
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:7801: !"CUDA error"
⠋ SIGABRT: abort
PC=0x7fa1d727b9fc m=4 sigcode=18446744073709551610
signal arrived during cgo execution
goroutine 10 [syscall]:
runtime.cgocall(0x9c1470, 0xc0004ca608)
/usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc0004ca5e0 sp=0xc0004ca5a8 pc=0x4291ab
github.com/jmorganca/ollama/llm._Cfunc_dynamic_shim_llama_server_init({0x7fa1840014a0, 0x7fa13ce7b2e0, 0x7fa13ce6da80, 0x7fa13ce71270, 0x7fa13ce83770, 0x7fa13ce78900, 0x7fa13ce71430, 0x7fa13ce6db00, 0x7fa13ce7ea00, 0x7fa13ce7e5b0, ...}, ...)
_cgo_gotypes.go:287 +0x45 fp=0xc0004ca608 sp=0xc0004ca5e0 pc=0x7cdd85
github.com/jmorganca/ollama/llm.(*shimExtServer).llama_server_init.func1(0x45971b?, 0x80?, 0x80?)
/go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:40 +0xec fp=0xc0004ca6f8 sp=0xc0004ca608 pc=0x7d326c
github.com/jmorganca/ollama/llm.(*shimExtServer).llama_server_init(0xc0000982d0?, 0x0?, 0x200?)
/go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:40 +0x13 fp=0xc0004ca720 sp=0xc0004ca6f8 pc=0x7d3153
github.com/jmorganca/ollama/llm.newExtServer({0x17842518, 0xc0000f8360}, {0xc0004b6230, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/ext_server_common.go:146 +0x7f8 fp=0xc0004ca9a8 sp=0xc0004ca720 pc=0x7cf3b8
github.com/jmorganca/ollama/llm.newDynamicShimExtServer({0xc00047cf00, 0x2b}, {0xc0004b6230, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:93 +0x54f fp=0xc0004cabd0 sp=0xc0004ca9a8 pc=0x7d45af
github.com/jmorganca/ollama/llm.newLlmServer({0xc3d801, 0x4}, {0xc0004b6230, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:86 +0x16b fp=0xc0004cad60 sp=0xc0004cabd0 pc=0x7ccecb
github.com/jmorganca/ollama/llm.New({0xc00048e180?, 0x0?}, {0xc0004b6230, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:76 +0x233 fp=0xc0004caef0 sp=0xc0004cad60 pc=0x7ccb33
github.com/jmorganca/ollama/server.load(0xc0004d2000?, 0xc0004d2000, {{0x0, 0x800, 0x200, 0x1, 0xffffffffffffffff, 0x0, 0x0, 0x1, ...}, ...}, ...)
/go/src/github.com/jmorganca/ollama/server/routes.go:84 +0x425 fp=0xc0004cb0a0 sp=0xc0004caef0 pc=0x99d825
github.com/jmorganca/ollama/server.GenerateHandler(0xc000522200)
/go/src/github.com/jmorganca/ollama/server/routes.go:191 +0x8c8 fp=0xc0004cb748 sp=0xc0004cb0a0 pc=0x99e5c8
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func1(0xc000522200)
/go/src/github.com/jmorganca/ollama/server/routes.go:876 +0x68 fp=0xc0004cb780 sp=0xc0004cb748 pc=0x9a79c8
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc000522200)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 +0x7a fp=0xc0004cb7d0 sp=0xc0004cb780 pc=0x9813ba
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc000522200)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 +0xde fp=0xc0004cb980 sp=0xc0004cb7d0 pc=0x98055e
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc0000d1ba0, 0xc000522200)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 +0x65b fp=0xc0004cbb08 sp=0xc0004cb980 pc=0x97f61b
github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc0000d1ba0, {0x1783c860?, 0xc00041e0e0}, 0xc000522300)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 +0x1dd fp=0xc0004cbb48 sp=0xc0004cbb08 pc=0x97eddd
net/http.serverHandler.ServeHTTP({0x1783ab80?}, {0x1783c860?, 0xc00041e0e0?}, 0x6?)
/usr/local/go/src/net/http/server.go:2938 +0x8e fp=0xc0004cbb78 sp=0xc0004cbb48 pc=0x6ee3ee
net/http.(*conn).serve(0xc0004b4120, {0x1783ded8, 0xc0005800f0})
/usr/local/go/src/net/http/server.go:2009 +0x5f4 fp=0xc0004cbfb8 sp=0xc0004cbb78 pc=0x6ea2d4
net/http.(*Server).Serve.func3()
/usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc0004cbfe0 sp=0xc0004cbfb8 pc=0x6eec08
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004cbfe8 sp=0xc0004cbfe0 pc=0x48d961
created by net/http.(*Server).Serve in goroutine 1
/usr/local/go/src/net/http/server.go:3086 +0x5cb
goroutine 1 [IO wait]:
runtime.gopark(0x4a05b0?, 0xc000533828?, 0x78?, 0x38?, 0x5166dd?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000615808 sp=0xc0006157e8 pc=0x45de8e
runtime.netpollblock(0x48b9d2?, 0x428946?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc000615840 sp=0xc000615808 pc=0x456917
internal/poll.runtime_pollWait(0x7fa18ddcbe80, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc000615860 sp=0xc000615840 pc=0x4880a5
internal/poll.(*pollDesc).wait(0xc000462000?, 0x4?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000615888 sp=0xc000615860 pc=0x50f327
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000462000)
/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac fp=0xc000615930 sp=0xc000615888 pc=0x51480c
net.(*netFD).accept(0xc000462000)
/usr/local/go/src/net/fd_unix.go:172 +0x29 fp=0xc0006159e8 sp=0xc000615930 pc=0x58b3e9
net.(*TCPListener).accept(0xc00043b5a0)
/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e fp=0xc000615a10 sp=0xc0006159e8 pc=0x5a01fe
)
⠙ /usr/local/go/src/net/tcpsock.go:315 +0x30 fp=0xc000615a40 sp=0xc000615a10 pc=0x59f3b0
net/http.(*onceCloseListener).Accept(0xc0004b4120?)
<autogenerated>:1 +0x24 fp=0xc000615a58 sp=0xc000615a40 pc=0x711184
net/http.(*Server).Serve(0xc000366ff0, {0x1783c650, 0xc00043b5a0})
/usr/local/go/src/net/http/server.go:3056 +0x364 fp=0xc000615b88 sp=0xc000615a58 pc=0x6ee844
github.com/jmorganca/ollama/server.Serve({0x1783c650, 0xc00043b5a0})
/go/src/github.com/jmorganca/ollama/server/routes.go:956 +0x389 fp=0xc000615c98 sp=0xc000615b88 pc=0x9a7da9
github.com/jmorganca/ollama/cmd.RunServer(0xc000460300?, {0x17d9db40?, 0x4?, 0xc3d4f5?})
/go/src/github.com/jmorganca/ollama/cmd/cmd.go:634 +0x199 fp=0xc000615d30 sp=0xc000615c98 pc=0x9b9f99
github.com/spf13/cobra.(*Command).execute(0xc000419800, {0x17d9db40, 0x0, 0x0})
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x87c fp=0xc000615e68 sp=0xc000615d30 pc=0x783fbc
github.com/spf13/cobra.(*Command).ExecuteC(0xc000418c00)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc000615f20 sp=0xc000615e68 pc=0x7847e5
github.com/spf13/cobra.(*Command).Execute(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
/go/src/github.com/jmorganca/ollama/main.go:11 +0x4d fp=0xc000615f40 sp=0xc000615f20 pc=0x9c04cd
runtime.main()
/usr/local/go/src/runtime/proc.go:267 +0x2bb fp=0xc000615fe0 sp=0xc000615f40 pc=0x45da3b
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000615fe8 sp=0xc000615fe0 pc=0x48d961
goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000058fa8 sp=0xc000058f88 pc=0x45de8e
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.forcegchelper()
/usr/local/go/src/runtime/proc.go:322 +0xb3 fp=0xc000058fe0 sp=0xc000058fa8 pc=0x45dd13
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000058fe8 sp=0xc000058fe0 pc=0x48d961
created by runtime.init.6 in goroutine 1
/usr/local/go/src/runtime/proc.go:310 +0x1a
goroutine 3 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000059778 sp=0xc000059758 pc=0x45de8e
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.bgsweep(0x0?)
/usr/local/go/src/runtime/mgcsweep.go:321 +0xdf fp=0xc0000597c8 sp=0xc000059778 pc=0x449ddf
runtime.gcenable.func1()
/usr/local/go/src/runtime/mgc.go:200 +0x25 fp=0xc0000597e0 sp=0xc0000597c8 pc=0x43ef05
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000597e8 sp=0xc0000597e0 pc=0x48d961
created by runtime.gcenable in goroutine 1
/usr/local/go/src/runtime/mgc.go:200 +0x66
goroutine 4 [GC scavenge wait]:
runtime.gopark(0x19a55b4?, 0x188b346?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000059f70 sp=0xc000059f50 pc=0x45de8e
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.(*scavengerState).park(0x17ca7640)
/usr/local/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc000059fa0 sp=0xc000059f70 pc=0x447609
runtime.bgscavenge(0x0?)
/usr/local/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc000059fc8 sp=0xc000059fa0 pc=0x447bb9
runtime.gcenable.func2()
/usr/local/go/src/runtime/mgc.go:201 +0x25 fp=0xc000059fe0 sp=0xc000059fc8 pc=0x43eea5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000059fe8 sp=0xc000059fe0 pc=0x48d961
created by runtime.gcenable in goroutine 1
/usr/local/go/src/runtime/mgc.go:201 +0xa5
goroutine 5 [finalizer wait]:
runtime.gopark(0xc364c0?, 0x10045f001?, 0x0?, 0x0?, 0x466045?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000058628 sp=0xc000058608 pc=0x45de8e
runtime.runfinq()
/usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc0000587e0 sp=0xc000058628 pc=0x43df87
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000587e8 sp=0xc0000587e0 pc=0x48d961
created by runtime.createfing in goroutine 1
/usr/local/go/src/runtime/mfinal.go:163 +0x3d
goroutine 6 [select, locked to thread]:
runtime.gopark(0xc00005a7a8?, 0x2?, 0x29?, 0xe1?, 0xc00005a7a4?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00005a638 sp=0xc00005a618 pc=0x45de8e
runtime.selectgo(0xc00005a7a8, 0xc00005a7a0, 0x0?, 0x0, 0x0?, 0x1)
/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc00005a758 sp=0xc00005a638 pc=0x46d9c5
runtime.ensureSigM.func1()
/usr/local/go/src/runtime/signal_unix.go:1014 +0x19f fp=0xc00005a7e0 sp=0xc00005a758 pc=0x4849ff
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00005a7e8 sp=0xc00005a7e0 pc=0x48d961
created by runtime.ensureSigM in goroutine 1
/usr/local/go/src/runtime/signal_unix.go:997 +0xc8
goroutine 18 [syscall]:
runtime.notetsleepg(0x0?, 0x0?)
/usr/local/go/src/runtime/lock_futex.go:236 +0x29 fp=0xc0000547a0 sp=0xc000054768 pc=0x4309e9
os/signal.signal_recv()
/usr/local/go/src/runtime/sigqueue.go:152 +0x29 fp=0xc0000547c0 sp=0xc0000547a0 pc=0x48a329
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x13 fp=0xc0000547e0 sp=0xc0000547c0 pc=0x713bb3
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000547e8 sp=0xc0000547e0 pc=0x48d961
created by os/signal.Notify.func1.1 in goroutine 1
/usr/local/go/src/os/signal/signal.go:151 +0x1f
goroutine 7 [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00005af18 sp=0xc00005aef8 pc=0x45de8e
runtime.chanrecv(0xc00018da40, 0x0, 0x1)
/usr/local/go/src/runtime/chan.go:583 +0x3cd fp=0xc00005af90 sp=0xc00005af18 pc=0x42b58d
runtime.chanrecv1(0x0?, 0x0?)
/usr/local/go/src/runtime/chan.go:442 +0x12 fp=0xc00005afb8 sp=0xc00005af90 pc=0x42b192
github.com/jmorganca/ollama/server.Serve.func1()
/go/src/github.com/jmorganca/ollama/server/routes.go:938 +0x25 fp=0xc00005afe0 sp=0xc00005afb8 pc=0x9a7ea5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00005afe8 sp=0xc00005afe0 pc=0x48d961
created by github.com/jmorganca/ollama/server.Serve in goroutine 1
/go/src/github.com/jmorganca/ollama/server/routes.go:937 +0x285
goroutine 34 [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00058c750 sp=0xc00058c730 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00058c7e0 sp=0xc00058c750 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00058c7e8 sp=0xc00058c7e0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 35 [GC worker (idle)]:
runtime.gopark(0x2d8659670228?, 0x3?, 0xb4?, 0x48?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00058cf50 sp=0xc00058cf30 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00058cfe0 sp=0xc00058cf50 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00058cfe8 sp=0xc00058cfe0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 19 [GC worker (idle)]:
runtime.gopark(0x2d8657fbbc81?, 0x3?, 0x24?, 0xe7?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000054f50 sp=0xc000054f30 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000054fe0 sp=0xc000054f50 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000054fe8 sp=0xc000054fe0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 20 [GC worker (idle)]:
runtime.gopark(0x2d865966fa58?, 0x1?, 0xa0?, 0xd3?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000055750 sp=0xc000055730 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0000557e0 sp=0xc000055750 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000557e8 sp=0xc0000557e0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 50 [GC worker (idle)]:
runtime.gopark(0x2d8657fbb961?, 0x3?, 0x58?, 0x66?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000588750 sp=0xc000588730 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005887e0 sp=0xc000588750 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005887e8 sp=0xc0005887e0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 8 [GC worker (idle)]:
runtime.gopark(0x17d9f7a0?, 0x1?, 0xe4?, 0xfa?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00005b750 sp=0xc00005b730 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00005b7e0 sp=0xc00005b750 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00005b7e8 sp=0xc00005b7e0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 21 [GC worker (idle)]:
runtime.gopark(0x2d865963289c?, 0x3?, 0xc0?, 0x89?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000055f50 sp=0xc000055f30 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000055fe0 sp=0xc000055f50 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000055fe8 sp=0xc000055fe0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 51 [GC worker (idle)]:
runtime.gopark(0x2d865966f800?, 0x3?, 0xd8?, 0x8c?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000588f50 sp=0xc000588f30 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000588fe0 sp=0xc000588f50 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000588fe8 sp=0xc000588fe0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 52 [IO wait]:
runtime.gopark(0x75?, 0xb?, 0x0?, 0x0?, 0x8?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0004bd8f8 sp=0xc0004bd8d8 pc=0x45de8e
runtime.netpollblock(0x49e718?, 0x428946?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0004bd930 sp=0xc0004bd8f8 pc=0x456917
internal/poll.runtime_pollWait(0x7fa18ddcbd88, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0004bd950 sp=0xc0004bd930 pc=0x4880a5
internal/poll.(*pollDesc).wait(0xc000462080?, 0xc0004b2000?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0004bd978 sp=0xc0004bd950 pc=0x50f327
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000462080, {0xc0004b2000, 0x1000, 0x1000})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc0004bda10 sp=0xc0004bd978 pc=0x51061a
net.(*netFD).Read(0xc000462080, {0xc0004b2000?, 0x50f7e5?, 0x0?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc0004bda58 sp=0xc0004bda10 pc=0x5893c5
net.(*conn).Read(0xc000592010, {0xc0004b2000?, 0x0?, 0xc00048a0f8?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc0004bdaa0 sp=0xc0004bda58 pc=0x597665
net.(*TCPConn).Read(0xc00048a0f0?, {0xc0004b2000?, 0x0?, 0xc0004bdac0?})
<autogenerated>:1 +0x25 fp=0xc0004bdad0 sp=0xc0004bdaa0 pc=0x5a9565
net/http.(*connReader).Read(0xc00048a0f0, {0xc0004b2000, 0x1000, 0x1000})
/usr/local/go/src/net/http/server.go:791 +0x14b fp=0xc0004bdb20 sp=0xc0004bdad0 pc=0x6e458b
bufio.(*Reader).fill(0xc000516060)
/usr/local/go/src/bufio/bufio.go:113 +0x103 fp=0xc0004bdb58 sp=0xc0004bdb20 pc=0x6741c3
bufio.(*Reader).Peek(0xc000516060, 0x4)
/usr/local/go/src/bufio/bufio.go:151 +0x53 fp=0xc0004bdb78 sp=0xc0004bdb58 pc=0x6742f3
net/http.(*conn).serve(0xc0000fc1b0, {0x1783ded8, 0xc0005800f0})
/usr/local/go/src/net/http/server.go:2044 +0x75c fp=0xc0004bdfb8 sp=0xc0004bdb78 pc=0x6ea43c
net/http.(*Server).Serve.func3()
/usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc0004bdfe0 sp=0xc0004bdfb8 pc=0x6eec08
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004bdfe8 sp=0xc0004bdfe0 pc=0x48d961
created by net/http.(*Server).Serve in goroutine 1
/usr/local/go/src/net/http/server.go:3086 +0x5cb
goroutine 53 [IO wait]:
runtime.gopark(0x4f8?, 0xb?, 0x0?, 0x0?, 0x9?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0004bf8f8 sp=0xc0004bf8d8 pc=0x45de8e
runtime.netpollblock(0x49e718?, 0x428946?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0004bf930 sp=0xc0004bf8f8 pc=0x456917
internal/poll.runtime_pollWait(0x7fa18ddcbc90, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0004bf950 sp=0xc0004bf930 pc=0x4880a5
internal/poll.(*pollDesc).wait(0xc000462180?, 0xc00053c000?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0004bf978 sp=0xc0004bf950 pc=0x50f327
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000462180, {0xc00053c000, 0x1000, 0x1000})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc0004bfa10 sp=0xc0004bf978 pc=0x51061a
net.(*netFD).Read(0xc000462180, {0xc00053c000?, 0x50f7e5?, 0x0?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc0004bfa58 sp=0xc0004bfa10 pc=0x5893c5
net.(*conn).Read(0xc000592018, {0xc00053c000?, 0x0?, 0xc000580218?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc0004bfaa0 sp=0xc0004bfa58 pc=0x597665
net.(*TCPConn).Read(0xc000580210?, {0xc00053c000?, 0x0?, 0xc000551ac0?})
<autogenerated>:1 +0x25 fp=0xc0004bfad0 sp=0xc0004bfaa0 pc=0x5a9565
net/http.(*connReader).Read(0xc000580210, {0xc00053c000, 0x1000, 0x1000})
/usr/local/go/src/net/http/server.go:791 +0x14b fp=0xc0004bfb20 sp=0xc0004bfad0 pc=0x6e458b
bufio.(*Reader).fill(0xc00010e060)
/usr/local/go/src/bufio/bufio.go:113 +0x103 fp=0xc0004bfb58 sp=0xc0004bfb20 pc=0x6741c3
bufio.(*Reader).Peek(0xc00010e060, 0x4)
/usr/local/go/src/bufio/bufio.go:151 +0x53 fp=0xc0004bfb78 sp=0xc0004bfb58 pc=0x6742f3
net/http.(*conn).serve(0xc0000fc240, {0x1783ded8, 0xc0005800f0})
/usr/local/go/src/net/http/server.go:2044 +0x75c fp=0xc0004bffb8 sp=0xc0004bfb78 pc=0x6ea43c
net/http.(*Server).Serve.func3()
/usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc0004bffe0 sp=0xc0004bffb8 pc=0x6eec08
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004bffe8 sp=0xc0004bffe0 pc=0x48d961
created by net/http.(*Server).Serve in goroutine 1
/usr/local/go/src/net/http/server.go:3086 +0x5cb
goroutine 22 [IO wait]:
runtime.gopark(0x4f8?, 0xb?, 0x0?, 0x0?, 0xa?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0004b98f8 sp=0xc0004b98d8 pc=0x45de8e
runtime.netpollblock(0x49e718?, 0x428946?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0004b9930 sp=0xc0004b98f8 pc=0x456917
internal/poll.runtime_pollWait(0x7fa18ddcbb98, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0004b9950 sp=0xc0004b9930 pc=0x4880a5
internal/poll.(*pollDesc).wait(0xc000186000?, 0xc0000c8000?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0004b9978 sp=0xc0004b9950 pc=0x50f327
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000186000, {0xc0000c8000, 0x1000, 0x1000})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc0004b9a10 sp=0xc0004b9978 pc=0x51061a
net.(*netFD).Read(0xc000186000, {0xc0000c8000?, 0x50f7e5?, 0x0?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc0004b9a58 sp=0xc0004b9a10 pc=0x5893c5
net.(*conn).Read(0xc000080000, {0xc0000c8000?, 0x0?, 0xc000098518?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc0004b9aa0 sp=0xc0004b9a58 pc=0x597665
net.(*TCPConn).Read(0xc000098510?, {0xc0000c8000?, 0x0?, 0xc00054dac0?})
<autogenerated>:1 +0x25 fp=0xc0004b9ad0 sp=0xc0004b9aa0 pc=0x5a9565
net/http.(*connReader).Read(0xc000098510, {0xc0000c8000, 0x1000, 0x1000})
/usr/local/go/src/net/http/server.go:791 +0x14b fp=0xc0004b9b20 sp=0xc0004b9ad0 pc=0x6e458b
bufio.(*Reader).fill(0xc00018c7e0)
/usr/local/go/src/bufio/bufio.go:113 +0x103 fp=0xc0004b9b58 sp=0xc0004b9b20 pc=0x6741c3
bufio.(*Reader).Peek(0xc00018c7e0, 0x4)
/usr/local/go/src/bufio/bufio.go:151 +0x53 fp=0xc0004b9b78 sp=0xc0004b9b58 pc=0x6742f3
net/http.(*conn).serve(0xc0000c6000, {0x1783ded8, 0xc0005800f0})
/usr/local/go/src/net/http/server.go:2044 +0x75c fp=0xc0004b9fb8 sp=0xc0004b9b78 pc=0x6ea43c
net/http.(*Server).Serve.func3()
/usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc0004b9fe0 sp=0xc0004b9fb8 pc=0x6eec08
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004b9fe8 sp=0xc0004b9fe0 pc=0x48d961
created by net/http.(*Server).Serve in goroutine 1
/usr/local/go/src/net/http/server.go:3086 +0x5cb
goroutine 11 [IO wait]:
runtime.gopark(0x0?, 0xb?, 0x0?, 0x0?, 0xb?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00058eda0 sp=0xc00058ed80 pc=0x45de8e
runtime.netpollblock(0x49e718?, 0x428946?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc00058edd8 sp=0xc00058eda0 pc=0x456917
internal/poll.runtime_pollWait(0x7fa18ddcbaa0, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc00058edf8 sp=0xc00058edd8 pc=0x4880a5
internal/poll.(*pollDesc).wait(0xc00041a000?, 0xc00048a6a1?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00058ee20 sp=0xc00058edf8 pc=0x50f327
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00041a000, {0xc00048a6a1, 0x1, 0x1})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc00058eeb8 sp=0xc00058ee20 pc=0x51061a
net.(*netFD).Read(0xc00041a000, {0xc00048a6a1?, 0xc00058ef40?, 0x48a030?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc00058ef00 sp=0xc00058eeb8 pc=0x5893c5
net.(*conn).Read(0xc00005c040, {0xc00048a6a1?, 0x1?, 0xc0004240a0?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc00058ef48 sp=0xc00058ef00 pc=0x597665
net.(*TCPConn).Read(0xc00048a0f0?, {0xc00048a6a1?, 0xc0004240a0?, 0x0?})
<autogenerated>:1 +0x25 fp=0xc00058ef78 sp=0xc00058ef48 pc=0x5a9565
net/http.(*connReader).backgroundRead(0xc00048a690)
/usr/local/go/src/net/http/server.go:683 +0x37 fp=0xc00058efc8 sp=0xc00058ef78 pc=0x6e4157
net/http.(*connReader).startBackgroundRead.func2()
/usr/local/go/src/net/http/server.go:679 +0x25 fp=0xc00058efe0 sp=0xc00058efc8 pc=0x6e4085
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00058efe8 sp=0xc00058efe0 pc=0x48d961
created by net/http.(*connReader).startBackgroundRead in goroutine 10
/usr/local/go/src/net/http/server.go:679 +0xba
rax 0x0
rbx 0x7fa18f649640
rcx 0x7fa1d727b9fc
rdx 0x6
rdi 0x211
rsi 0x211
rbp 0x211
rsp 0x7fa18f6481f0
r8 0x7fa18f6482c0
r9 0x1
r10 0x8
r11 0x246
r12 0x6
r13 0x16
r14 0x60c3f8000
r15 0x0
rip 0x7fa1d727b9fc
rflags 0x246
cs 0x33
fs 0x0
gs 0x0
SIGABRT: abort
PC=0x7fa1d727b9fc m=4 sigcode=18446744073709551610
signal arrived during cgo execution
goroutine 10 [syscall]:
runtime.cgocall(0x9c1470, 0xc0004ca608)
/usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc0004ca5e0 sp=0xc0004ca5a8 pc=0x4291ab
github.com/jmorganca/ollama/llm._Cfunc_dynamic_shim_llama_server_init({0x7fa1840014a0, 0x7fa13ce7b2e0, 0x7fa13ce6da80, 0x7fa13ce71270, 0x7fa13ce83770, 0x7fa13ce78900, 0x7fa13ce71430, 0x7fa13ce6db00, 0x7fa13ce7ea00, 0x7fa13ce7e5b0, ...}, ...)
_cgo_gotypes.go:287 +0x45 fp=0xc0004ca608 sp=0xc0004ca5e0 pc=0x7cdd85
github.com/jmorganca/ollama/llm.(*shimExtServer).llama_server_init.func1(0x45971b?, 0x80?, 0x80?)
/go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:40 +0xec fp=0xc0004ca6f8 sp=0xc0004ca608 pc=0x7d326c
github.com/jmorganca/ollama/llm.(*shimExtServer).llama_server_init(0xc0000982d0?, 0x0?, 0x200?)
/go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:40 +0x13 fp=0xc0004ca720 sp=0xc0004ca6f8 pc=0x7d3153
github.com/jmorganca/ollama/llm.newExtServer({0x17842518, 0xc0000f8360}, {0xc0004b6230, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/ext_server_common.go:146 +0x7f8 fp=0xc0004ca9a8 sp=0xc0004ca720 pc=0x7cf3b8
github.com/jmorganca/ollama/llm.newDynamicShimExtServer({0xc00047cf00, 0x2b}, {0xc0004b6230, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:93 +0x54f fp=0xc0004cabd0 sp=0xc0004ca9a8 pc=0x7d45af
github.com/jmorganca/ollama/llm.newLlmServer({0xc3d801, 0x4}, {0xc0004b6230, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:86 +0x16b fp=0xc0004cad60 sp=0xc0004cabd0 pc=0x7ccecb
github.com/jmorganca/ollama/llm.New({0xc00048e180?, 0x0?}, {0xc0004b6230, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:76 +0x233 fp=0xc0004caef0 sp=0xc0004cad60 pc=0x7ccb33
github.com/jmorganca/ollama/server.load(0xc0004d2000?, 0xc0004d2000, {{0x0, 0x800, 0x200, 0x1, 0xffffffffffffffff, 0x0, 0x0, 0x1, ...}, ...}, ...)
/go/src/github.com/jmorganca/ollama/server/routes.go:84 +0x425 fp=0xc0004cb0a0 sp=0xc0004caef0 pc=0x99d825
github.com/jmorganca/ollama/server.GenerateHandler(0xc000522200)
/go/src/github.com/jmorganca/ollama/server/routes.go:191 +0x8c8 fp=0xc0004cb748 sp=0xc0004cb0a0 pc=0x99e5c8
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func1(0xc000522200)
/go/src/github.com/jmorganca/ollama/server/routes.go:876 +0x68 fp=0xc0004cb780 sp=0xc0004cb748 pc=0x9a79c8
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc000522200)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 +0x7a fp=0xc0004cb7d0 sp=0xc0004cb780 pc=0x9813ba
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc000522200)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 +0xde fp=0xc0004cb980 sp=0xc0004cb7d0 pc=0x98055e
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc0000d1ba0, 0xc000522200)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 +0x65b fp=0xc0004cbb08 sp=0xc0004cb980 pc=0x97f61b
github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc0000d1ba0, {0x1783c860?, 0xc00041e0e0}, 0xc000522300)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 +0x1dd fp=0xc0004cbb48 sp=0xc0004cbb08 pc=0x97eddd
net/http.serverHandler.ServeHTTP({0x1783ab80?}, {0x1783c860?, 0xc00041e0e0?}, 0x6?)
/usr/local/go/src/net/http/server.go:2938 +0x8e fp=0xc0004cbb78 sp=0xc0004cbb48 pc=0x6ee3ee
net/http.(*conn).serve(0xc0004b4120, {0x1783ded8, 0xc0005800f0})
/usr/local/go/src/net/http/server.go:2009 +0x5f4 fp=0xc0004cbfb8 sp=0xc0004cbb78 pc=0x6ea2d4
net/http.(*Server).Serve.func3()
/usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc0004cbfe0 sp=0xc0004cbfb8 pc=0x6eec08
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004cbfe8 sp=0xc0004cbfe0 pc=0x48d961
created by net/http.(*Server).Serve in goroutine 1
/usr/local/go/src/net/http/server.go:3086 +0x5cb
goroutine 1 [IO wait]:
runtime.gopark(0x4a05b0?, 0xc000533828?, 0x78?, 0x38?, 0x5166dd?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000615808 sp=0xc0006157e8 pc=0x45de8e
runtime.netpollblock(0x48b9d2?, 0x428946?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc000615840 sp=0xc000615808 pc=0x456917
internal/poll.runtime_pollWait(0x7fa18ddcbe80, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc000615860 sp=0xc000615840 pc=0x4880a5
internal/poll.(*pollDesc).wait(0xc000462000?, 0x4?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000615888 sp=0xc000615860 pc=0x50f327
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000462000)
/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac fp=0xc000615930 sp=0xc000615888 pc=0x51480c
net.(*netFD).accept(0xc000462000)
/usr/local/go/src/net/fd_unix.go:172 +0x29 fp=0xc0006159e8 sp=0xc000615930 pc=0x58b3e9
net.(*TCPListener).accept(0xc00043b5a0)
/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e fp=0xc000615a10 sp=0xc0006159e8 pc=0x5a01fe
net.(*TCPListener).Accept(0xc00043b5a0)
/usr/local/go/src/net/tcpsock.go:315 +0x30 fp=0xc000615a40 sp=0xc000615a10 pc=0x59f3b0
net/http.(*onceCloseListener).Accept(0xc0004b4120?)
<autogenerated>:1 +0x24 fp=0xc000615a58 sp=0xc000615a40 pc=0x711184
net/http.(*Server).Serve(0xc000366ff0, {0x1783c650, 0xc00043b5a0})
/usr/local/go/src/net/http/server.go:3056 +0x364 fp=0xc000615b88 sp=0xc000615a58 pc=0x6ee844
github.com/jmorganca/ollama/server.Serve({0x1783c650, 0xc00043b5a0})
/go/src/github.com/jmorganca/ollama/server/routes.go:956 +0x389 fp=0xc000615c98 sp=0xc000615b88 pc=0x9a7da9
github.com/jmorganca/ollama/cmd.RunServer(0xc000460300?, {0x17d9db40?, 0x4?, 0xc3d4f5?})
/go/src/github.com/jmorganca/ollama/cmd/cmd.go:634 +0x199 fp=0xc000615d30 sp=0xc000615c98 pc=0x9b9f99
github.com/spf13/cobra.(*Command).execute(0xc000419800, {0x17d9db40, 0x0, 0x0})
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x87c fp=0xc000615e68 sp=0xc000615d30 pc=0x783fbc
github.com/spf13/cobra.(*Command).ExecuteC(0xc000418c00)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc000615f20 sp=0xc000615e68 pc=0x7847e5
github.com/spf13/cobra.(*Command).Execute(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
/go/src/github.com/jmorganca/ollama/main.go:11 +0x4d fp=0xc000615f40 sp=0xc000615f20 pc=0x9c04cd
runtime.main()
/usr/local/go/src/runtime/proc.go:267 +0x2bb fp=0xc000615fe0 sp=0xc000615f40 pc=0x45da3b
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000615fe8 sp=0xc000615fe0 pc=0x48d961
goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000058fa8 sp=0xc000058f88 pc=0x45de8e
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.forcegchelper()
/usr/local/go/src/runtime/proc.go:322 +0xb3 fp=0xc000058fe0 sp=0xc000058fa8 pc=0x45dd13
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000058fe8 sp=0xc000058fe0 pc=0x48d961
created by runtime.init.6 in goroutine 1
/usr/local/go/src/runtime/proc.go:310 +0x1a
goroutine 3 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000059778 sp=0xc000059758 pc=0x45de8e
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.bgsweep(0x0?)
/usr/local/go/src/runtime/mgcsweep.go:321 +0xdf fp=0xc0000597c8 sp=0xc000059778 pc=0x449ddf
runtime.gcenable.func1()
/usr/local/go/src/runtime/mgc.go:200 +0x25 fp=0xc0000597e0 sp=0xc0000597c8 pc=0x43ef05
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000597e8 sp=0xc0000597e0 pc=0x48d961
created by runtime.gcenable in goroutine 1
/usr/local/go/src/runtime/mgc.go:200 +0x66
goroutine 4 [GC scavenge wait]:
runtime.gopark(0x19a55b4?, 0x188b346?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000059f70 sp=0xc000059f50 pc=0x45de8e
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.(*scavengerState).park(0x17ca7640)
/usr/local/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc000059fa0 sp=0xc000059f70 pc=0x447609
runtime.bgscavenge(0x0?)
/usr/local/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc000059fc8 sp=0xc000059fa0 pc=0x447bb9
runtime.gcenable.func2()
/usr/local/go/src/runtime/mgc.go:201 +0x25 fp=0xc000059fe0 sp=0xc000059fc8 pc=0x43eea5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000059fe8 sp=0xc000059fe0 pc=0x48d961
created by runtime.gcenable in goroutine 1
/usr/local/go/src/runtime/mgc.go:201 +0xa5
goroutine 5 [finalizer wait]:
runtime.gopark(0xc364c0?, 0x10045f001?, 0x0?, 0x0?, 0x466045?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000058628 sp=0xc000058608 pc=0x45de8e
runtime.runfinq()
/usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc0000587e0 sp=0xc000058628 pc=0x43df87
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000587e8 sp=0xc0000587e0 pc=0x48d961
created by runtime.createfing in goroutine 1
/usr/local/go/src/runtime/mfinal.go:163 +0x3d
goroutine 6 [select, locked to thread]:
runtime.gopark(0xc00005a7a8?, 0x2?, 0x29?, 0xe1?, 0xc00005a7a4?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00005a638 sp=0xc00005a618 pc=0x45de8e
runtime.selectgo(0xc00005a7a8, 0xc00005a7a0, 0x0?, 0x0, 0x0?, 0x1)
/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc00005a758 sp=0xc00005a638 pc=0x46d9c5
runtime.ensureSigM.func1()
/usr/local/go/src/runtime/signal_unix.go:1014 +0x19f fp=0xc00005a7e0 sp=0xc00005a758 pc=0x4849ff
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00005a7e8 sp=0xc00005a7e0 pc=0x48d961
created by runtime.ensureSigM in goroutine 1
/usr/local/go/src/runtime/signal_unix.go:997 +0xc8
goroutine 18 [syscall]:
runtime.notetsleepg(0x0?, 0x0?)
/usr/local/go/src/runtime/lock_futex.go:236 +0x29 fp=0xc0000547a0 sp=0xc000054768 pc=0x4309e9
os/signal.signal_recv()
/usr/local/go/src/runtime/sigqueue.go:152 +0x29 fp=0xc0000547c0 sp=0xc0000547a0 pc=0x48a329
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x13 fp=0xc0000547e0 sp=0xc0000547c0 pc=0x713bb3
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000547e8 sp=0xc0000547e0 pc=0x48d961
created by os/signal.Notify.func1.1 in goroutine 1
/usr/local/go/src/os/signal/signal.go:151 +0x1f
goroutine 7 [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00005af18 sp=0xc00005aef8 pc=0x45de8e
runtime.chanrecv(0xc00018da40, 0x0, 0x1)
/usr/local/go/src/runtime/chan.go:583 +0x3cd fp=0xc00005af90 sp=0xc00005af18 pc=0x42b58d
runtime.chanrecv1(0x0?, 0x0?)
/usr/local/go/src/runtime/chan.go:442 +0x12 fp=0xc00005afb8 sp=0xc00005af90 pc=0x42b192
github.com/jmorganca/ollama/server.Serve.func1()
/go/src/github.com/jmorganca/ollama/server/routes.go:938 +0x25 fp=0xc00005afe0 sp=0xc00005afb8 pc=0x9a7ea5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00005afe8 sp=0xc00005afe0 pc=0x48d961
created by github.com/jmorganca/ollama/server.Serve in goroutine 1
/go/src/github.com/jmorganca/ollama/server/routes.go:937 +0x285
goroutine 34 [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00058c750 sp=0xc00058c730 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00058c7e0 sp=0xc00058c750 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00058c7e8 sp=0xc00058c7e0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 35 [GC worker (idle)]:
runtime.gopark(0x2d8659670228?, 0x3?, 0xb4?, 0x48?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00058cf50 sp=0xc00058cf30 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00058cfe0 sp=0xc00058cf50 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00058cfe8 sp=0xc00058cfe0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 19 [GC worker (idle)]:
runtime.gopark(0x2d8657fbbc81?, 0x3?, 0x24?, 0xe7?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000054f50 sp=0xc000054f30 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000054fe0 sp=0xc000054f50 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000054fe8 sp=0xc000054fe0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 20 [GC worker (idle)]:
runtime.gopark(0x2d865966fa58?, 0x1?, 0xa0?, 0xd3?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000055750 sp=0xc000055730 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0000557e0 sp=0xc000055750 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000557e8 sp=0xc0000557e0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 50 [GC worker (idle)]:
runtime.gopark(0x2d8657fbb961?, 0x3?, 0x58?, 0x66?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000588750 sp=0xc000588730 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005887e0 sp=0xc000588750 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005887e8 sp=0xc0005887e0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 8 [GC worker (idle)]:
runtime.gopark(0x17d9f7a0?, 0x1?, 0xe4?, 0xfa?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00005b750 sp=0xc00005b730 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc00005b7e0 sp=0xc00005b750 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00005b7e8 sp=0xc00005b7e0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 21 [GC worker (idle)]:
runtime.gopark(0x2d865963289c?, 0x3?, 0xc0?, 0x89?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000055f50 sp=0xc000055f30 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000055fe0 sp=0xc000055f50 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000055fe8 sp=0xc000055fe0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 51 [GC worker (idle)]:
runtime.gopark(0x2d865966f800?, 0x3?, 0xd8?, 0x8c?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000588f50 sp=0xc000588f30 pc=0x45de8e
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000588fe0 sp=0xc000588f50 pc=0x440a85
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000588fe8 sp=0xc000588fe0 pc=0x48d961
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 52 [IO wait]:
runtime.gopark(0x75?, 0xb?, 0x0?, 0x0?, 0x8?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0004bd8f8 sp=0xc0004bd8d8 pc=0x45de8e
runtime.netpollblock(0x49e718?, 0x428946?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0004bd930 sp=0xc0004bd8f8 pc=0x456917
internal/poll.runtime_pollWait(0x7fa18ddcbd88, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0004bd950 sp=0xc0004bd930 pc=0x4880a5
internal/poll.(*pollDesc).wait(0xc000462080?, 0xc0004b2000?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0004bd978 sp=0xc0004bd950 pc=0x50f327
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000462080, {0xc0004b2000, 0x1000, 0x1000})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc0004bda10 sp=0xc0004bd978 pc=0x51061a
net.(*netFD).Read(0xc000462080, {0xc0004b2000?, 0x50f7e5?, 0x0?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc0004bda58 sp=0xc0004bda10 pc=0x5893c5
net.(*conn).Read(0xc000592010, {0xc0004b2000?, 0x0?, 0xc00048a0f8?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc0004bdaa0 sp=0xc0004bda58 pc=0x597665
net.(*TCPConn).Read(0xc00048a0f0?, {0xc0004b2000?, 0x0?, 0xc0004bdac0?})
<autogenerated>:1 +0x25 fp=0xc0004bdad0 sp=0xc0004bdaa0 pc=0x5a9565
net/http.(*connReader).Read(0xc00048a0f0, {0xc0004b2000, 0x1000, 0x1000})
/usr/local/go/src/net/http/server.go:791 +0x14b fp=0xc0004bdb20 sp=0xc0004bdad0 pc=0x6e458b
bufio.(*Reader).fill(0xc000516060)
/usr/local/go/src/bufio/bufio.go:113 +0x103 fp=0xc0004bdb58 sp=0xc0004bdb20 pc=0x6741c3
bufio.(*Reader).Peek(0xc000516060, 0x4)
/usr/local/go/src/bufio/bufio.go:151 +0x53 fp=0xc0004bdb78 sp=0xc0004bdb58 pc=0x6742f3
net/http.(*conn).serve(0xc0000fc1b0, {0x1783ded8, 0xc0005800f0})
/usr/local/go/src/net/http/server.go:2044 +0x75c fp=0xc0004bdfb8 sp=0xc0004bdb78 pc=0x6ea43c
net/http.(*Server).Serve.func3()
/usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc0004bdfe0 sp=0xc0004bdfb8 pc=0x6eec08
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004bdfe8 sp=0xc0004bdfe0 pc=0x48d961
created by net/http.(*Server).Serve in goroutine 1
/usr/local/go/src/net/http/server.go:3086 +0x5cb
goroutine 53 [IO wait]:
runtime.gopark(0x4f8?, 0xb?, 0x0?, 0x0?, 0x9?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0004bf8f8 sp=0xc0004bf8d8 pc=0x45de8e
runtime.netpollblock(0x49e718?, 0x428946?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0004bf930 sp=0xc0004bf8f8 pc=0x456917
internal/poll.runtime_pollWait(0x7fa18ddcbc90, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0004bf950 sp=0xc0004bf930 pc=0x4880a5
internal/poll.(*pollDesc).wait(0xc000462180?, 0xc00053c000?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0004bf978 sp=0xc0004bf950 pc=0x50f327
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000462180, {0xc00053c000, 0x1000, 0x1000})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc0004bfa10 sp=0xc0004bf978 pc=0x51061a
net.(*netFD).Read(0xc000462180, {0xc00053c000?, 0x50f7e5?, 0x0?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc0004bfa58 sp=0xc0004bfa10 pc=0x5893c5
net.(*conn).Read(0xc000592018, {0xc00053c000?, 0x0?, 0xc000580218?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc0004bfaa0 sp=0xc0004bfa58 pc=0x597665
net.(*TCPConn).Read(0xc000580210?, {0xc00053c000?, 0x0?, 0xc000551ac0?})
<autogenerated>:1 +0x25 fp=0xc0004bfad0 sp=0xc0004bfaa0 pc=0x5a9565
net/http.(*connReader).Read(0xc000580210, {0xc00053c000, 0x1000, 0x1000})
/usr/local/go/src/net/http/server.go:791 +0x14b fp=0xc0004bfb20 sp=0xc0004bfad0 pc=0x6e458b
bufio.(*Reader).fill(0xc00010e060)
/usr/local/go/src/bufio/bufio.go:113 +0x103 fp=0xc0004bfb58 sp=0xc0004bfb20 pc=0x6741c3
bufio.(*Reader).Peek(0xc00010e060, 0x4)
/usr/local/go/src/bufio/bufio.go:151 +0x53 fp=0xc0004bfb78 sp=0xc0004bfb58 pc=0x6742f3
net/http.(*conn).serve(0xc0000fc240, {0x1783ded8, 0xc0005800f0})
/usr/local/go/src/net/http/server.go:2044 +0x75c fp=0xc0004bffb8 sp=0xc0004bfb78 pc=0x6ea43c
net/http.(*Server).Serve.func3()
/usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc0004bffe0 sp=0xc0004bffb8 pc=0x6eec08
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004bffe8 sp=0xc0004bffe0 pc=0x48d961
created by net/http.(*Server).Serve in goroutine 1
/usr/local/go/src/net/http/server.go:3086 +0x5cb
goroutine 22 [IO wait]:
runtime.gopark(0x4f8?, 0xb?, 0x0?, 0x0?, 0xa?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0004b98f8 sp=0xc0004b98d8 pc=0x45de8e
runtime.netpollblock(0x49e718?, 0x428946?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0004b9930 sp=0xc0004b98f8 pc=0x456917
internal/poll.runtime_pollWait(0x7fa18ddcbb98, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0004b9950 sp=0xc0004b9930 pc=0x4880a5
internal/poll.(*pollDesc).wait(0xc000186000?, 0xc0000c8000?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0004b9978 sp=0xc0004b9950 pc=0x50f327
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000186000, {0xc0000c8000, 0x1000, 0x1000})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc0004b9a10 sp=0xc0004b9978 pc=0x51061a
net.(*netFD).Read(0xc000186000, {0xc0000c8000?, 0x50f7e5?, 0x0?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc0004b9a58 sp=0xc0004b9a10 pc=0x5893c5
net.(*conn).Read(0xc000080000, {0xc0000c8000?, 0x0?, 0xc000098518?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc0004b9aa0 sp=0xc0004b9a58 pc=0x597665
net.(*TCPConn).Read(0xc000098510?, {0xc0000c8000?, 0x0?, 0xc00054dac0?})
<autogenerated>:1 +0x25 fp=0xc0004b9ad0 sp=0xc0004b9aa0 pc=0x5a9565
net/http.(*connReader).Read(0xc000098510, {0xc0000c8000, 0x1000, 0x1000})
/usr/local/go/src/net/http/server.go:791 +0x14b fp=0xc0004b9b20 sp=0xc0004b9ad0 pc=0x6e458b
bufio.(*Reader).fill(0xc00018c7e0)
/usr/local/go/src/bufio/bufio.go:113 +0x103 fp=0xc0004b9b58 sp=0xc0004b9b20 pc=0x6741c3
bufio.(*Reader).Peek(0xc00018c7e0, 0x4)
/usr/local/go/src/bufio/bufio.go:151 +0x53 fp=0xc0004b9b78 sp=0xc0004b9b58 pc=0x6742f3
net/http.(*conn).serve(0xc0000c6000, {0x1783ded8, 0xc0005800f0})
/usr/local/go/src/net/http/server.go:2044 +0x75c fp=0xc0004b9fb8 sp=0xc0004b9b78 pc=0x6ea43c
net/http.(*Server).Serve.func3()
/usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc0004b9fe0 sp=0xc0004b9fb8 pc=0x6eec08
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004b9fe8 sp=0xc0004b9fe0 pc=0x48d961
created by net/http.(*Server).Serve in goroutine 1
/usr/local/go/src/net/http/server.go:3086 +0x5cb
goroutine 11 [IO wait]:
runtime.gopark(0x0?, 0xb?, 0x0?, 0x0?, 0xb?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00058eda0 sp=0xc00058ed80 pc=0x45de8e
runtime.netpollblock(0x49e718?, 0x428946?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc00058edd8 sp=0xc00058eda0 pc=0x456917
internal/poll.runtime_pollWait(0x7fa18ddcbaa0, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc00058edf8 sp=0xc00058edd8 pc=0x4880a5
internal/poll.(*pollDesc).wait(0xc00041a000?, 0xc00048a6a1?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00058ee20 sp=0xc00058edf8 pc=0x50f327
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00041a000, {0xc00048a6a1, 0x1, 0x1})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc00058eeb8 sp=0xc00058ee20 pc=0x51061a
net.(*netFD).Read(0xc00041a000, {0xc00048a6a1?, 0xc00058ef40?, 0x48a030?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc00058ef00 sp=0xc00058eeb8 pc=0x5893c5
net.(*conn).Read(0xc00005c040, {0xc00048a6a1?, 0x1?, 0xc0004240a0?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc00058ef48 sp=0xc00058ef00 pc=0x597665
net.(*TCPConn).Read(0xc00048a0f0?, {0xc00048a6a1?, 0xc0004240a0?, 0x0?})
<autogenerated>:1 +0x25 fp=0xc00058ef78 sp=0xc00058ef48 pc=0x5a9565
net/http.(*connReader).backgroundRead(0xc00048a690)
/usr/local/go/src/net/http/server.go:683 +0x37 fp=0xc00058efc8 sp=0xc00058ef78 pc=0x6e4157
net/http.(*connReader).startBackgroundRead.func2()
/usr/local/go/src/net/http/server.go:679 +0x25 fp=0xc00058efe0 sp=0xc00058efc8 pc=0x6e4085
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00058efe8 sp=0xc00058efe0 pc=0x48d961
created by net/http.(*connReader).startBackgroundRead in goroutine 10
/usr/local/go/src/net/http/server.go:679 +0xba
rax 0x0
rbx 0x7fa18f649640
rcx 0x7fa1d727b9fc
rdx 0x6
rdi 0x1f7
rsi 0x1fa
rbp 0x1fa
rsp 0x7fa18f6481f0
r8 0x7fa18f6482c0
r9 0x7fa18f648260
r10 0x8
r11 0x246
r12 0x6
r13 0x16
r14 0x60c3f8000
r15 0x0
rip 0x7fa1d727b9fc
rflags 0x246
cs 0x33
fs 0x0
gs 0x0
Error: Post "http://127.0.0.1:11434/api/generate": EOF
```
Help would be appreciated
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1865/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8362
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8362/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8362/comments
|
https://api.github.com/repos/ollama/ollama/issues/8362/events
|
https://github.com/ollama/ollama/issues/8362
| 2,777,448,800
|
I_kwDOJ0Z1Ps6ljIFg
| 8,362
|
Please add the model QVQ-Preview 72B!
|
{
"login": "twythebest",
"id": 89891289,
"node_id": "MDQ6VXNlcjg5ODkxMjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/89891289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/twythebest",
"html_url": "https://github.com/twythebest",
"followers_url": "https://api.github.com/users/twythebest/followers",
"following_url": "https://api.github.com/users/twythebest/following{/other_user}",
"gists_url": "https://api.github.com/users/twythebest/gists{/gist_id}",
"starred_url": "https://api.github.com/users/twythebest/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/twythebest/subscriptions",
"organizations_url": "https://api.github.com/users/twythebest/orgs",
"repos_url": "https://api.github.com/users/twythebest/repos",
"events_url": "https://api.github.com/users/twythebest/events{/privacy}",
"received_events_url": "https://api.github.com/users/twythebest/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-09T10:37:06
| 2025-01-09T10:37:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please add the model QVQ-Preview 72B!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8362/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1987
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1987/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1987/comments
|
https://api.github.com/repos/ollama/ollama/issues/1987/events
|
https://github.com/ollama/ollama/pull/1987
| 2,080,723,968
|
PR_kwDOJ0Z1Ps5kBdw2
| 1,987
|
Let gpu.go and gen_linux.sh also find CUDA on Arch Linux
|
{
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/followers",
"following_url": "https://api.github.com/users/xyproto/following{/other_user}",
"gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyproto/subscriptions",
"organizations_url": "https://api.github.com/users/xyproto/orgs",
"repos_url": "https://api.github.com/users/xyproto/repos",
"events_url": "https://api.github.com/users/xyproto/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyproto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-14T13:13:16
| 2024-01-19T00:01:04
| 2024-01-18T21:32:10
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1987",
"html_url": "https://github.com/ollama/ollama/pull/1987",
"diff_url": "https://github.com/ollama/ollama/pull/1987.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1987.patch",
"merged_at": "2024-01-18T21:32:10"
}
|
* Let gpu.go and gen_linux.sh find CUDA on Arch Linux.
* These changes were needed to let the [ollama-cuda](https://archlinux.org/packages/extra/x86_64/ollama-cuda/) package on Arch Linux find CUDA when building.
* Also, use `find` instead of `ls` in `gen_linux.sh`.
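The switch from `ls` to `find` can be illustrated in miniature: a fixed, non-recursive glob misses libraries placed in nested target directories (as Arch's CUDA package does), while a recursive search locates them. A minimal sketch using Python's `pathlib` — the paths below are made up for the demo and are not Ollama's actual build layout:

```python
import tempfile
from pathlib import Path

# Illustrative layout only: Arch-style nested CUDA target directory.
root = Path(tempfile.mkdtemp()) / "opt" / "cuda"
lib_dir = root / "targets" / "x86_64-linux" / "lib"
lib_dir.mkdir(parents=True)
(lib_dir / "libcudart.so.12").touch()

# A flat, `ls`-style glob at the top level misses the nested library:
flat = list(root.glob("libcudart.so*"))

# A recursive, `find`-style search locates it:
recursive = list(root.rglob("libcudart.so*"))
```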
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1987/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5408
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5408/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5408/comments
|
https://api.github.com/repos/ollama/ollama/issues/5408/events
|
https://github.com/ollama/ollama/pull/5408
| 2,384,163,596
|
PR_kwDOJ0Z1Ps50Foy8
| 5,408
|
cmd: create context
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-07-01T15:39:40
| 2024-11-22T00:53:50
| 2024-11-22T00:53:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5408",
"html_url": "https://github.com/ollama/ollama/pull/5408",
"diff_url": "https://github.com/ollama/ollama/pull/5408.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5408.patch",
"merged_at": null
}
|
restrict create file references to a directory context, defaulting to the parent directory of the Modelfile but configurable with `-C/--context `
this allows follow-up changes like #4240 without exposing more information than is requested
Note: this is a breaking CLI change since arbitrary file paths will no longer be supported
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5408/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1879
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1879/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1879/comments
|
https://api.github.com/repos/ollama/ollama/issues/1879/events
|
https://github.com/ollama/ollama/issues/1879
| 2,073,277,911
|
I_kwDOJ0Z1Ps57k7XX
| 1,879
|
Jetson Orin NX 16gb not seeing much CUDA usage with Ubuntu 22 and Jetpack 6 even after applying documented LD path work around
|
{
"login": "carolynhudson",
"id": 59717105,
"node_id": "MDQ6VXNlcjU5NzE3MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/59717105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/carolynhudson",
"html_url": "https://github.com/carolynhudson",
"followers_url": "https://api.github.com/users/carolynhudson/followers",
"following_url": "https://api.github.com/users/carolynhudson/following{/other_user}",
"gists_url": "https://api.github.com/users/carolynhudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/carolynhudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/carolynhudson/subscriptions",
"organizations_url": "https://api.github.com/users/carolynhudson/orgs",
"repos_url": "https://api.github.com/users/carolynhudson/repos",
"events_url": "https://api.github.com/users/carolynhudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/carolynhudson/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-01-09T22:32:01
| 2024-01-11T02:02:03
| 2024-01-10T23:21:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I recently rebuilt my Orin NX and chose the newest release OS and Jetpack edition, as I wanted a clean slate to try ollama in. I saw no difference in performance before or after following the given workaround. When I close the service instance and open a new terminal window to run `ollama serve`, the service loads and says it sees CUDA, but when it runs the GPU check it looks in the modified LD path for a libnvidia-ml.so, fails, and then reports no GPUs available. I confirmed using jtop that all CPU cores were at or near 100% when running mistral, while the CUDA cores were mostly idle with only occasional usage blips. I also tried other paths, such as the cuda12.2 folder rather than the base CUDA folder; there I did see a libnvidia-ml.so, which just causes another error over libnvidia.so.1, and still no GPU detection and no CUDA usage. I went so far as to run through the Nvidia portion of the setup script and made sure everything was installed as directed by it.
I think I will try rebuilding it again with Jetpack 5.1 just to see if it works there. But I wanted to report it anyway just in case it is a Jetpack 6.0 vs. 5.1 issue. I will update if that fixes it.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1879/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6196
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6196/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6196/comments
|
https://api.github.com/repos/ollama/ollama/issues/6196/events
|
https://github.com/ollama/ollama/issues/6196
| 2,450,533,084
|
I_kwDOJ0Z1Ps6SECrc
| 6,196
|
llm decode error: 500 Internal Server Error - detokenize doesn't handle unicode characters from server.cpp properly on windows
|
{
"login": "iBog",
"id": 168304,
"node_id": "MDQ6VXNlcjE2ODMwNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/168304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iBog",
"html_url": "https://github.com/iBog",
"followers_url": "https://api.github.com/users/iBog/followers",
"following_url": "https://api.github.com/users/iBog/following{/other_user}",
"gists_url": "https://api.github.com/users/iBog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iBog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iBog/subscriptions",
"organizations_url": "https://api.github.com/users/iBog/orgs",
"repos_url": "https://api.github.com/users/iBog/repos",
"events_url": "https://api.github.com/users/iBog/events{/privacy}",
"received_events_url": "https://api.github.com/users/iBog/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-08-06T10:31:14
| 2024-10-22T19:07:53
| 2024-10-22T19:07:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This happened a few times when the same chat log (history) was reused after the local model was switched.
For example, a chat was started with "llava:7b-v1.6" and then switched to "llama3.1:latest" without clearing the context array (not sure it reproduces with this exact pair of models).
LOG:
```
time=2024-08-05T14:54:58.212+03:00 level=INFO source=server.go:623 msg="llama runner started in 2.92 seconds"
time=2024-08-05T14:54:58.214+03:00 level=INFO source=server.go:1028 msg="llm decode error: 500 Internal Server Error\n[json.exception.type_error.316] invalid UTF-8 byte at index 181: 0x6C"
[GIN] 2024/08/05 - 14:54:58 | 500 | 3.3204013s | 127.0.0.1 | POST "/api/generate"
```
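The error class in the log (`invalid UTF-8 byte at index 181`) is what happens when a byte stream is cut in the middle of a multibyte UTF-8 sequence and then handed to a strict decoder or JSON serializer. A minimal Python sketch of that failure mode — this illustrates the general class of bug, not ollama's actual detokenize code:

```python
# 'é' encodes to two bytes in UTF-8; cutting between them yields invalid UTF-8.
data = "café".encode("utf-8")      # 5 bytes: b'caf\xc3\xa9'
truncated = data[:4]               # ends mid-character: b'caf\xc3'

try:
    truncated.decode("utf-8")      # a strict decoder rejects the partial byte
    valid = True
except UnicodeDecodeError:
    valid = False
```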
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.3
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6196/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3413
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3413/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3413/comments
|
https://api.github.com/repos/ollama/ollama/issues/3413/events
|
https://github.com/ollama/ollama/issues/3413
| 2,216,345,527
|
I_kwDOJ0Z1Ps6EGr-3
| 3,413
|
Template cannot work
|
{
"login": "LiuChaoXD",
"id": 39954067,
"node_id": "MDQ6VXNlcjM5OTU0MDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/39954067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LiuChaoXD",
"html_url": "https://github.com/LiuChaoXD",
"followers_url": "https://api.github.com/users/LiuChaoXD/followers",
"following_url": "https://api.github.com/users/LiuChaoXD/following{/other_user}",
"gists_url": "https://api.github.com/users/LiuChaoXD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LiuChaoXD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LiuChaoXD/subscriptions",
"organizations_url": "https://api.github.com/users/LiuChaoXD/orgs",
"repos_url": "https://api.github.com/users/LiuChaoXD/repos",
"events_url": "https://api.github.com/users/LiuChaoXD/events{/privacy}",
"received_events_url": "https://api.github.com/users/LiuChaoXD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-03-30T09:05:43
| 2024-05-16T23:38:41
| 2024-05-16T23:38:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I created the Modelfile following the documentation.
```
FROM Path/to/mixtral/gguf
PARAMETER temperature 0.9
PARAMETER num_ctx 32000
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
TEMPLATE """
{{ if .First }}<s>{{ if .System }}[INST]{{ .System }}[/INST]{{ end }}</s>{{ end }}[INST] {{ .Prompt }} [/INST]
"""
```
and after create the model. The model information can be found as
`ollama show mixtral --modelfile`
```bash
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM mixtral:latest
FROM .ollama/models/blobs/sha256-bcb3beeabc9bfdd496ecd5d01e83b8e4e380ff778d4f58224b6fd96d5c8cbfc4
TEMPLATE """
{{ if .First }}<s>{{ if .System }}[INST]{{ .System }}[/INST]{{ end }}</s>{{ end }}[INST] {{ .Prompt }} [/INST]
"""
PARAMETER num_ctx 32000
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
PARAMETER temperature 0.9
```
### What did you expect to see?
I used the OpenAI-compatible API to call this model. The code is as follows:
```python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:11435/v1/", api_key="sk-xxx")
stream = client.chat.completions.create(
messages=[
{"role": "system", "content": "You are Chatgpt"},
{"role": "user", "content": "who made you?"},
],
model="mixtral",
stream=True,
)
for chunk in stream:
print(chunk.choices[0].delta.content or "", end="")
```
The response is
```bash
I was created by Mistral AI, a leading AI company based in Paris, France.
```
You can see the `system` message has no effect.
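For reference, the template above only renders `{{ .System }}` when `.First` is truthy, so the system message may be dropped on requests where that condition does not hold. A template that renders the system message unconditionally, along the lines below, may behave differently — this is a sketch of the idea, not a verified fix:

```
TEMPLATE """{{ if .System }}<s>[INST] {{ .System }} [/INST]</s>{{ end }}[INST] {{ .Prompt }} [/INST]"""
```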
### Steps to reproduce
I use the latest Ollama
### Are there any recent changes that introduced the issue?
_No response_
### OS
macOS
### Architecture
arm64
### Platform
_No response_
### Ollama version
0.1.30
### GPU
Apple
### GPU info
_No response_
### CPU
Apple
### Other software
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3413/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5724
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5724/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5724/comments
|
https://api.github.com/repos/ollama/ollama/issues/5724/events
|
https://github.com/ollama/ollama/issues/5724
| 2,411,246,192
|
I_kwDOJ0Z1Ps6PuLJw
| 5,724
|
Avoid blocking requests to already loaded models while loading another model
|
{
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-07-16T14:07:00
| 2024-07-16T20:41:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have noticed that when GPU VRAM is near-full but Ollama has decided to load two models into VRAM, incoming requests to one model simply stall until the other model is evicted from memory. This is most noticeable with an embedding model plus a larger model that takes up most of my 16 GB of VRAM. When the embedding model is unloaded from VRAM, the request to the other model processes immediately. So it's pretty clear that Ollama is queueing up the request, but it seems the algorithm for dispatching requests in concurrent scenarios, or the logic for deciding to load multiple models into VRAM, needs some tweaks.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.2.5
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5724/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3301
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3301/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3301/comments
|
https://api.github.com/repos/ollama/ollama/issues/3301/events
|
https://github.com/ollama/ollama/issues/3301
| 2,203,340,007
|
I_kwDOJ0Z1Ps6DVEzn
| 3,301
|
Question: GPU not fully utilized when not all layers are offloaded
|
{
"login": "TomTom101",
"id": 872712,
"node_id": "MDQ6VXNlcjg3MjcxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/872712?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TomTom101",
"html_url": "https://github.com/TomTom101",
"followers_url": "https://api.github.com/users/TomTom101/followers",
"following_url": "https://api.github.com/users/TomTom101/following{/other_user}",
"gists_url": "https://api.github.com/users/TomTom101/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TomTom101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomTom101/subscriptions",
"organizations_url": "https://api.github.com/users/TomTom101/orgs",
"repos_url": "https://api.github.com/users/TomTom101/repos",
"events_url": "https://api.github.com/users/TomTom101/events{/privacy}",
"received_events_url": "https://api.github.com/users/TomTom101/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 13
| 2024-03-22T21:13:57
| 2024-06-01T21:27:54
| 2024-06-01T21:27:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am running Mixtral 8x7B Q4 on an RTX 3090 with 24 GB VRAM. 23/33 layers are offloaded to the GPU:
```
llm_load_tensors: offloading 23 repeating layers to GPU
llm_load_tensors: offloaded 23/33 layers to GPU
llm_load_tensors: CPU buffer size = 25215.87 MiB
llm_load_tensors: CUDA0 buffer size = 17999.66 MiB
```
Now during inference, GPU utilization never exceeds 15%. I get ~15 tokens/s and mostly see this:
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07 Driver Version: 535.161.07 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3090 Off | 00000000:08:00.0 Off | N/A |
| 30% 44C P2 150W / 370W | 22070MiB / 24576MiB | 15% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
```
Utilization is always >90% when I load Mistral 7B, which is fully offloaded to the GPU (and is pretty fast at ~110 tokens/s).
## Questions
* Is such a low GPU utilization normal when "only" 70% of layers are offloaded?
* What options do I have to increase GPU utilization? That thing is too expensive to have it sit idle ;)
Thanks!
<details>
<summary>Here is the full ollama startup log:</summary>
```
time=2024-03-22T20:48:20.367Z level=INFO source=routes.go:76 msg="changing loaded model"
time=2024-03-22T20:48:20.624Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-22T20:48:20.624Z level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 8.6"
time=2024-03-22T20:48:20.624Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-22T20:48:20.624Z level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 8.6"
time=2024-03-22T20:48:20.624Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-22T20:48:20.624Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /root/.ollama/assets/0.1.28/cuda_v11/libext_server.so"
time=2024-03-22T20:48:20.624Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 26 key-value pairs and 995 tensors from /root/.ollama/models/blobs/sha256:3a17f7cde150070bbc815645693fb93c311cc42e7deaf198364acadcf08458f8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.expert_count u32 = 8
llama_model_loader: - kv 10: llama.expert_used_count u32 = 2
llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 12: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = llama
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 32 tensors
llama_model_loader: - type q8_0: 64 tensors
llama_model_loader: - type q4_K: 833 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 8
llm_load_print_meta: n_expert_used = 2
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 24.62 GiB (4.53 BPW)
llm_load_print_meta: general.name = mistralai
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.76 MiB
llm_load_tensors: offloading 23 repeating layers to GPU
llm_load_tensors: offloaded 23/33 layers to GPU
llm_load_tensors: CPU buffer size = 25215.87 MiB
llm_load_tensors: CUDA0 buffer size = 17999.66 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: yes
ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
llama_kv_cache_init: CUDA_Host KV buffer size = 72.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 184.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 13.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 192.01 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 188.03 MiB
llama_new_context_with_model: graph splits (measure): 3
time=2024-03-22T20:48:22.828Z level=INFO source=dyn_ext_server.go:162 msg="Starting llama main loop"
```
</details>
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3301/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5467
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5467/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5467/comments
|
https://api.github.com/repos/ollama/ollama/issues/5467/events
|
https://github.com/ollama/ollama/pull/5467
| 2,389,385,467
|
PR_kwDOJ0Z1Ps50XhQh
| 5,467
|
Fix corner cases on tmp cleaner on mac
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-03T20:10:59
| 2024-07-03T20:39:39
| 2024-07-03T20:39:36
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5467",
"html_url": "https://github.com/ollama/ollama/pull/5467",
"diff_url": "https://github.com/ollama/ollama/pull/5467.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5467.patch",
"merged_at": "2024-07-03T20:39:36"
}
|
When ollama has been running for a long time, tmp cleaners can remove the runners. This tightens up a few corner cases on ARM Macs where we failed with "server cpu not listed in available servers map[]".
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5467/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2412
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2412/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2412/comments
|
https://api.github.com/repos/ollama/ollama/issues/2412/events
|
https://github.com/ollama/ollama/pull/2412
| 2,125,558,627
|
PR_kwDOJ0Z1Ps5mZazd
| 2,412
|
Added `/screenshot` command for multimodal model chats
|
{
"login": "ac-99",
"id": 47637771,
"node_id": "MDQ6VXNlcjQ3NjM3Nzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/47637771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ac-99",
"html_url": "https://github.com/ac-99",
"followers_url": "https://api.github.com/users/ac-99/followers",
"following_url": "https://api.github.com/users/ac-99/following{/other_user}",
"gists_url": "https://api.github.com/users/ac-99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ac-99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ac-99/subscriptions",
"organizations_url": "https://api.github.com/users/ac-99/orgs",
"repos_url": "https://api.github.com/users/ac-99/repos",
"events_url": "https://api.github.com/users/ac-99/events{/privacy}",
"received_events_url": "https://api.github.com/users/ac-99/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-08T16:10:51
| 2024-05-08T00:20:55
| 2024-05-08T00:20:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2412",
"html_url": "https://github.com/ollama/ollama/pull/2412",
"diff_url": "https://github.com/ollama/ollama/pull/2412.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2412.patch",
"merged_at": null
}
|
Added the ability to feed the current screen directly to multimodal models with a `/screenshot` command.
This enables a more dynamic experience for users who can more quickly and easily get contextual responses from their multimodal assistants.
**Example use cases**
1. Research assistant -- allows the multimodal LM to use your current screen as context and suggest ideas, e.g. "what's this animal?"
2. Study assistant -- allows the multimodal LM to provide explanations, clarifications, and examples based on the current text, e.g. "explain this diagram"
3. Design assistant -- get quick, direct input on designs
**Usage**
The user types `/screenshot` into the terminal, just like the existing `path/to/image` functionality. Includes support for multiple displays.
**Implementation**
1. `/screenshot` command appearing in user input
2. `captureScreenshots` is called
3. `screenshot` is saved in a tempdir (as identified by `os.TempDir`) with name based on the image size and screen index number
4. These paths are appended to the user input `line` variable
As a result, these paths are then processed in the same way as existing `path/to/file.png` images are
I also added some basic sanity checks with tests.
**Issues**
I don't seem to be able to run the tests locally for some reason, so I'd appreciate some support on that.
Requesting review and input from @jmorganca. I'm more than open to making changes or updates -- this is my first open-source contribution!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2412/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7748
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7748/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7748/comments
|
https://api.github.com/repos/ollama/ollama/issues/7748/events
|
https://github.com/ollama/ollama/issues/7748
| 2,673,803,581
|
I_kwDOJ0Z1Ps6fXwE9
| 7,748
|
ggml.c:4044: GGML_ASSERT(view_src == NULL || data_size == 0 || data_size + view_offs <= ggml_nbytes(view_src)) failed
|
{
"login": "pavelruzicka",
"id": 23432593,
"node_id": "MDQ6VXNlcjIzNDMyNTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/23432593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pavelruzicka",
"html_url": "https://github.com/pavelruzicka",
"followers_url": "https://api.github.com/users/pavelruzicka/followers",
"following_url": "https://api.github.com/users/pavelruzicka/following{/other_user}",
"gists_url": "https://api.github.com/users/pavelruzicka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pavelruzicka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavelruzicka/subscriptions",
"organizations_url": "https://api.github.com/users/pavelruzicka/orgs",
"repos_url": "https://api.github.com/users/pavelruzicka/repos",
"events_url": "https://api.github.com/users/pavelruzicka/events{/privacy}",
"received_events_url": "https://api.github.com/users/pavelruzicka/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 4
| 2024-11-19T22:54:26
| 2025-01-27T12:52:24
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On certain API requests, the server throws a segmentation fault and the API responds with an HTTP 500. So far, I have encountered this twice in thousands of requests. Unfortunately I did not log the particular prompts that triggered this, but I do not expect it to be directly reproducible from a prompt alone.
Full stack trace:
```
ggml.c:4044: GGML_ASSERT(view_src == NULL || data_size == 0 || data_size + view_offs <= ggml_nbytes(view_src)) failed
SIGSEGV: segmentation violation
PC=0x7ae06884d1d7 m=4 sigcode=1 addr=0x204803fbc
signal arrived during cgo execution
goroutine 7 gp=0xc000156000 m=4 mp=0xc00004d808 [syscall]:
runtime.cgocall(0x5bb738602e90, 0xc000056b60)
runtime/cgocall.go:157 +0x4b fp=0xc000056b38 sp=0xc000056b00 pc=0x5bb7383853cb
github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7adfec006460, {0x1, 0x7adfec3acb70, 0x0, 0x0, 0x7adfec3ad380, 0x7adfec3adb90, 0x7adfec17b380, 0x7adfd10c2910,
0x0, ...})
_cgo_gotypes.go:543 +0x52 fp=0xc000056b60 sp=0xc000056b38 pc=0x5bb738482952
github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5bb7385fed4b?, 0x7adfec006460?)
github.com/ollama/ollama/llama/llama.go:167 +0xd8 fp=0xc000056c80 sp=0xc000056b60 pc=0x5bb738484e78
github.com/ollama/ollama/llama.(*Context).Decode(0xc000056d68?, 0x1?)
github.com/ollama/ollama/llama/llama.go:167 +0x17 fp=0xc000056cc8 sp=0xc000056c80 pc=0x5bb738484cd7
main.(*Server).processBatch(0xc000128120, 0xc000126150, 0xc0001261c0)
github.com/ollama/ollama/llama/runner/runner.go:424 +0x29e fp=0xc000056ed0 sp=0xc000056cc8 pc=0x5bb7385fdd7e
main.(*Server).run(0xc000128120, {0x5bb73893ca40, 0xc00007c050})
github.com/ollama/ollama/llama/runner/runner.go:338 +0x1a5 fp=0xc000056fb8 sp=0xc000056ed0 pc=0x5bb7385fd765
main.main.gowrap2()
github.com/ollama/ollama/llama/runner/runner.go:901 +0x28 fp=0xc000056fe0 sp=0xc000056fb8 pc=0x5bb738601ec8
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000056fe8 sp=0xc000056fe0 pc=0x5bb7383edde1
created by main.main in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:901 +0xc2b
goroutine 1 gp=0xc0000061c0 m=nil [IO wait, 520 minutes]:
runtime.gopark(0xc000038a08?, 0xc00014b908?, 0xb1?, 0x7a?, 0x2000?)
runtime/proc.go:402 +0xce fp=0xc00014b888 sp=0xc00014b868 pc=0x5bb7383bc00e
runtime.netpollblock(0xc00014b920?, 0x38384b26?, 0xb7?)
runtime/netpoll.go:573 +0xf7 fp=0xc00014b8c0 sp=0xc00014b888 pc=0x5bb7383b4257
internal/poll.runtime_pollWait(0x7ae067dc7fe0, 0x72)
runtime/netpoll.go:345 +0x85 fp=0xc00014b8e0 sp=0xc00014b8c0 pc=0x5bb7383e8aa5
internal/poll.(*pollDesc).wait(0x3?, 0x7c?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00014b908 sp=0xc00014b8e0 pc=0x5bb7384389c7
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000150080)
internal/poll/fd_unix.go:611 +0x2ac fp=0xc00014b9b0 sp=0xc00014b908 pc=0x5bb738439e8c
net.(*netFD).accept(0xc000150080)
net/fd_unix.go:172 +0x29 fp=0xc00014ba68 sp=0xc00014b9b0 pc=0x5bb7384a88a9
net.(*TCPListener).accept(0xc0000321e0)
net/tcpsock_posix.go:159 +0x1e fp=0xc00014ba90 sp=0xc00014ba68 pc=0x5bb7384b95de
net.(*TCPListener).Accept(0xc0000321e0)
net/tcpsock.go:327 +0x30 fp=0xc00014bac0 sp=0xc00014ba90 pc=0x5bb7384b8930
net/http.(*onceCloseListener).Accept(0xc000190090?)
<autogenerated>:1 +0x24 fp=0xc00014bad8 sp=0xc00014bac0 pc=0x5bb7385dfa44
net/http.(*Server).Serve(0xc000168000, {0x5bb73893c400, 0xc0000321e0})
net/http/server.go:3260 +0x33e fp=0xc00014bc08 sp=0xc00014bad8 pc=0x5bb7385d685e
main.main()
github.com/ollama/ollama/llama/runner/runner.go:921 +0xfcc fp=0xc00014bf50 sp=0xc00014bc08 pc=0x5bb738601c4c
runtime.main()
runtime/proc.go:271 +0x29d fp=0xc00014bfe0 sp=0xc00014bf50 pc=0x5bb7383bbbdd
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00014bfe8 sp=0xc00014bfe0 pc=0x5bb7383edde1
goroutine 2 gp=0xc000006c40 m=nil [force gc (idle), 3 minutes]:
runtime.gopark(0x1dd19be52e23?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000046fa8 sp=0xc000046f88 pc=0x5bb7383bc00e
runtime.goparkunlock(...)
runtime/proc.go:408
runtime.forcegchelper()
runtime/proc.go:326 +0xb8 fp=0xc000046fe0 sp=0xc000046fa8 pc=0x5bb7383bbe98
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000046fe8 sp=0xc000046fe0 pc=0x5bb7383edde1
created by runtime.init.6 in goroutine 1
runtime/proc.go:314 +0x1a
goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
runtime.gopark(0x5bb738b09e01?, 0x5bb738b09e40?, 0xc?, 0x9?, 0x1?)
runtime/proc.go:402 +0xce fp=0xc000047780 sp=0xc000047760 pc=0x5bb7383bc00e
runtime.goparkunlock(...)
runtime/proc.go:408
runtime.bgsweep(0xc00006e000)
runtime/mgcsweep.go:318 +0xdf fp=0xc0000477c8 sp=0xc000047780 pc=0x5bb7383a6b9f
runtime.gcenable.gowrap1()
runtime/mgc.go:203 +0x25 fp=0xc0000477e0 sp=0xc0000477c8 pc=0x5bb73839b685
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000477e8 sp=0xc0000477e0 pc=0x5bb7383edde1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:203 +0x66
goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x166b9ea?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000047f78 sp=0xc000047f58 pc=0x5bb7383bc00e
runtime.goparkunlock(...)
runtime/proc.go:408
runtime.(*scavengerState).park(0x5bb738b0a4c0)
runtime/mgcscavenge.go:425 +0x49 fp=0xc000047fa8 sp=0xc000047f78 pc=0x5bb7383a4549
runtime.bgscavenge(0xc00006e000)
runtime/mgcscavenge.go:658 +0x59 fp=0xc000047fc8 sp=0xc000047fa8 pc=0x5bb7383a4af9
runtime.gcenable.gowrap2()
runtime/mgc.go:204 +0x25 fp=0xc000047fe0 sp=0xc000047fc8 pc=0x5bb73839b625
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000047fe8 sp=0xc000047fe0 pc=0x5bb7383edde1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:204 +0xa5
goroutine 5 gp=0xc000007c00 m=nil [finalizer wait, 3 minutes]:
runtime.gopark(0x0?, 0x5bb7389381a0?, 0x0?, 0x60?, 0x1000000010?)
runtime/proc.go:402 +0xce fp=0xc000046620 sp=0xc000046600 pc=0x5bb7383bc00e
runtime.runfinq()
runtime/mfinal.go:194 +0x107 fp=0xc0000467e0 sp=0xc000046620 pc=0x5bb73839a6c7
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000467e8 sp=0xc0000467e0 pc=0x5bb7383edde1
created by runtime.createfing in goroutine 1
runtime/mfinal.go:164 +0x3d
goroutine 22 gp=0xc000196000 m=nil [select]:
runtime.gopark(0xc000147a80?, 0x2?, 0x18?, 0x77?, 0xc000147824?)
runtime/proc.go:402 +0xce fp=0xc000147698 sp=0xc000147678 pc=0x5bb7383bc00e
runtime.selectgo(0xc000147a80, 0xc000147820, 0xc00037de00?, 0x0, 0x2?, 0x1)
runtime/select.go:327 +0x725 fp=0xc0001477b8 sp=0xc000147698 pc=0x5bb7383cd3e5
main.(*Server).completion(0xc000128120, {0x5bb73893c5b0, 0xc0000e22a0}, 0xc0000c0360)
github.com/ollama/ollama/llama/runner/runner.go:652 +0x8fe fp=0xc000147ab8 sp=0xc0001477b8 pc=0x5bb7385ff6de
main.(*Server).completion-fm({0x5bb73893c5b0?, 0xc0000e22a0?}, 0x5bb7385dab8d?)
<autogenerated>:1 +0x36 fp=0xc000147ae8 sp=0xc000147ab8 pc=0x5bb7386026b6
net/http.HandlerFunc.ServeHTTP(0xc00010cb60?, {0x5bb73893c5b0?, 0xc0000e22a0?}, 0x10?)
net/http/server.go:2171 +0x29 fp=0xc000147b10 sp=0xc000147ae8 pc=0x5bb7385d3629
net/http.(*ServeMux).ServeHTTP(0x5bb73838ef85?, {0x5bb73893c5b0, 0xc0000e22a0}, 0xc0000c0360)
net/http/server.go:2688 +0x1ad fp=0xc000147b60 sp=0xc000147b10 pc=0x5bb7385d54ad
net/http.serverHandler.ServeHTTP({0x5bb73893b900?}, {0x5bb73893c5b0?, 0xc0000e22a0?}, 0x6?)
net/http/server.go:3142 +0x8e fp=0xc000147b90 sp=0xc000147b60 pc=0x5bb7385d64ce
net/http.(*conn).serve(0xc000190090, {0x5bb73893ca08, 0xc00010adb0})
net/http/server.go:2044 +0x5e8 fp=0xc000147fb8 sp=0xc000147b90 pc=0x5bb7385d2268
net/http.(*Server).Serve.gowrap3()
net/http/server.go:3290 +0x28 fp=0xc000147fe0 sp=0xc000147fb8 pc=0x5bb7385d6c48
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000147fe8 sp=0xc000147fe0 pc=0x5bb7383edde1
created by net/http.(*Server).Serve in goroutine 1
net/http/server.go:3290 +0x4b4
goroutine 21 gp=0xc000082a80 m=nil [GC worker (idle), 4 minutes]:
runtime.gopark(0x1db5aaee6275?, 0x3?, 0x58?, 0xf?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0000cdf50 sp=0xc0000cdf30 pc=0x5bb7383bc00e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0000cdfe0 sp=0xc0000cdf50 pc=0x5bb73839d585
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cdfe8 sp=0xc0000cdfe0 pc=0x5bb7383edde1
created by runtime.gcBgMarkStartWorkers in goroutine 18
runtime/mgc.go:1234 +0x1c
goroutine 41 gp=0xc000082fc0 m=nil [GC worker (idle), 3 minutes]:
runtime.gopark(0x1db5aaee606c?, 0x3?, 0xc?, 0xfe?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0000cf750 sp=0xc0000cf730 pc=0x5bb7383bc00e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0000cf7e0 sp=0xc0000cf750 pc=0x5bb73839d585
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cf7e8 sp=0xc0000cf7e0 pc=0x5bb7383edde1
created by runtime.gcBgMarkStartWorkers in goroutine 18
runtime/mgc.go:1234 +0x1c
goroutine 50 gp=0xc0005e2000 m=nil [GC worker (idle), 3 minutes]:
runtime.gopark(0x1dd19bf2c88b?, 0x3?, 0xbf?, 0x40?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0000c8750 sp=0xc0000c8730 pc=0x5bb7383bc00e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0000c87e0 sp=0xc0000c8750 pc=0x5bb73839d585
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000c87e8 sp=0xc0000c87e0 pc=0x5bb7383edde1
created by runtime.gcBgMarkStartWorkers in goroutine 18
runtime/mgc.go:1234 +0x1c
goroutine 42 gp=0xc000083180 m=nil [GC worker (idle), 66 minutes]:
runtime.gopark(0x1a5228359582?, 0x3?, 0x1c?, 0x4?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0000cff50 sp=0xc0000cff30 pc=0x5bb7383bc00e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0000cffe0 sp=0xc0000cff50 pc=0x5bb73839d585
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cffe8 sp=0xc0000cffe0 pc=0x5bb7383edde1
created by runtime.gcBgMarkStartWorkers in goroutine 18
runtime/mgc.go:1234 +0x1c
goroutine 11112 gp=0xc000156a80 m=nil [IO wait, 4 minutes]:
runtime.gopark(0x10?, 0x10?, 0xf0?, 0xbd?, 0xb?)
runtime/proc.go:402 +0xce fp=0xc00019bda8 sp=0xc00019bd88 pc=0x5bb7383bc00e
runtime.netpollblock(0x5bb738422558?, 0x38384b26?, 0xb7?)
runtime/netpoll.go:573 +0xf7 fp=0xc00019bde0 sp=0xc00019bda8 pc=0x5bb7383b4257
internal/poll.runtime_pollWait(0x7ae067dc7ee8, 0x72)
runtime/netpoll.go:345 +0x85 fp=0xc00019be00 sp=0xc00019bde0 pc=0x5bb7383e8aa5
internal/poll.(*pollDesc).wait(0xc000164a00?, 0xc00010ab81?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00019be28 sp=0xc00019be00 pc=0x5bb7384389c7
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000164a00, {0xc00010ab81, 0x1, 0x1})
internal/poll/fd_unix.go:164 +0x27a fp=0xc00019bec0 sp=0xc00019be28 pc=0x5bb73843951a
net.(*netFD).Read(0xc000164a00, {0xc00010ab81?, 0xc00019bf48?, 0x5bb7383ea6d0?})
net/fd_posix.go:55 +0x25 fp=0xc00019bf08 sp=0xc00019bec0 pc=0x5bb7384a77a5
net.(*conn).Read(0xc00004a000, {0xc00010ab81?, 0x385041544f792f41?, 0xc00010ab78?})
net/net.go:185 +0x45 fp=0xc00019bf50 sp=0xc00019bf08 pc=0x5bb7384b1a65
net.(*TCPConn).Read(0xc00010ab70?, {0xc00010ab81?, 0x3450472f58332f59?, 0x636f422b44786847?})
<autogenerated>:1 +0x25 fp=0xc00019bf80 sp=0xc00019bf50 pc=0x5bb7384bd445
net/http.(*connReader).backgroundRead(0xc00010ab70)
net/http/server.go:681 +0x37 fp=0xc00019bfc8 sp=0xc00019bf80 pc=0x5bb7385cc1d7
net/http.(*connReader).startBackgroundRead.gowrap2()
net/http/server.go:677 +0x25 fp=0xc00019bfe0 sp=0xc00019bfc8 pc=0x5bb7385cc105
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00019bfe8 sp=0xc00019bfe0 pc=0x5bb7383edde1
created by net/http.(*connReader).startBackgroundRead in goroutine 22
net/http/server.go:677 +0xba
rax 0x204803fbc
rbx 0x7adfd17adce0
rcx 0xfef
rdx 0x7adfd10edb90
rdi 0x7adfd10edba0
rsi 0x0
rbp 0x7adffa7ddeb0
rsp 0x7adffa7dde90
r8 0x1
r9 0x7adfd16203b8
r10 0x0
r11 0x246
r12 0x7ade6000ccc0
r13 0x7adfd10edba0
r14 0x0
r15 0x7ae0b4ef57d0
rip 0x7ae06884d1d7
rflags 0x10297
cs 0x33
fs 0x0
gs 0x0
SIGABRT: abort
PC=0x7ae04269eb1c m=4 sigcode=18446744073709551610
signal arrived during cgo execution
goroutine 7 gp=0xc000156000 m=4 mp=0xc00004d808 [syscall]:
runtime.cgocall(0x5bb738602e90, 0xc000056b60)
runtime/cgocall.go:157 +0x4b fp=0xc000056b38 sp=0xc000056b00 pc=0x5bb7383853cb
github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7adfec006460, {0x1, 0x7adfec3acb70, 0x0, 0x0, 0x7adfec3ad380, 0x7adfec3adb90, 0x7adfec17b380, 0x7adfd10c2910,
0x0, ...})
_cgo_gotypes.go:543 +0x52 fp=0xc000056b60 sp=0xc000056b38 pc=0x5bb738482952
github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5bb7385fed4b?, 0x7adfec006460?)
github.com/ollama/ollama/llama/llama.go:167 +0xd8 fp=0xc000056c80 sp=0xc000056b60 pc=0x5bb738484e78
github.com/ollama/ollama/llama.(*Context).Decode(0xc000056d68?, 0x1?)
github.com/ollama/ollama/llama/llama.go:167 +0x17 fp=0xc000056cc8 sp=0xc000056c80 pc=0x5bb738484cd7
main.(*Server).processBatch(0xc000128120, 0xc000126150, 0xc0001261c0)
github.com/ollama/ollama/llama/runner/runner.go:424 +0x29e fp=0xc000056ed0 sp=0xc000056cc8 pc=0x5bb7385fdd7e
main.(*Server).run(0xc000128120, {0x5bb73893ca40, 0xc00007c050})
github.com/ollama/ollama/llama/runner/runner.go:338 +0x1a5 fp=0xc000056fb8 sp=0xc000056ed0 pc=0x5bb7385fd765
main.main.gowrap2()
github.com/ollama/ollama/llama/runner/runner.go:901 +0x28 fp=0xc000056fe0 sp=0xc000056fb8 pc=0x5bb738601ec8
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000056fe8 sp=0xc000056fe0 pc=0x5bb7383edde1
created by main.main in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:901 +0xc2b
goroutine 1 gp=0xc0000061c0 m=nil [IO wait, 520 minutes]:
runtime.gopark(0xc000038a08?, 0xc00014b908?, 0xb1?, 0x7a?, 0x2000?)
runtime/proc.go:402 +0xce fp=0xc00014b888 sp=0xc00014b868 pc=0x5bb7383bc00e
runtime.netpollblock(0xc00014b920?, 0x38384b26?, 0xb7?)
runtime/netpoll.go:573 +0xf7 fp=0xc00014b8c0 sp=0xc00014b888 pc=0x5bb7383b4257
internal/poll.runtime_pollWait(0x7ae067dc7fe0, 0x72)
runtime/netpoll.go:345 +0x85 fp=0xc00014b8e0 sp=0xc00014b8c0 pc=0x5bb7383e8aa5
internal/poll.(*pollDesc).wait(0x3?, 0x7c?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00014b908 sp=0xc00014b8e0 pc=0x5bb7384389c7
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000150080)
internal/poll/fd_unix.go:611 +0x2ac fp=0xc00014b9b0 sp=0xc00014b908 pc=0x5bb738439e8c
net.(*netFD).accept(0xc000150080)
net/fd_unix.go:172 +0x29 fp=0xc00014ba68 sp=0xc00014b9b0 pc=0x5bb7384a88a9
net.(*TCPListener).accept(0xc0000321e0)
net/tcpsock_posix.go:159 +0x1e fp=0xc00014ba90 sp=0xc00014ba68 pc=0x5bb7384b95de
net.(*TCPListener).Accept(0xc0000321e0)
net/tcpsock.go:327 +0x30 fp=0xc00014bac0 sp=0xc00014ba90 pc=0x5bb7384b8930
net/http.(*onceCloseListener).Accept(0xc000190090?)
<autogenerated>:1 +0x24 fp=0xc00014bad8 sp=0xc00014bac0 pc=0x5bb7385dfa44
net/http.(*Server).Serve(0xc000168000, {0x5bb73893c400, 0xc0000321e0})
net/http/server.go:3260 +0x33e fp=0xc00014bc08 sp=0xc00014bad8 pc=0x5bb7385d685e
main.main()
github.com/ollama/ollama/llama/runner/runner.go:921 +0xfcc fp=0xc00014bf50 sp=0xc00014bc08 pc=0x5bb738601c4c
runtime.main()
runtime/proc.go:271 +0x29d fp=0xc00014bfe0 sp=0xc00014bf50 pc=0x5bb7383bbbdd
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00014bfe8 sp=0xc00014bfe0 pc=0x5bb7383edde1
goroutine 2 gp=0xc000006c40 m=nil [force gc (idle), 3 minutes]:
runtime.gopark(0x1dd19be52e23?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000046fa8 sp=0xc000046f88 pc=0x5bb7383bc00e
runtime.goparkunlock(...)
runtime/proc.go:408
runtime.forcegchelper()
runtime/proc.go:326 +0xb8 fp=0xc000046fe0 sp=0xc000046fa8 pc=0x5bb7383bbe98
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000046fe8 sp=0xc000046fe0 pc=0x5bb7383edde1
created by runtime.init.6 in goroutine 1
runtime/proc.go:314 +0x1a
goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
runtime.gopark(0x5bb738b09e01?, 0x5bb738b09e40?, 0xc?, 0x9?, 0x1?)
runtime/proc.go:402 +0xce fp=0xc000047780 sp=0xc000047760 pc=0x5bb7383bc00e
runtime.goparkunlock(...)
runtime/proc.go:408
runtime.bgsweep(0xc00006e000)
runtime/mgcsweep.go:318 +0xdf fp=0xc0000477c8 sp=0xc000047780 pc=0x5bb7383a6b9f
runtime.gcenable.gowrap1()
runtime/mgc.go:203 +0x25 fp=0xc0000477e0 sp=0xc0000477c8 pc=0x5bb73839b685
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000477e8 sp=0xc0000477e0 pc=0x5bb7383edde1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:203 +0x66
goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x166b9ea?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000047f78 sp=0xc000047f58 pc=0x5bb7383bc00e
runtime.goparkunlock(...)
runtime/proc.go:408
runtime.(*scavengerState).park(0x5bb738b0a4c0)
runtime/mgcscavenge.go:425 +0x49 fp=0xc000047fa8 sp=0xc000047f78 pc=0x5bb7383a4549
runtime.bgscavenge(0xc00006e000)
runtime/mgcscavenge.go:658 +0x59 fp=0xc000047fc8 sp=0xc000047fa8 pc=0x5bb7383a4af9
runtime.gcenable.gowrap2()
runtime/mgc.go:204 +0x25 fp=0xc000047fe0 sp=0xc000047fc8 pc=0x5bb73839b625
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000047fe8 sp=0xc000047fe0 pc=0x5bb7383edde1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:204 +0xa5
goroutine 5 gp=0xc000007c00 m=nil [finalizer wait, 3 minutes]:
runtime.gopark(0x0?, 0x5bb7389381a0?, 0x0?, 0x60?, 0x1000000010?)
runtime/proc.go:402 +0xce fp=0xc000046620 sp=0xc000046600 pc=0x5bb7383bc00e
runtime.runfinq()
runtime/mfinal.go:194 +0x107 fp=0xc0000467e0 sp=0xc000046620 pc=0x5bb73839a6c7
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000467e8 sp=0xc0000467e0 pc=0x5bb7383edde1
created by runtime.createfing in goroutine 1
runtime/mfinal.go:164 +0x3d
goroutine 22 gp=0xc000196000 m=nil [select]:
runtime.gopark(0xc000147a80?, 0x2?, 0x18?, 0x77?, 0xc000147824?)
runtime/proc.go:402 +0xce fp=0xc000147698 sp=0xc000147678 pc=0x5bb7383bc00e
runtime.selectgo(0xc000147a80, 0xc000147820, 0xc00037de00?, 0x0, 0x2?, 0x1)
runtime/select.go:327 +0x725 fp=0xc0001477b8 sp=0xc000147698 pc=0x5bb7383cd3e5
main.(*Server).completion(0xc000128120, {0x5bb73893c5b0, 0xc0000e22a0}, 0xc0000c0360)
github.com/ollama/ollama/llama/runner/runner.go:652 +0x8fe fp=0xc000147ab8 sp=0xc0001477b8 pc=0x5bb7385ff6de
main.(*Server).completion-fm({0x5bb73893c5b0?, 0xc0000e22a0?}, 0x5bb7385dab8d?)
<autogenerated>:1 +0x36 fp=0xc000147ae8 sp=0xc000147ab8 pc=0x5bb7386026b6
net/http.HandlerFunc.ServeHTTP(0xc00010cb60?, {0x5bb73893c5b0?, 0xc0000e22a0?}, 0x10?)
net/http/server.go:2171 +0x29 fp=0xc000147b10 sp=0xc000147ae8 pc=0x5bb7385d3629
net/http.(*ServeMux).ServeHTTP(0x5bb73838ef85?, {0x5bb73893c5b0, 0xc0000e22a0}, 0xc0000c0360)
net/http/server.go:2688 +0x1ad fp=0xc000147b60 sp=0xc000147b10 pc=0x5bb7385d54ad
net/http.serverHandler.ServeHTTP({0x5bb73893b900?}, {0x5bb73893c5b0?, 0xc0000e22a0?}, 0x6?)
net/http/server.go:3142 +0x8e fp=0xc000147b90 sp=0xc000147b60 pc=0x5bb7385d64ce
net/http.(*conn).serve(0xc000190090, {0x5bb73893ca08, 0xc00010adb0})
net/http/server.go:2044 +0x5e8 fp=0xc000147fb8 sp=0xc000147b90 pc=0x5bb7385d2268
net/http.(*Server).Serve.gowrap3()
net/http/server.go:3290 +0x28 fp=0xc000147fe0 sp=0xc000147fb8 pc=0x5bb7385d6c48
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000147fe8 sp=0xc000147fe0 pc=0x5bb7383edde1
created by net/http.(*Server).Serve in goroutine 1
net/http/server.go:3290 +0x4b4
goroutine 21 gp=0xc000082a80 m=nil [GC worker (idle), 4 minutes]:
runtime.gopark(0x1db5aaee6275?, 0x3?, 0x58?, 0xf?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0000cdf50 sp=0xc0000cdf30 pc=0x5bb7383bc00e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0000cdfe0 sp=0xc0000cdf50 pc=0x5bb73839d585
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cdfe8 sp=0xc0000cdfe0 pc=0x5bb7383edde1
created by runtime.gcBgMarkStartWorkers in goroutine 18
runtime/mgc.go:1234 +0x1c
goroutine 41 gp=0xc000082fc0 m=nil [GC worker (idle), 3 minutes]:
runtime.gopark(0x1db5aaee606c?, 0x3?, 0xc?, 0xfe?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0000cf750 sp=0xc0000cf730 pc=0x5bb7383bc00e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0000cf7e0 sp=0xc0000cf750 pc=0x5bb73839d585
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cf7e8 sp=0xc0000cf7e0 pc=0x5bb7383edde1
created by runtime.gcBgMarkStartWorkers in goroutine 18
runtime/mgc.go:1234 +0x1c
goroutine 50 gp=0xc0005e2000 m=nil [GC worker (idle), 3 minutes]:
runtime.gopark(0x1dd19bf2c88b?, 0x3?, 0xbf?, 0x40?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0000c8750 sp=0xc0000c8730 pc=0x5bb7383bc00e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0000c87e0 sp=0xc0000c8750 pc=0x5bb73839d585
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000c87e8 sp=0xc0000c87e0 pc=0x5bb7383edde1
created by runtime.gcBgMarkStartWorkers in goroutine 18
runtime/mgc.go:1234 +0x1c
goroutine 42 gp=0xc000083180 m=nil [GC worker (idle), 66 minutes]:
runtime.gopark(0x1a5228359582?, 0x3?, 0x1c?, 0x4?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0000cff50 sp=0xc0000cff30 pc=0x5bb7383bc00e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0000cffe0 sp=0xc0000cff50 pc=0x5bb73839d585
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cffe8 sp=0xc0000cffe0 pc=0x5bb7383edde1
created by runtime.gcBgMarkStartWorkers in goroutine 18
runtime/mgc.go:1234 +0x1c
goroutine 11112 gp=0xc000156a80 m=nil [IO wait, 4 minutes]:
runtime.gopark(0x10?, 0x10?, 0xf0?, 0xbd?, 0xb?)
runtime/proc.go:402 +0xce fp=0xc00019bda8 sp=0xc00019bd88 pc=0x5bb7383bc00e
runtime.netpollblock(0x5bb738422558?, 0x38384b26?, 0xb7?)
runtime/netpoll.go:573 +0xf7 fp=0xc00019bde0 sp=0xc00019bda8 pc=0x5bb7383b4257
internal/poll.runtime_pollWait(0x7ae067dc7ee8, 0x72)
runtime/netpoll.go:345 +0x85 fp=0xc00019be00 sp=0xc00019bde0 pc=0x5bb7383e8aa5
internal/poll.(*pollDesc).wait(0xc000164a00?, 0xc00010ab81?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00019be28 sp=0xc00019be00 pc=0x5bb7384389c7
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000164a00, {0xc00010ab81, 0x1, 0x1})
internal/poll/fd_unix.go:164 +0x27a fp=0xc00019bec0 sp=0xc00019be28 pc=0x5bb73843951a
net.(*netFD).Read(0xc000164a00, {0xc00010ab81?, 0xc00019bf48?, 0x5bb7383ea6d0?})
net/fd_posix.go:55 +0x25 fp=0xc00019bf08 sp=0xc00019bec0 pc=0x5bb7384a77a5
net.(*conn).Read(0xc00004a000, {0xc00010ab81?, 0x385041544f792f41?, 0xc00010ab78?})
net/net.go:185 +0x45 fp=0xc00019bf50 sp=0xc00019bf08 pc=0x5bb7384b1a65
net.(*TCPConn).Read(0xc00010ab70?, {0xc00010ab81?, 0x3450472f58332f59?, 0x636f422b44786847?})
<autogenerated>:1 +0x25 fp=0xc00019bf80 sp=0xc00019bf50 pc=0x5bb7384bd445
net/http.(*connReader).backgroundRead(0xc00010ab70)
net/http/server.go:681 +0x37 fp=0xc00019bfc8 sp=0xc00019bf80 pc=0x5bb7385cc1d7
net/http.(*connReader).startBackgroundRead.gowrap2()
net/http/server.go:677 +0x25 fp=0xc00019bfe0 sp=0xc00019bfc8 pc=0x5bb7385cc105
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00019bfe8 sp=0xc00019bfe0 pc=0x5bb7383edde1
created by net/http.(*connReader).startBackgroundRead in goroutine 22
net/http/server.go:677 +0xba
rax 0x0
rbx 0xa6ba
rcx 0x7ae04269eb1c
rdx 0x6
rdi 0xa6b7
rsi 0xa6ba
rbp 0x7adffa7de010
rsp 0x7adffa7ddfd0
r8 0x0
r9 0x0
r10 0x8
r11 0x246
r12 0x6
r13 0xfcc
r14 0x16
r15 0x0
rip 0x7ae04269eb1c
rflags 0x246
cs 0x33
fs 0x0
gs 0x0
[GIN] 2024/11/19 - 07:09:34 | 500 | 3m37s | 127.0.0.1 | POST "/api/generate"
```
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7748/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3398
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3398/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3398/comments
|
https://api.github.com/repos/ollama/ollama/issues/3398/events
|
https://github.com/ollama/ollama/pull/3398
| 2,214,291,344
|
PR_kwDOJ0Z1Ps5rHIGA
| 3,398
|
CI automation for tagging latest images
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-28T22:49:53
| 2024-10-29T08:23:41
| 2024-03-28T23:25:54
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3398",
"html_url": "https://github.com/ollama/ollama/pull/3398",
"diff_url": "https://github.com/ollama/ollama/pull/3398.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3398.patch",
"merged_at": "2024-03-28T23:25:54"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3398/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/105
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/105/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/105/comments
|
https://api.github.com/repos/ollama/ollama/issues/105/events
|
https://github.com/ollama/ollama/pull/105
| 1,810,818,984
|
PR_kwDOJ0Z1Ps5V1Qu1
| 105
|
attempt two for skipping files in the file walk
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-18T22:36:18
| 2023-07-18T22:49:30
| 2023-07-18T22:37:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/105",
"html_url": "https://github.com/ollama/ollama/pull/105",
"diff_url": "https://github.com/ollama/ollama/pull/105.diff",
"patch_url": "https://github.com/ollama/ollama/pull/105.patch",
"merged_at": "2023-07-18T22:37:01"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/105/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2673
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2673/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2673/comments
|
https://api.github.com/repos/ollama/ollama/issues/2673/events
|
https://github.com/ollama/ollama/issues/2673
| 2,148,705,158
|
I_kwDOJ0Z1Ps6AEqOG
| 2,673
|
Stop tokens appear in the model output.
|
{
"login": "olafgeibig",
"id": 295644,
"node_id": "MDQ6VXNlcjI5NTY0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/295644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olafgeibig",
"html_url": "https://github.com/olafgeibig",
"followers_url": "https://api.github.com/users/olafgeibig/followers",
"following_url": "https://api.github.com/users/olafgeibig/following{/other_user}",
"gists_url": "https://api.github.com/users/olafgeibig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olafgeibig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olafgeibig/subscriptions",
"organizations_url": "https://api.github.com/users/olafgeibig/orgs",
"repos_url": "https://api.github.com/users/olafgeibig/repos",
"events_url": "https://api.github.com/users/olafgeibig/events{/privacy}",
"received_events_url": "https://api.github.com/users/olafgeibig/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-02-22T10:16:31
| 2024-05-17T22:48:56
| 2024-05-17T22:48:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I created my own Ollama model of https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO-GGUF
Here is my modelfile:
```
FROM ./nous-hermes-2-mistral-7b-dpo.Q5_K_M.gguf
PARAMETER num_ctx 8192
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
```
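The `PARAMETER stop` lines above tell the runner to cut generation at the first occurrence of either sequence and drop the sequence itself. A minimal sketch of that truncation logic (illustrative only, not Ollama's actual implementation):

```python
# Illustrates what "PARAMETER stop" does conceptually: output is cut at the
# earliest occurrence of any stop sequence, and the sequence is not emitted.
def apply_stops(text: str, stops: list[str]) -> str:
    cut = len(text)
    for s in stops:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]
```

If a stop token still shows up in the output, the runner either never matched the sequence (e.g. the model emitted it with different surrounding tokens) or the stop list was not applied to that request.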
When running the model with crewAI using a coding agent crew, stop tokens sometimes appear in the output. That doesn't happen with the same model hosted at together.ai. What am I doing wrong? I think my modelfile is correct, since it is mostly a copy of the official openhermes modelfile.
Example output:
```
Use Tool: Pygame for game development and graphics renderingHere is a valid schema for Pygame tool:
{
"tool_name": "Pygame",
"arguments": {
"window_size": (int, int), # tuple with width and height of window
"frame_rate": float, # frame rate of the game loop
"colors": dict, # dictionary of colors used in the game or graphics
"fonts": dict, # dictionary of fonts used in the game or graphics
"sprites": list, # list of sprite objects used in the game
"sound_effects": dict, # dictionary of sound effects used in the game
"music": str, # path to music file for background music
"additional_features": list, # list of additional features used in the game
}
}
```<|im_end|>{
"tool_name": "Pygame",
"arguments": {
"window_size": (800, 600),
"fps": 60,
"colors": ["red", "blue"],
"sounds": ["sound1.wav", "sound2.mp3"]
}
}
```
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2673/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/2673/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2253
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2253/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2253/comments
|
https://api.github.com/repos/ollama/ollama/issues/2253/events
|
https://github.com/ollama/ollama/issues/2253
| 2,105,385,698
|
I_kwDOJ0Z1Ps59faLi
| 2,253
|
Invalid file magic dolphin-2.7-mixtral gguf
|
{
"login": "fschiro",
"id": 75554993,
"node_id": "MDQ6VXNlcjc1NTU0OTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/75554993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fschiro",
"html_url": "https://github.com/fschiro",
"followers_url": "https://api.github.com/users/fschiro/followers",
"following_url": "https://api.github.com/users/fschiro/following{/other_user}",
"gists_url": "https://api.github.com/users/fschiro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fschiro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fschiro/subscriptions",
"organizations_url": "https://api.github.com/users/fschiro/orgs",
"repos_url": "https://api.github.com/users/fschiro/repos",
"events_url": "https://api.github.com/users/fschiro/events{/privacy}",
"received_events_url": "https://api.github.com/users/fschiro/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-01-29T12:28:50
| 2024-03-11T18:40:03
| 2024-03-11T18:40:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello, I'm having trouble creating dolphin-2.7-mixtral from a GGUF. Is the model supported?
```bash
ollama --version
ollama version is 0.1.22
cat Modelfile
FROM ./dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf
ls
config.json
dolphin-2.7-mixtral-8x7b.Q2_K.gguf
dolphin-2.7-mixtral-8x7b.Q3_K_M.gguf
dolphin-2.7-mixtral-8x7b.Q4_0.gguf
dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf
dolphin-2.7-mixtral-8x7b.Q5_0.gguf
dolphin-2.7-mixtral-8x7b.Q5_K_M.gguf
dolphin-2.7-mixtral-8x7b.Q6_K.gguf
dolphin-2.7-mixtral-8x7b.Q8_0.gguf
Modelfile
README.md
ollama create dm2.7_4km -f Modelfile
transferring model data
creating model layer
Error: invalid file magic
```
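"invalid file magic" means the first four bytes of the file are not the GGUF magic (`GGUF`), which usually indicates a truncated or corrupted download (for example, an HTML error page saved in place of the model). A quick way to verify a file before running `ollama create`:

```python
# Check whether a file starts with the GGUF magic bytes ("GGUF").
# A truncated download or a saved HTML error page will fail this check.
GGUF_MAGIC = b"GGUF"

def has_gguf_magic(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC
```

If the check fails, re-download the GGUF file and compare its size or checksum against the values published on the Hugging Face repo.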
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2253/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1763
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1763/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1763/comments
|
https://api.github.com/repos/ollama/ollama/issues/1763/events
|
https://github.com/ollama/ollama/issues/1763
| 2,063,000,734
|
I_kwDOJ0Z1Ps569uSe
| 1,763
|
Resuming to pull a model is not working via API
|
{
"login": "DennisKo",
"id": 9072277,
"node_id": "MDQ6VXNlcjkwNzIyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9072277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DennisKo",
"html_url": "https://github.com/DennisKo",
"followers_url": "https://api.github.com/users/DennisKo/followers",
"following_url": "https://api.github.com/users/DennisKo/following{/other_user}",
"gists_url": "https://api.github.com/users/DennisKo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DennisKo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DennisKo/subscriptions",
"organizations_url": "https://api.github.com/users/DennisKo/orgs",
"repos_url": "https://api.github.com/users/DennisKo/repos",
"events_url": "https://api.github.com/users/DennisKo/events{/privacy}",
"received_events_url": "https://api.github.com/users/DennisKo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-01-02T21:55:26
| 2024-01-06T21:19:46
| 2024-01-06T21:19:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If I start to pull a model via `/api/pull`, abort the request at, say, 2%, and re-request it, it does not resume and instead starts from 0%.
If I do it via `ollama pull model`, it correctly resumes.
Did some more testing:
Start via `/api/pull`, go to 2%, abort -> run `ollama pull model`, no resume...
Start via `ollama pull model`, go to 2%, abort -> hit `/api/pull`, it resumes...
Latest version, macOS.
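Resuming a partial blob download is normally done with an HTTP `Range` request that picks up from the number of bytes already on disk. A minimal sketch of that resume logic (`resume_header` is a hypothetical helper for illustration, not part of the Ollama client API):

```python
# Build the HTTP Range header used to resume a partial download, based on
# how many bytes of the blob already exist on disk. An empty dict means
# "start from the beginning".
import os

def resume_header(partial_path: str) -> dict:
    size = os.path.getsize(partial_path) if os.path.exists(partial_path) else 0
    return {"Range": f"bytes={size}-"} if size > 0 else {}
```

The bug report above suggests the API path was not consulting the partial file (so it restarted from byte 0), while the CLI path did.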
|
{
"login": "DennisKo",
"id": 9072277,
"node_id": "MDQ6VXNlcjkwNzIyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9072277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DennisKo",
"html_url": "https://github.com/DennisKo",
"followers_url": "https://api.github.com/users/DennisKo/followers",
"following_url": "https://api.github.com/users/DennisKo/following{/other_user}",
"gists_url": "https://api.github.com/users/DennisKo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DennisKo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DennisKo/subscriptions",
"organizations_url": "https://api.github.com/users/DennisKo/orgs",
"repos_url": "https://api.github.com/users/DennisKo/repos",
"events_url": "https://api.github.com/users/DennisKo/events{/privacy}",
"received_events_url": "https://api.github.com/users/DennisKo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1763/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1763/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4822
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4822/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4822/comments
|
https://api.github.com/repos/ollama/ollama/issues/4822/events
|
https://github.com/ollama/ollama/pull/4822
| 2,334,528,078
|
PR_kwDOJ0Z1Ps5xexip
| 4,822
|
API PS Documentation
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-04T23:10:54
| 2024-06-05T18:06:54
| 2024-06-05T18:06:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4822",
"html_url": "https://github.com/ollama/ollama/pull/4822",
"diff_url": "https://github.com/ollama/ollama/pull/4822.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4822.patch",
"merged_at": "2024-06-05T18:06:53"
}
| null |
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4822/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4822/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1184
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1184/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1184/comments
|
https://api.github.com/repos/ollama/ollama/issues/1184/events
|
https://github.com/ollama/ollama/pull/1184
| 2,000,018,323
|
PR_kwDOJ0Z1Ps5fzGb0
| 1,184
|
adjust download/upload parts
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-17T22:20:35
| 2024-05-09T22:17:51
| 2023-11-20T19:19:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1184",
"html_url": "https://github.com/ollama/ollama/pull/1184",
"diff_url": "https://github.com/ollama/ollama/pull/1184.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1184.patch",
"merged_at": null
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1184/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1893
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1893/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1893/comments
|
https://api.github.com/repos/ollama/ollama/issues/1893/events
|
https://github.com/ollama/ollama/issues/1893
| 2,074,149,210
|
I_kwDOJ0Z1Ps57oQFa
| 1,893
|
response_json['eval_count'] doesn't exists - llms/ollama.py
|
{
"login": "mongolu",
"id": 5344119,
"node_id": "MDQ6VXNlcjUzNDQxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5344119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mongolu",
"html_url": "https://github.com/mongolu",
"followers_url": "https://api.github.com/users/mongolu/followers",
"following_url": "https://api.github.com/users/mongolu/following{/other_user}",
"gists_url": "https://api.github.com/users/mongolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mongolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mongolu/subscriptions",
"organizations_url": "https://api.github.com/users/mongolu/orgs",
"repos_url": "https://api.github.com/users/mongolu/repos",
"events_url": "https://api.github.com/users/mongolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/mongolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-01-10T11:17:49
| 2024-04-08T10:11:23
| 2024-01-10T11:19:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
After some time this error pops up.
I think it's related to the same situation as `response_json['prompt_eval_count']`.
Logs:
```
'created_at': '2024-01-10T08:52:17.111694849Z',
'done': True,
'eval_duration': 516371613757000,
'load_duration': 260310,
'model': 'MixtralOrochi8x7B:latest',
'response': '',
'total_duration': 306412003}
Traceback (most recent call last):
File "/opt/miniconda3/lib/python3.11/site-packages/litellm/llms/ollama.py", line 325, in ollama_acompletion
completion_tokens = response_json["eval_count"]
~~~~~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'eval_count'
```
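A defensive sketch of a workaround (an assumption about a reasonable fix, not litellm's actual patch): read the token-count keys with `.get()` and a fallback instead of indexing, since the response shown above omits `eval_count` entirely.

```python
# Minimal dict mirroring the truncated response in the report above;
# note that "eval_count" and "prompt_eval_count" are absent.
response_json = {
    "done": True,
    "eval_duration": 516371613757000,
    "response": "",
}

# .get() with a default of 0 avoids the KeyError; the 0 fallback is an
# assumption for illustration, not litellm's documented behavior.
completion_tokens = response_json.get("eval_count", 0)
prompt_tokens = response_json.get("prompt_eval_count", 0)
print(completion_tokens, prompt_tokens)  # → 0 0
```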
|
{
"login": "mongolu",
"id": 5344119,
"node_id": "MDQ6VXNlcjUzNDQxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5344119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mongolu",
"html_url": "https://github.com/mongolu",
"followers_url": "https://api.github.com/users/mongolu/followers",
"following_url": "https://api.github.com/users/mongolu/following{/other_user}",
"gists_url": "https://api.github.com/users/mongolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mongolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mongolu/subscriptions",
"organizations_url": "https://api.github.com/users/mongolu/orgs",
"repos_url": "https://api.github.com/users/mongolu/repos",
"events_url": "https://api.github.com/users/mongolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/mongolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1893/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3699
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3699/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3699/comments
|
https://api.github.com/repos/ollama/ollama/issues/3699/events
|
https://github.com/ollama/ollama/pull/3699
| 2,248,263,611
|
PR_kwDOJ0Z1Ps5s6_QE
| 3,699
|
Ollama.md Documentation
|
{
"login": "jedt",
"id": 173964,
"node_id": "MDQ6VXNlcjE3Mzk2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/173964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jedt",
"html_url": "https://github.com/jedt",
"followers_url": "https://api.github.com/users/jedt/followers",
"following_url": "https://api.github.com/users/jedt/following{/other_user}",
"gists_url": "https://api.github.com/users/jedt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jedt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jedt/subscriptions",
"organizations_url": "https://api.github.com/users/jedt/orgs",
"repos_url": "https://api.github.com/users/jedt/repos",
"events_url": "https://api.github.com/users/jedt/events{/privacy}",
"received_events_url": "https://api.github.com/users/jedt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-17T13:10:07
| 2024-04-17T13:14:39
| 2024-04-17T13:13:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3699",
"html_url": "https://github.com/ollama/ollama/pull/3699",
"diff_url": "https://github.com/ollama/ollama/pull/3699.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3699.patch",
"merged_at": null
}
|
A guide on setting up a fine-tuned Unsloth FastLanguageModel from a Google Colab notebook to:
1. HF hub
2. GGUF
3. local Ollama
Preview link: https://github.com/ollama/ollama/blob/66f7b5bf9e63e1e98c98e8f487427e19195791e0/docs/ollama.md
|
{
"login": "jedt",
"id": 173964,
"node_id": "MDQ6VXNlcjE3Mzk2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/173964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jedt",
"html_url": "https://github.com/jedt",
"followers_url": "https://api.github.com/users/jedt/followers",
"following_url": "https://api.github.com/users/jedt/following{/other_user}",
"gists_url": "https://api.github.com/users/jedt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jedt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jedt/subscriptions",
"organizations_url": "https://api.github.com/users/jedt/orgs",
"repos_url": "https://api.github.com/users/jedt/repos",
"events_url": "https://api.github.com/users/jedt/events{/privacy}",
"received_events_url": "https://api.github.com/users/jedt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3699/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2735
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2735/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2735/comments
|
https://api.github.com/repos/ollama/ollama/issues/2735/events
|
https://github.com/ollama/ollama/issues/2735
| 2,152,441,834
|
I_kwDOJ0Z1Ps6AS6fq
| 2,735
|
Build fails on MacOS
|
{
"login": "jrp2014",
"id": 8142876,
"node_id": "MDQ6VXNlcjgxNDI4NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8142876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jrp2014",
"html_url": "https://github.com/jrp2014",
"followers_url": "https://api.github.com/users/jrp2014/followers",
"following_url": "https://api.github.com/users/jrp2014/following{/other_user}",
"gists_url": "https://api.github.com/users/jrp2014/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jrp2014/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jrp2014/subscriptions",
"organizations_url": "https://api.github.com/users/jrp2014/orgs",
"repos_url": "https://api.github.com/users/jrp2014/repos",
"events_url": "https://api.github.com/users/jrp2014/events{/privacy}",
"received_events_url": "https://api.github.com/users/jrp2014/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-02-24T18:48:45
| 2024-03-04T16:00:47
| 2024-02-25T05:06:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Following the instructions in the Developer docs, out of the box I get:
```
(ollama) ➜ AI git clone https://github.com/ollama/ollama.git
Cloning into 'ollama'...
remote: Enumerating objects: 10778, done.
remote: Counting objects: 100% (2489/2489), done.
remote: Compressing objects: 100% (633/633), done.
remote: Total 10778 (delta 2143), reused 1987 (delta 1853), pack-reused 8289
Receiving objects: 100% (10778/10778), 6.65 MiB | 509.00 KiB/s, done.
Resolving deltas: 100% (6743/6743), done.
(ollama) ➜ AI git gc
fatal: not a git repository (or any of the parent directories): .git
(ollama) ➜ AI cd ollama
(ollama) ➜ ollama git:(main) git gc
Enumerating objects: 10778, done.
Counting objects: 100% (10778/10778), done.
Delta compression using up to 16 threads
Compressing objects: 100% (3774/3774), done.
Writing objects: 100% (10778/10778), done.
Total 10778 (delta 6743), reused 10778 (delta 6743), pack-reused 0
(ollama) ➜ ollama git:(main) go generate ./...
go: downloading github.com/gin-gonic/gin v1.9.1
go: downloading golang.org/x/term v0.13.0
go: downloading github.com/emirpasic/gods v1.18.1
go: downloading golang.org/x/sys v0.13.0
go: downloading github.com/containerd/console v1.0.3
go: downloading github.com/olekukonko/tablewriter v0.0.5
go: downloading github.com/spf13/cobra v1.7.0
go: downloading golang.org/x/crypto v0.14.0
go: downloading golang.org/x/exp v0.0.0-20230817173708-d852ddb80c63
go: downloading golang.org/x/sync v0.3.0
go: downloading github.com/gin-contrib/cors v1.4.0
go: downloading github.com/google/uuid v1.0.0
go: downloading github.com/mattn/go-runewidth v0.0.14
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading github.com/mattn/go-isatty v0.0.19
go: downloading golang.org/x/net v0.17.0
go: downloading github.com/pelletier/go-toml/v2 v2.0.8
go: downloading github.com/ugorji/go/codec v1.2.11
go: downloading google.golang.org/protobuf v1.30.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading github.com/go-playground/validator/v10 v10.14.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/rivo/uniseg v0.2.0
go: downloading golang.org/x/text v0.13.0
go: downloading github.com/leodido/go-urn v1.2.4
go: downloading github.com/gabriel-vasile/mimetype v1.4.2
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/go-playground/locales v0.14.1
+ set -o pipefail
+ echo 'Starting darwin generate script'
Starting darwin generate script
++ dirname ./gen_darwin.sh
+ source ./gen_common.sh
+ init_vars
+ case "${GOARCH}" in
+ ARCH=arm64
+ LLAMACPP_DIR=../llama.cpp
+ CMAKE_DEFS=
+ CMAKE_TARGETS='--target ext_server'
+ echo ''
+ grep -- -g
+ CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off '
+ case $(uname -s) in
++ uname -s
+ LIB_EXT=dylib
+ WHOLE_ARCHIVE=-Wl,-force_load
+ NO_WHOLE_ARCHIVE=
+ GCC_ARCH='-arch arm64'
+ '[' -z '' ']'
+ CMAKE_CUDA_ARCHITECTURES='50;52;61;70;75;80'
+ git_module_setup
+ '[' -n '' ']'
+ '[' -d ../llama.cpp/gguf ']'
+ git submodule init
Submodule 'llama.cpp' (https://github.com/ggerganov/llama.cpp.git) registered for path '../llama.cpp'
+ git submodule update --force ../llama.cpp
Cloning into '/Users/jrp/Documents/AI/ollama/llm/llama.cpp'...
remote: Enumerating objects: 12034, done.
remote: Counting objects: 100% (12034/12034), done.
remote: Compressing objects: 100% (3577/3577), done.
remote: Total 11732 (delta 8692), reused 11096 (delta 8075), pack-reused 0
Receiving objects: 100% (11732/11732), 8.48 MiB | 391.00 KiB/s, done.
Resolving deltas: 100% (8692/8692), completed with 246 local objects.
From https://github.com/ggerganov/llama.cpp
* branch 96633eeca1265ed03e57230de54032041c58f9cd -> FETCH_HEAD
Submodule path '../llama.cpp': checked out '96633eeca1265ed03e57230de54032041c58f9cd'
+ apply_patches
+ grep ollama ../llama.cpp/examples/server/CMakeLists.txt
+ echo 'include (../../../ext_server/CMakeLists.txt) # ollama'
++ ls -A ../patches/01-cache.diff ../patches/02-cudaleaks.diff
+ '[' -n '../patches/01-cache.diff
../patches/02-cudaleaks.diff' ']'
+ for patch in '../patches/*.diff'
++ grep '^+++ ' ../patches/01-cache.diff
++ cut -f2 '-d '
++ cut -f2- -d/
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout examples/server/server.cpp
Updated 0 paths from the index
+ for patch in '../patches/*.diff'
++ grep '^+++ ' ../patches/02-cudaleaks.diff
++ cut -f2 '-d '
++ cut -f2- -d/
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout examples/server/server.cpp
Updated 0 paths from the index
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout ggml-cuda.cu
Updated 0 paths from the index
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout ggml-cuda.h
Updated 0 paths from the index
+ for patch in '../patches/*.diff'
+ cd ../llama.cpp
+ git apply ../patches/01-cache.diff
+ for patch in '../patches/*.diff'
+ cd ../llama.cpp
+ git apply ../patches/02-cudaleaks.diff
+ sed -e 's/int main(/int __main(/g'
+ mv ../llama.cpp/examples/server/server.cpp.tmp ../llama.cpp/examples/server/server.cpp
+ COMMON_DARWIN_DEFS='-DCMAKE_OSX_DEPLOYMENT_TARGET=11.0 -DCMAKE_SYSTEM_NAME=Darwin'
+ case "${GOARCH}" in
+ CMAKE_DEFS='-DCMAKE_OSX_DEPLOYMENT_TARGET=11.0 -DCMAKE_SYSTEM_NAME=Darwin -DLLAMA_ACCELERATE=on -DCMAKE_SYSTEM_PROCESSOR=arm64 -DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off '
+ BUILD_DIR=../llama.cpp/build/darwin/arm64/metal
+ EXTRA_LIBS=' -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders'
+ build
+ cmake -S ../llama.cpp -B ../llama.cpp/build/darwin/arm64/metal -DCMAKE_OSX_DEPLOYMENT_TARGET=11.0 -DCMAKE_SYSTEM_NAME=Darwin -DLLAMA_ACCELERATE=on -DCMAKE_SYSTEM_PROCESSOR=arm64 -DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off
-- The C compiler identification is AppleClang 15.0.0.15000100
-- The CXX compiler identification is AppleClang 15.0.0.15000100
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.39.3 (Apple Git-145)")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Accelerate framework found
-- Metal framework found
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: arm64
-- ARM detected
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
-- Configuring done (0.7s)
-- Generating done (0.2s)
-- Build files have been written to: /Users/jrp/Documents/AI/ollama/llm/llama.cpp/build/darwin/arm64/metal
+ cmake --build ../llama.cpp/build/darwin/arm64/metal --target ext_server -j8
[ 6%] Generating build details from Git
[ 12%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o
[ 31%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.o
[ 31%] Building C object CMakeFiles/ggml.dir/ggml.c.o
[ 31%] Building C object CMakeFiles/ggml.dir/ggml-metal.m.o
[ 31%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.o
-- Found Git: /usr/bin/git (found version "2.39.3 (Apple Git-145)")
[ 37%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o
[ 37%] Built target build_info
/Users/jrp/Documents/AI/ollama/llm/llama.cpp/ggml.c:10374:17: warning: 'cblas_sgemm' is only available on macOS 13.3 or newer [-Wunguarded-availability-new]
cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
^~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.2.sdk/System/Library/Frameworks/vecLib.framework/Headers/cblas_new.h:891:6: note: 'cblas_sgemm' has been marked as being introduced in macOS 13.3 here, but the deployment target is macOS 11.0.0
void cblas_sgemm(const enum CBLAS_ORDER ORDER,
^
/Users/jrp/Documents/AI/ollama/llm/llama.cpp/ggml.c:10374:17: note: enclose 'cblas_sgemm' in a __builtin_available check to silence this warning
cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
^~~~~~~~~~~
/Users/jrp/Documents/AI/ollama/llm/llama.cpp/ggml.c:10810:9: warning: 'cblas_sgemm' is only available on macOS 13.3 or newer [-Wunguarded-availability-new]
cblas_sgemm(CblasRowMajor, transposeA, CblasNoTrans, m, n, k, 1.0, a, lda, b, n, 0.0, c, n);
^~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.2.sdk/System/Library/Frameworks/vecLib.framework/Headers/cblas_new.h:891:6: note: 'cblas_sgemm' has been marked as being introduced in macOS 13.3 here, but the deployment target is macOS 11.0.0
void cblas_sgemm(const enum CBLAS_ORDER ORDER,
^
/Users/jrp/Documents/AI/ollama/llm/llama.cpp/ggml.c:10810:9: note: enclose 'cblas_sgemm' in a __builtin_available check to silence this warning
cblas_sgemm(CblasRowMajor, transposeA, CblasNoTrans, m, n, k, 1.0, a, lda, b, n, 0.0, c, n);
^~~~~~~~~~~
2 warnings generated.
[ 37%] Built target ggml
[ 43%] Building CXX object CMakeFiles/llama.dir/llama.cpp.o
[ 50%] Linking CXX static library libllama.a
[ 50%] Built target llama
[ 56%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o
[ 62%] Building CXX object examples/llava/CMakeFiles/llava.dir/llava.cpp.o
[ 62%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o
[ 68%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o
[ 87%] Building CXX object common/CMakeFiles/common.dir/console.cpp.o
[ 87%] Building CXX object examples/llava/CMakeFiles/llava.dir/clip.cpp.o
[ 87%] Building CXX object common/CMakeFiles/common.dir/grammar-parser.cpp.o
[ 87%] Linking CXX static library libcommon.a
[ 87%] Built target common
[ 87%] Built target llava
[100%] Building CXX object examples/server/CMakeFiles/ext_server.dir/Users/jrp/Documents/AI/ollama/llm/ext_server/ext_server.cpp.o
[100%] Building CXX object examples/server/CMakeFiles/ext_server.dir/__/__/llama.cpp.o
[100%] Linking CXX static library libext_server.a
[100%] Built target ext_server
+ mkdir -p ../llama.cpp/build/darwin/arm64/metal/lib/
+ g++ -fPIC -g -shared -o ../llama.cpp/build/darwin/arm64/metal/lib/libext_server.dylib -arch arm64 -Wl,-force_load ../llama.cpp/build/darwin/arm64/metal/examples/server/libext_server.a ../llama.cpp/build/darwin/arm64/metal/common/libcommon.a ../llama.cpp/build/darwin/arm64/metal/libllama.a '-Wl,-rpath,$ORIGIN' -lpthread -ldl -lm -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders
+ sign ../llama.cpp/build/darwin/arm64/metal/lib/libext_server.dylib
+ '[' -n '' ']'
+ compress_libs
+ echo 'Compressing payloads to reduce overall binary size...'
Compressing payloads to reduce overall binary size...
+ pids=
+ rm -rf '../llama.cpp/build/darwin/arm64/metal/lib/*.dylib*.gz'
+ for lib in '${BUILD_DIR}/lib/*.${LIB_EXT}*'
+ pids+=' 15225'
+ echo
+ for pid in '${pids}'
+ wait 15225
+ gzip --best -f ../llama.cpp/build/darwin/arm64/metal/lib/libext_server.dylib
+ echo 'Finished compression'
Finished compression
+ cleanup
+ cd ../llama.cpp/examples/server/
+ git checkout CMakeLists.txt server.cpp
Updated 2 paths from the index
++ ls -A ../patches/01-cache.diff ../patches/02-cudaleaks.diff
+ '[' -n '../patches/01-cache.diff
../patches/02-cudaleaks.diff' ']'
+ for patch in '../patches/*.diff'
++ grep '^+++ ' ../patches/01-cache.diff
++ cut -f2 '-d '
++ cut -f2- -d/
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout examples/server/server.cpp
Updated 0 paths from the index
+ for patch in '../patches/*.diff'
++ grep '^+++ ' ../patches/02-cudaleaks.diff
++ cut -f2 '-d '
++ cut -f2- -d/
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout examples/server/server.cpp
Updated 0 paths from the index
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout ggml-cuda.cu
Updated 1 path from the index
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout ggml-cuda.h
Updated 1 path from the index
(ollama) ➜ ollama git:(main) go build .
# github.com/jmorganca/ollama/llm
llm/llm.go:47:17: undefined: gpu.CheckVRAM
llm/llm.go:58:14: undefined: gpu.GetGPUInfo
llm/llm.go:158:15: undefined: newDynExtServer
(ollama) ➜ ollama git:(main)
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2735/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/922
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/922/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/922/comments
|
https://api.github.com/repos/ollama/ollama/issues/922/events
|
https://github.com/ollama/ollama/pull/922
| 1,964,487,476
|
PR_kwDOJ0Z1Ps5d6xe-
| 922
|
add bracketed paste mode
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-26T22:53:45
| 2023-10-26T22:57:01
| 2023-10-26T22:57:00
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/922",
"html_url": "https://github.com/ollama/ollama/pull/922",
"diff_url": "https://github.com/ollama/ollama/pull/922.diff",
"patch_url": "https://github.com/ollama/ollama/pull/922.patch",
"merged_at": "2023-10-26T22:57:00"
}
|
This change allows you to cut/paste into the REPL without having to add the """ around a block of text.
I've tested it out with:
* Terminal.app
* iTerm2
* Warp
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/922/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7471
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7471/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7471/comments
|
https://api.github.com/repos/ollama/ollama/issues/7471/events
|
https://github.com/ollama/ollama/issues/7471
| 2,630,365,630
|
I_kwDOJ0Z1Ps6cyDG-
| 7,471
|
Cannot generate id_ed25519 - read-only file system
|
{
"login": "duhow",
"id": 1145001,
"node_id": "MDQ6VXNlcjExNDUwMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1145001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duhow",
"html_url": "https://github.com/duhow",
"followers_url": "https://api.github.com/users/duhow/followers",
"following_url": "https://api.github.com/users/duhow/following{/other_user}",
"gists_url": "https://api.github.com/users/duhow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duhow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duhow/subscriptions",
"organizations_url": "https://api.github.com/users/duhow/orgs",
"repos_url": "https://api.github.com/users/duhow/repos",
"events_url": "https://api.github.com/users/duhow/events{/privacy}",
"received_events_url": "https://api.github.com/users/duhow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-11-02T10:12:14
| 2024-11-17T14:16:02
| 2024-11-17T14:16:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Starting the service with `systemctl start ollama` fails because the system is **immutable** (https://github.com/ublue-os/bazzite). The `ollama` user's default `$HOME`, `/usr/share/ollama`, is not writable there.
I performed the normal setup with `sudo` rather than a user-based install.
```sh
curl -fsSL https://ollama.com/install.sh | sudo sh
```
:memo: Output:
```
nov 02 10:34:04 geekom ollama[227009]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
nov 02 10:34:04 geekom ollama[227009]: Error: could not create directory mkdir /usr/share/ollama: read-only file system
Process: 241387 ExecStart=/usr/local/bin/ollama serve (code=exited, status=1/FAILURE)
Main PID: 241387 (code=exited, status=1/FAILURE)
```
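One possible workaround (a sketch, not an official fix — the `/var/lib/ollama` location is an assumption) is to point the service's `HOME` and model store at a writable path via a systemd drop-in, e.g. with `sudo systemctl edit ollama`:

```ini
# Hypothetical drop-in: /etc/systemd/system/ollama.service.d/override.conf
[Service]
# Redirect HOME (where the id_ed25519 key is generated) and the
# model store to a writable location on the immutable host
Environment="HOME=/var/lib/ollama"
Environment="OLLAMA_MODELS=/var/lib/ollama/models"
```

followed by creating the directory (`sudo mkdir -p /var/lib/ollama && sudo chown ollama:ollama /var/lib/ollama`), then `sudo systemctl daemon-reload && sudo systemctl restart ollama`.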
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.14
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7471/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7667
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7667/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7667/comments
|
https://api.github.com/repos/ollama/ollama/issues/7667/events
|
https://github.com/ollama/ollama/pull/7667
| 2,658,819,958
|
PR_kwDOJ0Z1Ps6B6-pg
| 7,667
|
Support Multiple LoRA Adapters, Closes #7627
|
{
"login": "ItzCrazyKns",
"id": 95534749,
"node_id": "U_kgDOBbG-nQ",
"avatar_url": "https://avatars.githubusercontent.com/u/95534749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ItzCrazyKns",
"html_url": "https://github.com/ItzCrazyKns",
"followers_url": "https://api.github.com/users/ItzCrazyKns/followers",
"following_url": "https://api.github.com/users/ItzCrazyKns/following{/other_user}",
"gists_url": "https://api.github.com/users/ItzCrazyKns/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ItzCrazyKns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ItzCrazyKns/subscriptions",
"organizations_url": "https://api.github.com/users/ItzCrazyKns/orgs",
"repos_url": "https://api.github.com/users/ItzCrazyKns/repos",
"events_url": "https://api.github.com/users/ItzCrazyKns/events{/privacy}",
"received_events_url": "https://api.github.com/users/ItzCrazyKns/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-11-14T13:23:40
| 2024-11-27T19:00:41
| 2024-11-27T19:00:05
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7667",
"html_url": "https://github.com/ollama/ollama/pull/7667",
"diff_url": "https://github.com/ollama/ollama/pull/7667.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7667.patch",
"merged_at": "2024-11-27T19:00:04"
}
|
Hi, I've updated the llama server to handle multiple LoRA adapters. Previously, the server supported only one LoRA adapter, limiting users who needed to apply multiple adapters for advanced fine-tuning.
Changes made:
- Command-line parsing:
  - Updated to accept multiple `--lora` flags.
  - Introduced `multiLPath` to handle multiple LoRA paths.
- Model loading:
  - Modified the `loadModel` function to loop through and apply each specified LoRA adapter.
  - Removed the restriction in `llm/server.go` that only one adapter can be used at a time.
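The actual change is in Go, but the repeatable-flag idea can be illustrated with a short, hypothetical Python sketch (the name `lora_paths` is illustrative, not from the patch):

```python
import argparse

# Hypothetical illustration of accepting --lora more than once:
# action="append" collects every occurrence into a list, which the
# loader can then iterate to apply each adapter in turn.
parser = argparse.ArgumentParser()
parser.add_argument("--lora", action="append", default=[],
                    dest="lora_paths",
                    help="path to a LoRA adapter (repeatable)")

args = parser.parse_args(["--lora", "a.gguf", "--lora", "b.gguf"])
print(args.lora_paths)  # ['a.gguf', 'b.gguf']
```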
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7667/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5680
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5680/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5680/comments
|
https://api.github.com/repos/ollama/ollama/issues/5680/events
|
https://github.com/ollama/ollama/issues/5680
| 2,407,155,838
|
I_kwDOJ0Z1Ps6Pekh-
| 5,680
|
Extremely slow on Mac M1 chip
|
{
"login": "lulunac27a",
"id": 100660343,
"node_id": "U_kgDOBf_0dw",
"avatar_url": "https://avatars.githubusercontent.com/u/100660343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lulunac27a",
"html_url": "https://github.com/lulunac27a",
"followers_url": "https://api.github.com/users/lulunac27a/followers",
"following_url": "https://api.github.com/users/lulunac27a/following{/other_user}",
"gists_url": "https://api.github.com/users/lulunac27a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lulunac27a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lulunac27a/subscriptions",
"organizations_url": "https://api.github.com/users/lulunac27a/orgs",
"repos_url": "https://api.github.com/users/lulunac27a/repos",
"events_url": "https://api.github.com/users/lulunac27a/events{/privacy}",
"received_events_url": "https://api.github.com/users/lulunac27a/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-07-13T20:33:58
| 2024-09-26T13:43:22
| 2024-07-23T20:55:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I tried chatting using Llama from Meta AI. While an answer is generating, my computer becomes very slow and sometimes freezes (e.g. the pointer doesn't move when I use the trackpad). It takes a few minutes to fully generate an answer to a question. I use an Apple M1 chip with 8 GB of RAM.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.2.2
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5680/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2393
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2393/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2393/comments
|
https://api.github.com/repos/ollama/ollama/issues/2393/events
|
https://github.com/ollama/ollama/issues/2393
| 2,123,591,544
|
I_kwDOJ0Z1Ps5-k294
| 2,393
|
Inquiry on Optimal CPU and GPU Configurations for LLaMA 2(70B)
|
{
"login": "gautam-fairpe",
"id": 127822235,
"node_id": "U_kgDOB55pmw",
"avatar_url": "https://avatars.githubusercontent.com/u/127822235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gautam-fairpe",
"html_url": "https://github.com/gautam-fairpe",
"followers_url": "https://api.github.com/users/gautam-fairpe/followers",
"following_url": "https://api.github.com/users/gautam-fairpe/following{/other_user}",
"gists_url": "https://api.github.com/users/gautam-fairpe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gautam-fairpe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gautam-fairpe/subscriptions",
"organizations_url": "https://api.github.com/users/gautam-fairpe/orgs",
"repos_url": "https://api.github.com/users/gautam-fairpe/repos",
"events_url": "https://api.github.com/users/gautam-fairpe/events{/privacy}",
"received_events_url": "https://api.github.com/users/gautam-fairpe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-02-07T18:04:12
| 2024-05-07T00:10:37
| 2024-05-07T00:10:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am currently exploring the capabilities of LLaMA 2 for various NLP tasks and am in the process of setting up the necessary hardware environment to ensure optimal performance. Given the complexity and resource-intensive nature of LLaMA 2(70B), I am seeking advice on the most suitable CPU and GPU configurations that can deliver the best performance for training and inference tasks with this model.
Specifically, I am interested in:
- Recommendations for CPU and GPU models that are known to work well with LLaMA 2, considering both performance and cost-efficiency.
- Any available estimation charts or benchmarks that illustrate the performance of LLaMA 2 with different hardware configurations. This information would be incredibly helpful for planning hardware investments and understanding the expected model throughput and latency.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2393/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7957
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7957/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7957/comments
|
https://api.github.com/repos/ollama/ollama/issues/7957/events
|
https://github.com/ollama/ollama/pull/7957
| 2,721,423,176
|
PR_kwDOJ0Z1Ps6EPPqm
| 7,957
|
merge llama/ggml into ml/backend/ggml
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-05T21:09:11
| 2025-01-10T19:30:25
| 2025-01-10T19:30:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7957",
"html_url": "https://github.com/ollama/ollama/pull/7957",
"diff_url": "https://github.com/ollama/ollama/pull/7957.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7957.patch",
"merged_at": "2025-01-10T19:30:23"
}
|
Branched from #7954 and #7875
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7957/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6945
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6945/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6945/comments
|
https://api.github.com/repos/ollama/ollama/issues/6945/events
|
https://github.com/ollama/ollama/pull/6945
| 2,546,654,764
|
PR_kwDOJ0Z1Ps58lhm8
| 6,945
|
Update README.md - Library - Haverscript
|
{
"login": "andygill",
"id": 20696,
"node_id": "MDQ6VXNlcjIwNjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/20696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andygill",
"html_url": "https://github.com/andygill",
"followers_url": "https://api.github.com/users/andygill/followers",
"following_url": "https://api.github.com/users/andygill/following{/other_user}",
"gists_url": "https://api.github.com/users/andygill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andygill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andygill/subscriptions",
"organizations_url": "https://api.github.com/users/andygill/orgs",
"repos_url": "https://api.github.com/users/andygill/repos",
"events_url": "https://api.github.com/users/andygill/events{/privacy}",
"received_events_url": "https://api.github.com/users/andygill/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-25T00:46:34
| 2024-11-21T08:11:40
| 2024-11-21T08:11:39
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6945",
"html_url": "https://github.com/ollama/ollama/pull/6945",
"diff_url": "https://github.com/ollama/ollama/pull/6945.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6945.patch",
"merged_at": "2024-11-21T08:11:39"
}
|
This PR adds a link to Haverscript.
Haverscript uses classical functional programming techniques to provide a composable interface for interacting with ollama-hosted LLMs.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6945/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5005
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5005/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5005/comments
|
https://api.github.com/repos/ollama/ollama/issues/5005/events
|
https://github.com/ollama/ollama/issues/5005
| 2,349,538,106
|
I_kwDOJ0Z1Ps6MCxs6
| 5,005
|
ollama create -f Modelfile doesn't process utf-8 encoding correctly
|
{
"login": "MGdesigner",
"id": 4480740,
"node_id": "MDQ6VXNlcjQ0ODA3NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4480740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MGdesigner",
"html_url": "https://github.com/MGdesigner",
"followers_url": "https://api.github.com/users/MGdesigner/followers",
"following_url": "https://api.github.com/users/MGdesigner/following{/other_user}",
"gists_url": "https://api.github.com/users/MGdesigner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MGdesigner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MGdesigner/subscriptions",
"organizations_url": "https://api.github.com/users/MGdesigner/orgs",
"repos_url": "https://api.github.com/users/MGdesigner/repos",
"events_url": "https://api.github.com/users/MGdesigner/events{/privacy}",
"received_events_url": "https://api.github.com/users/MGdesigner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-06-12T19:31:22
| 2024-06-14T07:20:13
| 2024-06-14T07:20:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Today I upgraded Ollama to version 0.1.43 from the official site. After creating a new model, I found that the system prompt in my Modelfile (written with CJK characters) didn't work. I checked it with
> ollama show mymodel:latest --modelfile
and found that the model's Modelfile is not encoded correctly. I also checked with my old Modelfile, and the situation is the same. Only models created with 0.1.42 or earlier work correctly. Please fix it.
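As a quick sanity check (a sketch, not Ollama's internal logic — the Modelfile contents are illustrative), one can verify whether a Modelfile's bytes are valid UTF-8:

```python
def is_valid_utf8(data: bytes) -> bool:
    """Return True if data decodes cleanly as UTF-8."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# A Modelfile with a CJK system prompt, as in the report
modelfile = 'FROM llama3\nSYSTEM """你是說故事的助手"""\n'.encode("utf-8")
print(is_valid_utf8(modelfile))        # True
print(is_valid_utf8(b"\xff\xfe\x00"))  # False: 0xff is never valid UTF-8
```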
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.43
|
{
"login": "MGdesigner",
"id": 4480740,
"node_id": "MDQ6VXNlcjQ0ODA3NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4480740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MGdesigner",
"html_url": "https://github.com/MGdesigner",
"followers_url": "https://api.github.com/users/MGdesigner/followers",
"following_url": "https://api.github.com/users/MGdesigner/following{/other_user}",
"gists_url": "https://api.github.com/users/MGdesigner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MGdesigner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MGdesigner/subscriptions",
"organizations_url": "https://api.github.com/users/MGdesigner/orgs",
"repos_url": "https://api.github.com/users/MGdesigner/repos",
"events_url": "https://api.github.com/users/MGdesigner/events{/privacy}",
"received_events_url": "https://api.github.com/users/MGdesigner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5005/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6310
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6310/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6310/comments
|
https://api.github.com/repos/ollama/ollama/issues/6310/events
|
https://github.com/ollama/ollama/issues/6310
| 2,459,597,443
|
I_kwDOJ0Z1Ps6SmnqD
| 6,310
|
llama3.1 8b template seems to be different from that in huggingface
|
{
"login": "fzyzcjy",
"id": 5236035,
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fzyzcjy",
"html_url": "https://github.com/fzyzcjy",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-08-11T13:35:21
| 2024-12-25T22:27:41
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, thanks for the tool! When reading https://ollama.com/library/llama3.1:8b-instruct-q4_K_M/blobs/11ce4ee3e170, the template seems to differ from https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/blob/main/tokenizer_config.json#L2053. For example, it does not mention `Cutting Knowledge Date: December 2023`.
Is this expected, or is it a bug? I have heard that small models can be sensitive to prompt templates, so making this one exactly match the official template may be useful.
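For anyone wanting to compare the two templates mechanically, a minimal sketch (the model name and a local server are assumptions, not taken from the report) is to query Ollama's `POST /api/show` endpoint, whose response includes the `template` the model was packaged with:

```python
import json

def show_payload(model_name: str) -> bytes:
    """Build the JSON body for Ollama's POST /api/show endpoint.

    The endpoint's response includes a "template" field that can be
    diffed line by line against the chat_template in the upstream
    Hugging Face tokenizer_config.json.
    """
    return json.dumps({"name": model_name}).encode()

# POST this body to http://localhost:11434/api/show (local server assumed):
print(show_payload("llama3.1:8b-instruct-q4_K_M").decode())
# -> {"name": "llama3.1:8b-instruct-q4_K_M"}
```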
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6310/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/ollama/ollama/issues/6310/timeline
| null |
reopened
| false
|
https://api.github.com/repos/ollama/ollama/issues/3002
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3002/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3002/comments
|
https://api.github.com/repos/ollama/ollama/issues/3002/events
|
https://github.com/ollama/ollama/issues/3002
| 2,176,088,229
|
I_kwDOJ0Z1Ps6BtHil
| 3,002
|
Disable Chat History/Logging Option
|
{
"login": "trymeouteh",
"id": 31172274,
"node_id": "MDQ6VXNlcjMxMTcyMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trymeouteh",
"html_url": "https://github.com/trymeouteh",
"followers_url": "https://api.github.com/users/trymeouteh/followers",
"following_url": "https://api.github.com/users/trymeouteh/following{/other_user}",
"gists_url": "https://api.github.com/users/trymeouteh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trymeouteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trymeouteh/subscriptions",
"organizations_url": "https://api.github.com/users/trymeouteh/orgs",
"repos_url": "https://api.github.com/users/trymeouteh/repos",
"events_url": "https://api.github.com/users/trymeouteh/events{/privacy}",
"received_events_url": "https://api.github.com/users/trymeouteh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-03-08T13:53:27
| 2024-05-18T18:51:58
| 2024-05-18T18:51:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please add a setting to disable the chat history/logging option, and consider having it disabled by default. This would improve privacy by preventing others from seeing what a user asked the AI in the past.
This would be an especially useful feature alongside user management:
https://github.com/ollama/ollama/issues/2863
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3002/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3002/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7042
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7042/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7042/comments
|
https://api.github.com/repos/ollama/ollama/issues/7042/events
|
https://github.com/ollama/ollama/pull/7042
| 2,556,047,910
|
PR_kwDOJ0Z1Ps59Fdbn
| 7,042
|
Updated a few typos in build_remote.py
|
{
"login": "vignesh1507",
"id": 143084478,
"node_id": "U_kgDOCIdLvg",
"avatar_url": "https://avatars.githubusercontent.com/u/143084478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vignesh1507",
"html_url": "https://github.com/vignesh1507",
"followers_url": "https://api.github.com/users/vignesh1507/followers",
"following_url": "https://api.github.com/users/vignesh1507/following{/other_user}",
"gists_url": "https://api.github.com/users/vignesh1507/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vignesh1507/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vignesh1507/subscriptions",
"organizations_url": "https://api.github.com/users/vignesh1507/orgs",
"repos_url": "https://api.github.com/users/vignesh1507/repos",
"events_url": "https://api.github.com/users/vignesh1507/events{/privacy}",
"received_events_url": "https://api.github.com/users/vignesh1507/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-09-30T09:14:11
| 2024-11-21T19:25:40
| 2024-11-21T19:25:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7042",
"html_url": "https://github.com/ollama/ollama/pull/7042",
"diff_url": "https://github.com/ollama/ollama/pull/7042.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7042.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7042/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5872
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5872/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5872/comments
|
https://api.github.com/repos/ollama/ollama/issues/5872/events
|
https://github.com/ollama/ollama/pull/5872
| 2,424,996,900
|
PR_kwDOJ0Z1Ps52NGsP
| 5,872
|
[Ascend] add ascend npu support
|
{
"login": "zhongTao99",
"id": 56594937,
"node_id": "MDQ6VXNlcjU2NTk0OTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/56594937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhongTao99",
"html_url": "https://github.com/zhongTao99",
"followers_url": "https://api.github.com/users/zhongTao99/followers",
"following_url": "https://api.github.com/users/zhongTao99/following{/other_user}",
"gists_url": "https://api.github.com/users/zhongTao99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhongTao99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhongTao99/subscriptions",
"organizations_url": "https://api.github.com/users/zhongTao99/orgs",
"repos_url": "https://api.github.com/users/zhongTao99/repos",
"events_url": "https://api.github.com/users/zhongTao99/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhongTao99/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 47
| 2024-07-23T11:44:09
| 2025-01-26T09:28:23
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5872",
"html_url": "https://github.com/ollama/ollama/pull/5872",
"diff_url": "https://github.com/ollama/ollama/pull/5872.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5872.patch",
"merged_at": null
}
|
This is a draft for Ascend NPU support. It can get GPU info for the NPU, and still needs optimization.
Fixes:
https://github.com/ollama/ollama/issues/5315
A **pre-built Ollama** that supports the Huawei Atlas A2 series as the backend can be obtained from the following:
**docker image:**
`docker pull leopony/ollama:latest`
docker run command example:
```
docker run \
  --name ollama \
  --device /dev/davinci0 \
  --device /dev/davinci_manager \
  --device /dev/devmm_svm \
  --device /dev/hisi_hdc \
  -v /usr/local/dcmi:/usr/local/dcmi \
  -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
  -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
  -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
  -v /etc/ascend_install.info:/etc/ascend_install.info \
  -p 11434:11434 \
  -it leopony/ollama:latest /bin/bash
```
**Binary tarball:**
https://github.com/leo-pony/ollama/blob/ollama_bin/ollama_Atlas_A2_series_cann8.0.rc2.bin.gz
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5872/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/5872/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1268
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1268/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1268/comments
|
https://api.github.com/repos/ollama/ollama/issues/1268/events
|
https://github.com/ollama/ollama/pull/1268
| 2,010,122,948
|
PR_kwDOJ0Z1Ps5gVHQf
| 1,268
|
complete gguf upgrade
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-11-24T18:59:51
| 2023-12-15T19:39:09
| 2023-12-15T19:39:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1268",
"html_url": "https://github.com/ollama/ollama/pull/1268",
"diff_url": "https://github.com/ollama/ollama/pull/1268.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1268.patch",
"merged_at": null
}
|
- remove ggml runner
- automatically pull gguf models when ggml detected
- tell users to update to gguf in the case automatic pull fails
On running a ggml model, a gguf model will be automatically pulled before running:
```
ollama run orca-mini
This model is no longer compatible with Ollama. Pulling a new version...
pulling manifest
pulling 66002b78c70a... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏(2.0 GB/2.0 GB)
pulling dd90d0f2b7ee... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏(95 B/95 B)
pulling 93ca9b3d83dc... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏(89 B/89 B)
pulling 33eb43a1488d... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏(52 B/52 B)
pulling fd52b10ee3ee... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏(455 B/455 B)
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> hello
```
When a gguf model is not available:
```
ollama run custom-ggml
pulling manifest
Error: pull model manifest: file does not exist
Error: unsupported model, please update this model to gguf format
```
Create from a GGML library model:
```
ollama create mario -f ~/models/mario/Modelfile
transferring model data
reading model metadata
updating base model
pulling manifest
pulling 22f7f8ef5f4c... 100% ▕███████████████████████████████████████▏(3.8 GB/3.8 GB)
pulling 8c17c2ebb0ea... 100% ▕███████████████████████████████████████▏(7.0 KB/7.0 KB)
pulling 7c23fb36d801... 100% ▕███████████████████████████████████████▏(4.8 KB/4.8 KB)
pulling 2e0493f67d0c... 100% ▕███████████████████████████████████████▏(59 B/59 B)
pulling 2759286baa87... 100% ▕███████████████████████████████████████▏(105 B/105 B)
pulling 5407e3188df9... 100% ▕███████████████████████████████████████▏(529 B/529 B)
verifying sha256 digest
writing manifest
removing any unused layers
success
... etc
```
Create from a custom ggml model:
```
ollama create orca-ggml -f ~/models/orca-ggml/Modelfile
transferring model data
creating model layer
Error: model binary specified in FROM field is not a valid gguf format model, unsupported model format
```
API request from a GGML file error (same for embeddings and generate):
```
{
"error": "unsupported model format: this model may be incompatible with your version of ollama. If you previously pulled this model, try updating it by running `ollama pull orca-ggml:latest`"
}
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1268/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1278
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1278/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1278/comments
|
https://api.github.com/repos/ollama/ollama/issues/1278/events
|
https://github.com/ollama/ollama/issues/1278
| 2,011,124,354
|
I_kwDOJ0Z1Ps5331KC
| 1,278
|
Install clobbers /etc/systemd/system/ollama.service file destroying any custom configurations like specifying IP or PORT being served or preventing cors errors
|
{
"login": "Dougie777",
"id": 77511128,
"node_id": "MDQ6VXNlcjc3NTExMTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/77511128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dougie777",
"html_url": "https://github.com/Dougie777",
"followers_url": "https://api.github.com/users/Dougie777/followers",
"following_url": "https://api.github.com/users/Dougie777/following{/other_user}",
"gists_url": "https://api.github.com/users/Dougie777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dougie777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dougie777/subscriptions",
"organizations_url": "https://api.github.com/users/Dougie777/orgs",
"repos_url": "https://api.github.com/users/Dougie777/repos",
"events_url": "https://api.github.com/users/Dougie777/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dougie777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-11-26T17:19:20
| 2024-01-20T00:10:10
| 2024-01-20T00:10:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Upgrading to the latest version clobbers my /etc/systemd/system/ollama.service file. If the file exists, it should not be overwritten; alternatively, the distribution should only ship a sample file, e.g. /etc/systemd/system/ollama.service.sample.
To reproduce:
1. Install Ollama as a service following the docs.
2. Customize /etc/systemd/system/ollama.service, e.g. by adding:
Environment=OLLAMA_HOST=0.0.0.0
Environment=OLLAMA_ORIGINS=*
3. Upgrade to the latest Ollama version.
Actual behavior:
/etc/systemd/system/ollama.service is overwritten, destroying any customizations to this file.
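A conventional systemd-side workaround (my suggestion, not from the original report) is to keep customizations in a drop-in override rather than editing the unit file itself; drop-ins live in a separate directory that a reinstall of the main unit does not touch:

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment=OLLAMA_HOST=0.0.0.0
Environment=OLLAMA_ORIGINS=*
```

After creating the file (e.g. via `sudo systemctl edit ollama`), apply it with `sudo systemctl daemon-reload && sudo systemctl restart ollama`.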
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1278/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5524
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5524/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5524/comments
|
https://api.github.com/repos/ollama/ollama/issues/5524/events
|
https://github.com/ollama/ollama/pull/5524
| 2,393,830,627
|
PR_kwDOJ0Z1Ps50mhRC
| 5,524
|
allow converting adapters from npz
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-07T01:30:58
| 2024-08-12T21:34:38
| 2024-08-12T21:34:38
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5524",
"html_url": "https://github.com/ollama/ollama/pull/5524",
"diff_url": "https://github.com/ollama/ollama/pull/5524.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5524.patch",
"merged_at": null
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5524/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1574
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1574/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1574/comments
|
https://api.github.com/repos/ollama/ollama/issues/1574/events
|
https://github.com/ollama/ollama/issues/1574
| 2,045,344,153
|
I_kwDOJ0Z1Ps556XmZ
| 1,574
|
Sending several requests to the server in quick succession appears to cause some responses to fail
|
{
"login": "charstorm",
"id": 126527238,
"node_id": "U_kgDOB4qnBg",
"avatar_url": "https://avatars.githubusercontent.com/u/126527238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/charstorm",
"html_url": "https://github.com/charstorm",
"followers_url": "https://api.github.com/users/charstorm/followers",
"following_url": "https://api.github.com/users/charstorm/following{/other_user}",
"gists_url": "https://api.github.com/users/charstorm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/charstorm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charstorm/subscriptions",
"organizations_url": "https://api.github.com/users/charstorm/orgs",
"repos_url": "https://api.github.com/users/charstorm/repos",
"events_url": "https://api.github.com/users/charstorm/events{/privacy}",
"received_events_url": "https://api.github.com/users/charstorm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-12-17T19:25:20
| 2023-12-17T20:15:23
| 2023-12-17T20:15:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
First, I want to thank everyone working on this project. I appreciate your efforts.
I was testing the Ollama server and noticed that it sometimes gave empty responses. I found that this happens when a request is made immediately after the previous one; adding a short sleep seems to work around the issue. Here is some code that demonstrates it:
```python
import time
import json
import requests

DEFAULT_URL = "http://localhost:11434/api/generate"
DEFAULT_MODEL = "mistral"


def generate(prompt, model=DEFAULT_MODEL, url=DEFAULT_URL):
    post_data = {
        "prompt": prompt, "model": model, "stream": True
    }
    response = requests.post(url, json=post_data)
    response.raise_for_status()
    parts = []
    for line in response.iter_lines():
        body = json.loads(line)
        if "error" in body:
            raise Exception(body["error"])
        content = body.get("response", "")
        parts.append(content)
        if body.get("done"):
            break
    return "".join(parts).strip()


def test():
    prompts = [
        "What is radian?", "What is meridian?",
        "What is steradian?", "What is circadian?"
    ]
    for prompt in prompts:
        print("Prompt:", prompt)
        response = generate(prompt)
        if not response:
            print("ERROR: got empty response!")
        else:
            print("Response:", response)
        print("----" * 20)
        # time.sleep(0.3)  # <-- this is needed to avoid empty responses


if __name__ == "__main__":
    test()
```
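One client-side mitigation while the root cause is investigated (my sketch, not part of the original report; the `retry_nonempty` helper and its parameters are hypothetical) is to retry with a short delay only when an empty response comes back, instead of sleeping unconditionally between every request:

```python
import time

def retry_nonempty(fn, attempts=3, delay=0.3):
    """Call fn() until it returns a non-empty string.

    Sleeps `delay` seconds between tries; returns the last result
    (possibly empty) if all attempts fail. A crude workaround for a
    server that occasionally returns empty bodies under rapid
    back-to-back requests.
    """
    result = ""
    for _ in range(attempts):
        result = fn()
        if result:
            break
        time.sleep(delay)
    return result

# Demo with a flaky stand-in for generate():
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    return "" if calls["n"] < 2 else "ok"

print(retry_nonempty(flaky, delay=0.01))  # -> ok
```

In the script above, this would wrap the `generate(prompt)` call, e.g. `retry_nonempty(lambda: generate(prompt))`.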
|
{
"login": "charstorm",
"id": 126527238,
"node_id": "U_kgDOB4qnBg",
"avatar_url": "https://avatars.githubusercontent.com/u/126527238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/charstorm",
"html_url": "https://github.com/charstorm",
"followers_url": "https://api.github.com/users/charstorm/followers",
"following_url": "https://api.github.com/users/charstorm/following{/other_user}",
"gists_url": "https://api.github.com/users/charstorm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/charstorm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charstorm/subscriptions",
"organizations_url": "https://api.github.com/users/charstorm/orgs",
"repos_url": "https://api.github.com/users/charstorm/repos",
"events_url": "https://api.github.com/users/charstorm/events{/privacy}",
"received_events_url": "https://api.github.com/users/charstorm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1574/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5618
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5618/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5618/comments
|
https://api.github.com/repos/ollama/ollama/issues/5618/events
|
https://github.com/ollama/ollama/pull/5618
| 2,401,860,057
|
PR_kwDOJ0Z1Ps51BqXR
| 5,618
|
OpenAI: add suffix to docs
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-10T22:41:08
| 2024-07-16T23:53:07
| 2024-07-16T23:53:07
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5618",
"html_url": "https://github.com/ollama/ollama/pull/5618",
"diff_url": "https://github.com/ollama/ollama/pull/5618.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5618.patch",
"merged_at": null
}
| null |
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5618/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1514
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1514/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1514/comments
|
https://api.github.com/repos/ollama/ollama/issues/1514/events
|
https://github.com/ollama/ollama/issues/1514
| 2,040,816,473
|
I_kwDOJ0Z1Ps55pGNZ
| 1,514
|
The code below appears to ignore CUDA_VISIBLE_DEVICES in its calculation, i.e. any GPU you won't use will still be counted toward VRAM.
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-12-14T03:00:05
| 2024-04-23T15:31:40
| 2024-04-23T15:31:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```go
func CheckVRAM() (int64, error) {
cmd := exec.Command("nvidia-smi", "--query-gpu=memory.free", "--format=csv,noheader,nounits")
var stdout bytes.Buffer
cmd.Stdout = &stdout
err := cmd.Run()
if err != nil {
return 0, errNvidiaSMI
}
var freeMiB int64
scanner := bufio.NewScanner(&stdout)
for scanner.Scan() {
line := scanner.Text()
if strings.Contains(line, "[Insufficient Permissions]") {
return 0, fmt.Errorf("GPU support may not enabled, check you have installed GPU drivers and have the necessary permissions to run nvidia-smi")
}
vram, err := strconv.ParseInt(strings.TrimSpace(line), 10, 64)
if err != nil {
return 0, fmt.Errorf("failed to parse available VRAM: %v", err)
}
freeMiB += vram
}
freeBytes := freeMiB * 1024 * 1024
if freeBytes < 2*format.GigaByte {
log.Printf("less than 2 GB VRAM available")
return 0, errAvailableVRAM
}
return freeBytes, nil
}
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1514/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/1514/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6405
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6405/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6405/comments
|
https://api.github.com/repos/ollama/ollama/issues/6405/events
|
https://github.com/ollama/ollama/issues/6405
| 2,472,007,294
|
I_kwDOJ0Z1Ps6TV9Z-
| 6,405
|
Implement layer-by-layer paging from CPU RAM into GPU for large models.
|
{
"login": "Speedway1",
"id": 100301611,
"node_id": "U_kgDOBfp7Kw",
"avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Speedway1",
"html_url": "https://github.com/Speedway1",
"followers_url": "https://api.github.com/users/Speedway1/followers",
"following_url": "https://api.github.com/users/Speedway1/following{/other_user}",
"gists_url": "https://api.github.com/users/Speedway1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Speedway1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Speedway1/subscriptions",
"organizations_url": "https://api.github.com/users/Speedway1/orgs",
"repos_url": "https://api.github.com/users/Speedway1/repos",
"events_url": "https://api.github.com/users/Speedway1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Speedway1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 11
| 2024-08-18T14:49:10
| 2024-08-18T23:22:52
| 2024-08-18T19:54:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
While the GPU makers want us to believe that the main crunch point is not enough GPU power, the real issue with self-hosted LLMs is lack of memory, especially when we're inferencing at large context windows (which is where the magic starts to happen).
At the moment Ollama loads all the model's layers and does a very good job of trying to fit everything into the GPU and then spilling over to the CPU. But the CPU is super slow.
A better way to handle large models might be:
1) Load the entire model into RAM and set aside the KV storage, etc for large contexts in the GPU's VRAM.
2) Work out how much VRAM is available and then translate that into how many layers could be loaded into VRAM. Let's call it _n_ layers.
3) Load the first _n_ layers; when inference needs to move to layer _n_+1, replace the current _n_ layers in VRAM with the next _n_ layers, and so on until the last layer is reached.
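A toy sketch of this windowed scheme (the names are purely illustrative; a real implementation would copy weight tensors from RAM into VRAM rather than iterate over Go closures):

```go
// Toy sketch of windowed layer paging: run n layers at a time, as if
// each window were paged from RAM into VRAM. Illustrative only.
package main

import "fmt"

type layer func(x int) int

// pageAndRun applies layers to x in windows of n layers.
func pageAndRun(layers []layer, n, x int) int {
	for start := 0; start < len(layers); start += n {
		end := start + n
		if end > len(layers) {
			end = len(layers)
		}
		// In a real system this is where the next n layers would be
		// copied from RAM into VRAM, replacing the previous window.
		for _, l := range layers[start:end] {
			x = l(x)
		}
	}
	return x
}

func main() {
	// Toy model: ten "layers" that each add 1 to the activations.
	layers := make([]layer, 10)
	for i := range layers {
		layers[i] = func(v int) int { return v + 1 }
	}
	fmt.Println(pageAndRun(layers, 4, 0)) // prints 10
}
```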
Some people have implemented single-layer paging, but by paging as many layers as possible at once there should, in theory at least, be some efficiency gains. Similarly, keeping the layers in RAM rather than reading them off disk maximizes access and transfer speeds.
Where more than one GPU card is in the machine this might make for an even more efficient algorithm, as layers can be loaded into the dormant GPU while processing continues with the active GPU.
Is this possible?
|
{
"login": "Speedway1",
"id": 100301611,
"node_id": "U_kgDOBfp7Kw",
"avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Speedway1",
"html_url": "https://github.com/Speedway1",
"followers_url": "https://api.github.com/users/Speedway1/followers",
"following_url": "https://api.github.com/users/Speedway1/following{/other_user}",
"gists_url": "https://api.github.com/users/Speedway1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Speedway1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Speedway1/subscriptions",
"organizations_url": "https://api.github.com/users/Speedway1/orgs",
"repos_url": "https://api.github.com/users/Speedway1/repos",
"events_url": "https://api.github.com/users/Speedway1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Speedway1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6405/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2013
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2013/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2013/comments
|
https://api.github.com/repos/ollama/ollama/issues/2013/events
|
https://github.com/ollama/ollama/pull/2013
| 2,083,245,093
|
PR_kwDOJ0Z1Ps5kJ6Tg
| 2,013
|
Add support for min_p sampling (original by @Robitx)
|
{
"login": "nathanpbell",
"id": 3697,
"node_id": "MDQ6VXNlcjM2OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathanpbell",
"html_url": "https://github.com/nathanpbell",
"followers_url": "https://api.github.com/users/nathanpbell/followers",
"following_url": "https://api.github.com/users/nathanpbell/following{/other_user}",
"gists_url": "https://api.github.com/users/nathanpbell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nathanpbell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nathanpbell/subscriptions",
"organizations_url": "https://api.github.com/users/nathanpbell/orgs",
"repos_url": "https://api.github.com/users/nathanpbell/repos",
"events_url": "https://api.github.com/users/nathanpbell/events{/privacy}",
"received_events_url": "https://api.github.com/users/nathanpbell/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-01-16T08:04:30
| 2024-05-22T10:48:24
| 2024-01-16T09:00:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2013",
"html_url": "https://github.com/ollama/ollama/pull/2013",
"diff_url": "https://github.com/ollama/ollama/pull/2013.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2013.patch",
"merged_at": null
}
|
This is an updated copy of @Robitx's pull request to add support for min_p sampling, as implemented in llama.cpp.
It differs from @Robitx's pull request only in that it resolves the merge conflict that occurred after he submitted his original pull request. Feel free to ignore this and pull in his instead (if the merge conflict is resolved).
|
{
"login": "nathanpbell",
"id": 3697,
"node_id": "MDQ6VXNlcjM2OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathanpbell",
"html_url": "https://github.com/nathanpbell",
"followers_url": "https://api.github.com/users/nathanpbell/followers",
"following_url": "https://api.github.com/users/nathanpbell/following{/other_user}",
"gists_url": "https://api.github.com/users/nathanpbell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nathanpbell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nathanpbell/subscriptions",
"organizations_url": "https://api.github.com/users/nathanpbell/orgs",
"repos_url": "https://api.github.com/users/nathanpbell/repos",
"events_url": "https://api.github.com/users/nathanpbell/events{/privacy}",
"received_events_url": "https://api.github.com/users/nathanpbell/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2013/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/41
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/41/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/41/comments
|
https://api.github.com/repos/ollama/ollama/issues/41/events
|
https://github.com/ollama/ollama/pull/41
| 1,791,993,700
|
PR_kwDOJ0Z1Ps5U1YcJ
| 41
|
tcp socket
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-07-06T17:56:25
| 2023-07-06T18:15:50
| 2023-07-06T18:15:32
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/41",
"html_url": "https://github.com/ollama/ollama/pull/41",
"diff_url": "https://github.com/ollama/ollama/pull/41.diff",
"patch_url": "https://github.com/ollama/ollama/pull/41.patch",
"merged_at": "2023-07-06T18:15:32"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/41/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/41/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1562
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1562/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1562/comments
|
https://api.github.com/repos/ollama/ollama/issues/1562/events
|
https://github.com/ollama/ollama/issues/1562
| 2,044,697,361
|
I_kwDOJ0Z1Ps5535sR
| 1,562
|
Inquiries Regarding Ollama Tool Usage
|
{
"login": "ewijaya",
"id": 9668738,
"node_id": "MDQ6VXNlcjk2Njg3Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9668738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ewijaya",
"html_url": "https://github.com/ewijaya",
"followers_url": "https://api.github.com/users/ewijaya/followers",
"following_url": "https://api.github.com/users/ewijaya/following{/other_user}",
"gists_url": "https://api.github.com/users/ewijaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ewijaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ewijaya/subscriptions",
"organizations_url": "https://api.github.com/users/ewijaya/orgs",
"repos_url": "https://api.github.com/users/ewijaya/repos",
"events_url": "https://api.github.com/users/ewijaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/ewijaya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-16T10:24:55
| 2023-12-19T17:53:05
| 2023-12-19T17:53:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
Thanks for the Ollama tool; it's been a fantastic resource!
I have a couple of inquiries I hope you could assist me with:
1. I recently executed the following command:
```
ollama create dolphin.mistral -f Modelfile.dolphin.mistral
```
The contents of my `Modelfile.dolphin.mistral` are as follows:
```
FROM ./dolphin-2.5-mixtral-8x7b.Q4_K_M.gguf
TEMPLATE """{{ if and .First .System }}system
{{ .System }}
{{ end }}user
{{ .Prompt }}
assistant
"""
SYSTEM """You are Dolphin, an uncensored, unbiased AI assistant"""
PARAMETER stop
```
Could you please guide me on the typical duration for the `ollama create` process to complete?
2. I also noticed that the output of `ollama create` is typically saved in the following directory:
```
/usr/share/ollama/.ollama/models/
```
Is it possible to designate an alternative directory for storing this output, apart from the default location?
Sincerely,
E.W.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1562/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/917
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/917/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/917/comments
|
https://api.github.com/repos/ollama/ollama/issues/917/events
|
https://github.com/ollama/ollama/pull/917
| 1,964,079,452
|
PR_kwDOJ0Z1Ps5d5Xnq
| 917
|
fix docker build annotations
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-26T18:04:41
| 2023-10-26T19:00:34
| 2023-10-26T19:00:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/917",
"html_url": "https://github.com/ollama/ollama/pull/917",
"diff_url": "https://github.com/ollama/ollama/pull/917.diff",
"patch_url": "https://github.com/ollama/ollama/pull/917.patch",
"merged_at": "2023-10-26T19:00:34"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/917/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7070
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7070/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7070/comments
|
https://api.github.com/repos/ollama/ollama/issues/7070/events
|
https://github.com/ollama/ollama/issues/7070
| 2,560,290,013
|
I_kwDOJ0Z1Ps6Ymuzd
| 7,070
|
Warning: Could not connect to a running Ollama instance (Mac OS - Apple Silicon M2 Pro)
|
{
"login": "sohamnandi77",
"id": 56152437,
"node_id": "MDQ6VXNlcjU2MTUyNDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/56152437?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sohamnandi77",
"html_url": "https://github.com/sohamnandi77",
"followers_url": "https://api.github.com/users/sohamnandi77/followers",
"following_url": "https://api.github.com/users/sohamnandi77/following{/other_user}",
"gists_url": "https://api.github.com/users/sohamnandi77/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sohamnandi77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sohamnandi77/subscriptions",
"organizations_url": "https://api.github.com/users/sohamnandi77/orgs",
"repos_url": "https://api.github.com/users/sohamnandi77/repos",
"events_url": "https://api.github.com/users/sohamnandi77/events{/privacy}",
"received_events_url": "https://api.github.com/users/sohamnandi77/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-10-01T22:05:41
| 2025-01-30T05:06:42
| 2024-11-05T22:53:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After successfully installing Ollama on my machine, I am encountering the following warning messages when trying to run the software:
```
Warning: could not connect to a running Ollama instance
Warning: client version is 0.3.12
```
**Steps to Reproduce:**
Install Ollama on macOS using the latest release version (0.3.12).
Attempt to start or use Ollama.
The warning appears immediately after trying to connect to Ollama.
**Expected Behavior:**
Ollama should start and run without warning, successfully connecting to the instance.
**Actual Behavior:**
The warning appears, indicating that Ollama cannot connect to a running instance, despite my following the installation instructions.
**Additional Information:**
I have verified the installation steps and checked the version, but the issue persists. Please let me know if there are additional logs or configurations that I can provide to assist in resolving the issue.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.12
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7070/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3565
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3565/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3565/comments
|
https://api.github.com/repos/ollama/ollama/issues/3565/events
|
https://github.com/ollama/ollama/pull/3565
| 2,234,466,351
|
PR_kwDOJ0Z1Ps5sL8Pu
| 3,565
|
fix: rope
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-09T23:18:35
| 2024-04-24T16:14:45
| 2024-04-09T23:36:55
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3565",
"html_url": "https://github.com/ollama/ollama/pull/3565",
"diff_url": "https://github.com/ollama/ollama/pull/3565.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3565.patch",
"merged_at": "2024-04-09T23:36:55"
}
|
Some models set RopeFrequencyBase and RopeFrequencyScale. Removing these fields makes those models unusable.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3565/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4972
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4972/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4972/comments
|
https://api.github.com/repos/ollama/ollama/issues/4972/events
|
https://github.com/ollama/ollama/pull/4972
| 2,345,709,903
|
PR_kwDOJ0Z1Ps5yEs9O
| 4,972
|
fix: "Skip searching for network devices"
|
{
"login": "jayson-cloude",
"id": 62731682,
"node_id": "MDQ6VXNlcjYyNzMxNjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/62731682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayson-cloude",
"html_url": "https://github.com/jayson-cloude",
"followers_url": "https://api.github.com/users/jayson-cloude/followers",
"following_url": "https://api.github.com/users/jayson-cloude/following{/other_user}",
"gists_url": "https://api.github.com/users/jayson-cloude/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayson-cloude/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayson-cloude/subscriptions",
"organizations_url": "https://api.github.com/users/jayson-cloude/orgs",
"repos_url": "https://api.github.com/users/jayson-cloude/repos",
"events_url": "https://api.github.com/users/jayson-cloude/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayson-cloude/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-11T08:12:25
| 2024-06-15T00:04:41
| 2024-06-15T00:04:41
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4972",
"html_url": "https://github.com/ollama/ollama/pull/4972",
"diff_url": "https://github.com/ollama/ollama/pull/4972.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4972.patch",
"merged_at": "2024-06-15T00:04:41"
}
|
On an Ubuntu 24.04 computer with VMware installed, the `sudo lshw` command gets stuck: "Network interfaces" is always displayed.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4972/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5847
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5847/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5847/comments
|
https://api.github.com/repos/ollama/ollama/issues/5847/events
|
https://github.com/ollama/ollama/pull/5847
| 2,422,400,869
|
PR_kwDOJ0Z1Ps52ENrz
| 5,847
|
Reduce docker image size
|
{
"login": "yeahdongcn",
"id": 2831050,
"node_id": "MDQ6VXNlcjI4MzEwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeahdongcn",
"html_url": "https://github.com/yeahdongcn",
"followers_url": "https://api.github.com/users/yeahdongcn/followers",
"following_url": "https://api.github.com/users/yeahdongcn/following{/other_user}",
"gists_url": "https://api.github.com/users/yeahdongcn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yeahdongcn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yeahdongcn/subscriptions",
"organizations_url": "https://api.github.com/users/yeahdongcn/orgs",
"repos_url": "https://api.github.com/users/yeahdongcn/repos",
"events_url": "https://api.github.com/users/yeahdongcn/events{/privacy}",
"received_events_url": "https://api.github.com/users/yeahdongcn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-22T09:31:17
| 2024-09-03T16:25:32
| 2024-09-03T16:25:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5847",
"html_url": "https://github.com/ollama/ollama/pull/5847",
"diff_url": "https://github.com/ollama/ollama/pull/5847.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5847.patch",
"merged_at": "2024-09-03T16:25:31"
}
|
The docker image size is approximately reduced by 20MB after cleaning the apt caches.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5847/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5847/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5788
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5788/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5788/comments
|
https://api.github.com/repos/ollama/ollama/issues/5788/events
|
https://github.com/ollama/ollama/issues/5788
| 2,418,228,328
|
I_kwDOJ0Z1Ps6QIzxo
| 5,788
|
Support LoRA GGUF Adapters
|
{
"login": "suncloudsmoon",
"id": 34616349,
"node_id": "MDQ6VXNlcjM0NjE2MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/34616349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suncloudsmoon",
"html_url": "https://github.com/suncloudsmoon",
"followers_url": "https://api.github.com/users/suncloudsmoon/followers",
"following_url": "https://api.github.com/users/suncloudsmoon/following{/other_user}",
"gists_url": "https://api.github.com/users/suncloudsmoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suncloudsmoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suncloudsmoon/subscriptions",
"organizations_url": "https://api.github.com/users/suncloudsmoon/orgs",
"repos_url": "https://api.github.com/users/suncloudsmoon/repos",
"events_url": "https://api.github.com/users/suncloudsmoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/suncloudsmoon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-19T07:11:05
| 2024-09-19T21:15:52
| 2024-09-12T22:20:48
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Recently, [llama.cpp added support for LoRA GGUF adapters](https://github.com/ggerganov/llama.cpp/pull/8332), replacing the old GGML format. I would love to see this feature extended to Ollama if it's possible. Currently, Ollama only supports GGML adapters as shown in [```modelfile.md```](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#adapter).
> The ADAPTER instruction is an optional instruction that specifies any LoRA adapter that should apply to the base model. The value of this instruction should be an absolute path or a path relative to the Modelfile and the file must be in a GGML file format. The adapter should be tuned from the base model otherwise the behaviour is undefined.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5788/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5788/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6102
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6102/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6102/comments
|
https://api.github.com/repos/ollama/ollama/issues/6102/events
|
https://github.com/ollama/ollama/pull/6102
| 2,440,578,534
|
PR_kwDOJ0Z1Ps53Bf3W
| 6,102
|
cmd: quantize progress
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-31T17:49:55
| 2024-11-21T09:51:09
| 2024-11-21T09:51:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6102",
"html_url": "https://github.com/ollama/ollama/pull/6102",
"diff_url": "https://github.com/ollama/ollama/pull/6102.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6102.patch",
"merged_at": null
}
|
New PR because the old one was again stuck rebasing.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6102/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8381
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8381/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8381/comments
|
https://api.github.com/repos/ollama/ollama/issues/8381/events
|
https://github.com/ollama/ollama/pull/8381
| 2,781,408,916
|
PR_kwDOJ0Z1Ps6HZi2n
| 8,381
|
Explicit mention `ollama serve` will start a server, friendly for new users
|
{
"login": "deephbz",
"id": 13776377,
"node_id": "MDQ6VXNlcjEzNzc2Mzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/13776377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deephbz",
"html_url": "https://github.com/deephbz",
"followers_url": "https://api.github.com/users/deephbz/followers",
"following_url": "https://api.github.com/users/deephbz/following{/other_user}",
"gists_url": "https://api.github.com/users/deephbz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deephbz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deephbz/subscriptions",
"organizations_url": "https://api.github.com/users/deephbz/orgs",
"repos_url": "https://api.github.com/users/deephbz/repos",
"events_url": "https://api.github.com/users/deephbz/events{/privacy}",
"received_events_url": "https://api.github.com/users/deephbz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-10T23:29:20
| 2025-01-10T23:29:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8381",
"html_url": "https://github.com/ollama/ollama/pull/8381",
"diff_url": "https://github.com/ollama/ollama/pull/8381.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8381.patch",
"merged_at": null
}
|
Different local LLM frameworks run differently: some run as a standalone binary, while others run as a server that serves API requests.
New users are coming to Ollama, and we could make this clear in the quick start tutorial.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8381/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3696
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3696/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3696/comments
|
https://api.github.com/repos/ollama/ollama/issues/3696/events
|
https://github.com/ollama/ollama/pull/3696
| 2,247,896,624
|
PR_kwDOJ0Z1Ps5s5u3L
| 3,696
|
supported openbmb/minicpm-2b-dpo
|
{
"login": "hadoop2xu",
"id": 48076281,
"node_id": "MDQ6VXNlcjQ4MDc2Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/48076281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadoop2xu",
"html_url": "https://github.com/hadoop2xu",
"followers_url": "https://api.github.com/users/hadoop2xu/followers",
"following_url": "https://api.github.com/users/hadoop2xu/following{/other_user}",
"gists_url": "https://api.github.com/users/hadoop2xu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadoop2xu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadoop2xu/subscriptions",
"organizations_url": "https://api.github.com/users/hadoop2xu/orgs",
"repos_url": "https://api.github.com/users/hadoop2xu/repos",
"events_url": "https://api.github.com/users/hadoop2xu/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadoop2xu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-17T10:02:01
| 2024-05-09T18:09:16
| 2024-05-09T18:09:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3696",
"html_url": "https://github.com/ollama/ollama/pull/3696",
"diff_url": "https://github.com/ollama/ollama/pull/3696.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3696.patch",
"merged_at": null
}
|
Support openbmb/minicpm-2b-dpo
Usage: ollama run modelbest/minicpm-2b-dpo
Model link: https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3696/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3696/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7361
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7361/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7361/comments
|
https://api.github.com/repos/ollama/ollama/issues/7361/events
|
https://github.com/ollama/ollama/pull/7361
| 2,614,860,053
|
PR_kwDOJ0Z1Ps5_8UJt
| 7,361
|
Fix incremental build file deps
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-25T18:36:03
| 2024-10-25T18:50:48
| 2024-10-25T18:50:45
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7361",
"html_url": "https://github.com/ollama/ollama/pull/7361",
"diff_url": "https://github.com/ollama/ollama/pull/7361.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7361.patch",
"merged_at": "2024-10-25T18:50:45"
}
|
The common src/hdr defs should be in the common definitions, not GPU-specific ones.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7361/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4199
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4199/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4199/comments
|
https://api.github.com/repos/ollama/ollama/issues/4199/events
|
https://github.com/ollama/ollama/issues/4199
| 2,280,846,843
|
I_kwDOJ0Z1Ps6H8vX7
| 4,199
|
support llama 3 Moe
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 3
| 2024-05-06T13:08:32
| 2024-05-06T23:36:48
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
please support
QuantFactory/Meta-Llama-3-120B-Instruct-GGUF
raincandy-u/Llama-3-Aplite-Instruct-4x8B-GGUF-MoE
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4199/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/279
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/279/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/279/comments
|
https://api.github.com/repos/ollama/ollama/issues/279/events
|
https://github.com/ollama/ollama/issues/279
| 1,836,703,477
|
I_kwDOJ0Z1Ps5ted71
| 279
|
Files and folders in .ollama aren't getting cleaned up
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2023-08-04T12:59:21
| 2023-10-23T16:29:53
| 2023-10-23T16:29:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I created a sentiments modelfile, and the blobs and manifests folders were populated. Then I deleted that model; the files were removed, but the folders under manifests were not.
Then I noticed that sentiments uses orca and doesn't specify a SYSTEM instruction, so it inherits it from orca. I updated the modelfile to have a blank SYSTEM instruction and created it again. Now in blobs I have a new layer with 0 size, as expected; the only model that uses this empty system prompt is sentiments.
I further updated the modelfile to pull the system prompt out of the template and into the SYSTEM instruction, and created it again. Now the manifest references the new system prompt digest in blobs, but even though no modelfile uses the empty system prompt, it is still there.
I ran `ollama rm sentiments`, and all the current layers in sentiments were deleted, but that empty system prompt remains.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/279/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1888
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1888/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1888/comments
|
https://api.github.com/repos/ollama/ollama/issues/1888/events
|
https://github.com/ollama/ollama/issues/1888
| 2,073,926,657
|
I_kwDOJ0Z1Ps57nZwB
| 1,888
|
nvmlInit_v2 unable to detect Nvidia GPU in WSL
|
{
"login": "taweili",
"id": 6722,
"node_id": "MDQ6VXNlcjY3MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taweili",
"html_url": "https://github.com/taweili",
"followers_url": "https://api.github.com/users/taweili/followers",
"following_url": "https://api.github.com/users/taweili/following{/other_user}",
"gists_url": "https://api.github.com/users/taweili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taweili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taweili/subscriptions",
"organizations_url": "https://api.github.com/users/taweili/orgs",
"repos_url": "https://api.github.com/users/taweili/repos",
"events_url": "https://api.github.com/users/taweili/events{/privacy}",
"received_events_url": "https://api.github.com/users/taweili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-01-10T09:16:33
| 2024-01-10T23:21:58
| 2024-01-10T23:21:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ollama has switched to using [NVML](https://developer.nvidia.com/nvidia-management-library-nvml) to detect the Nvidia environment. However, this method failed on WSL. Here is a short C code to validate the behavior.
The `nvmlReturn_t` returns 9 [NVML_ERROR_DRIVER_NOT_LOADED = 9](https://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceEnumvs.html#group__nvmlDeviceEnumvs_1g06fa9b5de08c6cc716fbf565e93dd3d0). This may make sense given Nvidia's WSL implementation, which uses the driver from the Windows host, but I can't find any documentation on this one way or the other.
This issue prevents Ollama v0.1.18 and 0.1.19 from using Nvidia hardware in WSL.
```c
#include <stdio.h>
#include "gpu_info_cuda.h"  // Ollama's internal NVML wrapper

cuda_init_resp_t resp;
mem_info_t mem_info;

int main(void)
{
    nvmlReturn_t ret;
    cuda_init(&resp);        // loads libnvidia-ml and resolves the NVML symbols
    ret = resp.ch.initFn();  // calls nvmlInit_v2()
    printf("%d\n", ret);     // prints 9 (NVML_ERROR_DRIVER_NOT_LOADED) on WSL
    return 0;
}
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1888/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8602
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8602/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8602/comments
|
https://api.github.com/repos/ollama/ollama/issues/8602/events
|
https://github.com/ollama/ollama/issues/8602
| 2,812,269,390
|
I_kwDOJ0Z1Ps6nn9NO
| 8,602
|
Deepseek-R1 671B - Segmentation Fault Bug
|
{
"login": "Notbici",
"id": 196611455,
"node_id": "U_kgDOC7gNfw",
"avatar_url": "https://avatars.githubusercontent.com/u/196611455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Notbici",
"html_url": "https://github.com/Notbici",
"followers_url": "https://api.github.com/users/Notbici/followers",
"following_url": "https://api.github.com/users/Notbici/following{/other_user}",
"gists_url": "https://api.github.com/users/Notbici/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Notbici/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Notbici/subscriptions",
"organizations_url": "https://api.github.com/users/Notbici/orgs",
"repos_url": "https://api.github.com/users/Notbici/repos",
"events_url": "https://api.github.com/users/Notbici/events{/privacy}",
"received_events_url": "https://api.github.com/users/Notbici/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 3
| 2025-01-27T07:27:36
| 2025-01-28T11:09:17
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I've been using the Deepseek-R1 671B model from Ollama on my 8x H100 machine and keep running into a segmentation fault; it happens more frequently as the context grows.
I'm using the latest Ollama release.
Hardware Specs:
- 8x H100 - 80GB SXM
- Xeon Platinum 8468 (160c)
- Micron 7450 SSD
- 1548 GB of RAM
- OS: Ubuntu 22.04
- CUDA: 12.6
- NVIDIA driver: 560.35.05
Happy to test params or gather more data, I'm having a hard time working around this. The distilled models like the deepseek llama 70B work just fine.
[server.err.log](https://github.com/user-attachments/files/18554764/server.err.log)
Any advice is appreciated.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8602/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7339
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7339/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7339/comments
|
https://api.github.com/repos/ollama/ollama/issues/7339/events
|
https://github.com/ollama/ollama/issues/7339
| 2,610,352,358
|
I_kwDOJ0Z1Ps6bltDm
| 7,339
|
Error: an unknown error was encountered while running the model
|
{
"login": "ipsmile",
"id": 28075439,
"node_id": "MDQ6VXNlcjI4MDc1NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/28075439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ipsmile",
"html_url": "https://github.com/ipsmile",
"followers_url": "https://api.github.com/users/ipsmile/followers",
"following_url": "https://api.github.com/users/ipsmile/following{/other_user}",
"gists_url": "https://api.github.com/users/ipsmile/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ipsmile/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ipsmile/subscriptions",
"organizations_url": "https://api.github.com/users/ipsmile/orgs",
"repos_url": "https://api.github.com/users/ipsmile/repos",
"events_url": "https://api.github.com/users/ipsmile/events{/privacy}",
"received_events_url": "https://api.github.com/users/ipsmile/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-10-24T04:05:02
| 2024-10-31T18:20:08
| 2024-10-31T18:20:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
$ ollama run deepseek-coder-v2
pulling manifest
pulling 5ff0abeeac1d... 100% ▕████████████████▏ 8.9 GB
pulling 22091531faf0... 100% ▕████████████████▏ 705 B
pulling 4bb71764481f... 100% ▕████████████████▏ 13 KB
pulling 1c8f573e830c... 100% ▕████████████████▏ 1.1 KB
pulling 19f2fb9e8bc6... 100% ▕████████████████▏ 32 B
pulling 34488e453cfe... 100% ▕████████████████▏ 568 B
verifying sha256 digest
writing manifest
success
>>> provide a line by line analysis for the following code: require("inc/inc.Set
... tings.php");
...
... if(true) {
... require_once("inc/inc.Utils.php");
... require_once("inc/inc.LogInit.php");
... require_once("inc/inc.Language.php");
... require_once("inc/inc.Init.php");
... require_once("inc/inc.Extension.php");
... require_once("inc/inc.DBInit.php");
...
... $c = new \Slim\Container(); //Create Your container
... $c['notFoundHandler'] = function ($c) use ($settings, $dms) {
... return function ($request, $response) use ($c, $settings, $d
... ms) {
... $uri = $request->getUri();
... if($uri->getBasePath())
... $file = $uri->getPath();
... else
... $file = substr($uri->getPath(), 1);
... if(file_exists($file) && is_file($file)) {
... $_SERVER['SCRIPT_FILENAME'] = basename($file
... );
... // include($file);
... exit;
... }
... if($request->isXhr()) {
... exit;
... }
... // print_r($request->getUri());
... // exit;
... return $c['response']
... ->withStatus(302)
... ->withHeader('Location', isset($settings->_s
... iteDefaultPage) && strlen($settings->_siteDefaultPage)>0 ? $settings->_httpR
... oot.$settings->_siteDefaultPage : $settings->_httpRoot."out/out.ViewFolder.p
... hp");
... };
... };
... $app = new \Slim\App($c);
... $container = $app->getContainer();
... $container['dms'] = $dms;
... $container['config'] = $settings;
... $container['conversionmgr'] = $conversionmgr;
... $container['logger'] = $logger;
... $container['fulltextservice'] = $fulltextservice;
... $container['notifier'] = $notifier;
... $container['authenticator'] = $authenticator;
...
... if(isset($GLOBALS['SEEDDMS_HOOKS']['initDMS'])) {
... foreach($GLOBALS['SEEDDMS_HOOKS']['initDMS'] as $hookObj) {
... if (method_exists($hookObj, 'addRoute')) {
... $hookObj->addRoute(array('dms'=>$dms, 'app'=
... >$app, 'settings'=>$settings, 'conversionmgr'=>$conversionmgr, 'authenticato
... r'=>$authenticator, 'fulltextservice'=>$fulltextservice, 'logger'=>$logger))
... ;
... // } else {
... // include("inc/inc.Authentication.php");
... // if (method_exists($hookObj, 'addRouteAfter
... Authentication')) {
... // $hookObj->addRouteAfterAuthenticat
... ion(array('dms'=>$dms, 'app'=>$app, 'settings'=>$settings, 'user'=>$user));
... // }
... }
... }
... }
...
... /*
... $app->get('/out/[{path:.*}]', function($request, $response, $path =
... null) use ($app) {
... $uri = $request->getUri();
... if($uri->getBasePath())
... $file = $uri->getPath();
... else
... $file = substr($uri->getPath(), 1);
... if(file_exists($file) && is_file($file)) {
... $_SERVER['SCRIPT_FILENAME'] = basename($file);
... include($file);
... exit;
... }
... });
... */
...
... $app->run();
... } else {
...
... header("Location: ". (isset($settings->_siteDefaultPage) && strlen($
... settings->_siteDefaultPage)>0 ? $settings->_siteDefaultPage : "out/out.ViewF
... older.php"));
...
Here's a line-by-line analysis of the provided PHP code snippet:
```php
require("inc/inc.Settings.php"); // This line includes the Settings file
which is necessary for the script to function correctly, as it likely
contains configuration settings or global variables used throughout the
application.
if(true) { // The if statement always evaluates to true due to its
condition being a constant boolean value 'true'. Therefore, the code
inside this block will execute every time the script is run.
require_once("inc/inc.Utils.php"); // This line includes the Utils
file which might contain utility functions that can be used throughout the
application. The `require_once` ensures the file is only included once
even if it's called multiple times in the script.
require_once("inc/inc.LogInit.php"); // Similar to above, this line
includes a log initialization file which might set up logging mechanisms
for error tracking or other informational outputs.
require_once("inc/inc.Language.php"); // This includes a language
configuration file which could be used for internationalization and
localization purposes in the application.
require_once("inc/inc.Init.php"); // Another initialization file,
possibly setting up initial configurations or environment settings needed
by various parts of the script.
require_once("inc/inc.Extension.php"); // This line includes a file
that might contain extensions or additional functionalities provided as
modules to enhance the application's capabilities.
require_once("inc/inc.DBInit.php"); // This includes database
initialization code, possibly setting up database connections and
configurations used throughout the application.
$c = new \Slim\Container(); // Creates a new container instance using
the Slim framework's Container class. A container is typically used for
dependency injection in frameworks like Slim.
$c['notFoundHandler'] = function ($c) use ($settings, $dms) { //
Defines an anonymous function as the not found handler for requests made
to routes that do not exist or are not mapped properly. It uses a closure
to capture the settings and dms variables from the parent scope.
return function ($request, $response) use ($c, $settings, $dms) {
// The anonymous function returns another anonymous function which is
triggered when a 404 error occurs. This inner function captures request,
response, container, settings, and dms as well.
$uri = $request->getUri(); // Retrieves the URI object from
the Slim request to get details about the current request URL.
if($uri->getBasePath()) // Checks if there is a base path set
in the URI, possibly for routing purposes.
$file = $uri->getPath(); // If a base path exists, it uses
the entire path; otherwise, it takes the path starting from index 1 to
avoid leading slash issues.
else
$file = substr($uri->getPath(), 1); // Retrieves the path
and skips the first character (usually a slash) if no base path is set.
if(file_exists($file) && is_file($file)) { // Checks if the
file exists in the specified path and whether it's a regular file (not
directory).
$_SERVER['SCRIPT_FILENAME'] = basename($file); // If the
file exists, sets the SCRIPT_FILENAME server variable to the name of the
file.
// include($file); // This line would
normally be used to include and execute the PHP code contained in the
found file but has been commented out here.
exit; // Terminates the script execution after including
or processing the file.
}
if($request->isXhr()) { // Checks if the request is an AJAX
(XMLHttpRequest) type.
exit; // If it's an AJAX request, terminates the script
execution immediately.
}
// print_r($request->getUri()); // This line would
normally be used to debug by printing the URI object for debugging
purposes but has been commented out here.
// exit; // Terminates the script after possibly
outputting debug information.
return $c['response'] // If none of the above conditions are
met, returns a redirect response to the default page or an error handling
page based on configuration settings.
->withStatus(302) // Sets the HTTP status code to 302
(Found), which means temporarily moved to another URL.
->withHeader('Location',
isset($settings->_siteDefaultPage) &&
strlen($settings->_siteDefaultPage)>0 ?
$settings->_httpRoot.$settings->_siteDefaultPage :
$settings->_httpRoot."out/out.ViewFolder.php"); // Sets the header for
redirection, using the configured default page or a fallback if not set.
};
};
$app = new \Slim\App($c); // Creates a new Slim application instance
with the previously created container as its dependency injection
container.
$container = $app->getContainer(); // Retrieves and assigns to
`$container` the current application's service container, which includes
various components like settings, database connections, etc.
$container['dms'] = $dms; // Assigns the dms object (presumably a
dependency) to the 'dms' key in the Slim container for easy access
throughout the app.
Error: an unknown error was encountered while running the model
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.13
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7339/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6874
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6874/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6874/comments
|
https://api.github.com/repos/ollama/ollama/issues/6874/events
|
https://github.com/ollama/ollama/issues/6874
| 2,535,612,824
|
I_kwDOJ0Z1Ps6XImGY
| 6,874
|
Unable to pull models behind the proxy on windows
|
{
"login": "WeiguangHan",
"id": 109776541,
"node_id": "U_kgDOBosOnQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109776541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WeiguangHan",
"html_url": "https://github.com/WeiguangHan",
"followers_url": "https://api.github.com/users/WeiguangHan/followers",
"following_url": "https://api.github.com/users/WeiguangHan/following{/other_user}",
"gists_url": "https://api.github.com/users/WeiguangHan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WeiguangHan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WeiguangHan/subscriptions",
"organizations_url": "https://api.github.com/users/WeiguangHan/orgs",
"repos_url": "https://api.github.com/users/WeiguangHan/repos",
"events_url": "https://api.github.com/users/WeiguangHan/events{/privacy}",
"received_events_url": "https://api.github.com/users/WeiguangHan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-09-19T08:11:57
| 2024-09-20T23:34:26
| 2024-09-20T23:34:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
PS C:\Users\Administrator> set HTTPS_PROXY=http://child-prc.intel.com:913
PS C:\Users\Administrator> set https_proxy=http://child-prc.intel.com:913
PS C:\Users\Administrator> ollama run qwen2.5:7b
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen2.5/manifests/7b": dial tcp 172.67.182.229:443: i/o timeout
```
I set `https_proxy` but it doesn't work on my Windows machine. I have tried many times and it still doesn't work.
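One possible cause (an assumption, not confirmed from the logs): the transcript shows a PowerShell prompt, and `set NAME=value` is cmd.exe syntax — in PowerShell it does not create an environment variable. The PowerShell equivalent would be:

```powershell
# PowerShell syntax for environment variables (cmd.exe's `set` has no effect here)
$env:HTTPS_PROXY = "http://child-prc.intel.com:913"
ollama run qwen2.5:7b
```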
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6874/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/6874/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6963
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6963/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6963/comments
|
https://api.github.com/repos/ollama/ollama/issues/6963/events
|
https://github.com/ollama/ollama/pull/6963
| 2,548,755,009
|
PR_kwDOJ0Z1Ps58stLp
| 6,963
|
llama3.2 vision support
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 10
| 2024-09-25T18:57:57
| 2024-10-22T14:04:35
| 2024-10-18T23:12:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6963",
"html_url": "https://github.com/ollama/ollama/pull/6963",
"diff_url": "https://github.com/ollama/ollama/pull/6963.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6963.patch",
"merged_at": "2024-10-18T23:12:35"
}
|
Image processing routines for being able to run llama3.2.
This will need to be refactored at some point to support other multimodal models as well.
EDIT: This now includes all of the code for getting vision support to work, not just the image processing routines. It's still not 100%, but it's good enough to test out and kick the tires.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6963/reactions",
"total_count": 120,
"+1": 65,
"-1": 0,
"laugh": 0,
"hooray": 27,
"confused": 0,
"heart": 3,
"rocket": 25,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6963/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/443
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/443/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/443/comments
|
https://api.github.com/repos/ollama/ollama/issues/443/events
|
https://github.com/ollama/ollama/pull/443
| 1,874,156,921
|
PR_kwDOJ0Z1Ps5ZKnUl
| 443
|
windows: fix filepath bugs
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-30T18:16:07
| 2023-08-31T21:19:11
| 2023-08-31T21:19:10
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/443",
"html_url": "https://github.com/ollama/ollama/pull/443",
"diff_url": "https://github.com/ollama/ollama/pull/443.diff",
"patch_url": "https://github.com/ollama/ollama/pull/443.patch",
"merged_at": "2023-08-31T21:19:10"
}
|
List and Delete have the same issue: the path was constructed with Linux/macOS path separators, which does not work on Windows. This PR fixes and simplifies the code.
Fix `filenameWithPath`, which also assumes a Linux/macOS path separator when looking for `~`.
Use `filenameWithPath` to resolve adapter filepath
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/443/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7510
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7510/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7510/comments
|
https://api.github.com/repos/ollama/ollama/issues/7510/events
|
https://github.com/ollama/ollama/issues/7510
| 2,635,471,331
|
I_kwDOJ0Z1Ps6dFhnj
| 7,510
|
Add support for function call (response back) (message.role=tool)
|
{
"login": "RogerBarreto",
"id": 19890735,
"node_id": "MDQ6VXNlcjE5ODkwNzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/19890735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RogerBarreto",
"html_url": "https://github.com/RogerBarreto",
"followers_url": "https://api.github.com/users/RogerBarreto/followers",
"following_url": "https://api.github.com/users/RogerBarreto/following{/other_user}",
"gists_url": "https://api.github.com/users/RogerBarreto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RogerBarreto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RogerBarreto/subscriptions",
"organizations_url": "https://api.github.com/users/RogerBarreto/orgs",
"repos_url": "https://api.github.com/users/RogerBarreto/repos",
"events_url": "https://api.github.com/users/RogerBarreto/events{/privacy}",
"received_events_url": "https://api.github.com/users/RogerBarreto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 2
| 2024-11-05T13:29:06
| 2024-12-06T17:43:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# Add support for function call (response back)
1. Currently there's no support for sending the function call result back to the model using `role=tool` messages.
2. In the native API (not the OpenAI-compatible one), function tool calls have no associated `tool_call_id` identifier; it is present in the `openai` API, and it is important that it be available in both APIs.
> [!IMPORTANT]
> This ID is very important when providing the result back to the model (in a chat history where the same function was invoked multiple times with different results), so the model can reason about which result belongs to which call.

| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7510/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7510/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4724
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4724/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4724/comments
|
https://api.github.com/repos/ollama/ollama/issues/4724/events
|
https://github.com/ollama/ollama/issues/4724
| 2,325,983,681
|
I_kwDOJ0Z1Ps6Ko7HB
| 4,724
|
empty response
|
{
"login": "themw123",
"id": 80266862,
"node_id": "MDQ6VXNlcjgwMjY2ODYy",
"avatar_url": "https://avatars.githubusercontent.com/u/80266862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/themw123",
"html_url": "https://github.com/themw123",
"followers_url": "https://api.github.com/users/themw123/followers",
"following_url": "https://api.github.com/users/themw123/following{/other_user}",
"gists_url": "https://api.github.com/users/themw123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/themw123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/themw123/subscriptions",
"organizations_url": "https://api.github.com/users/themw123/orgs",
"repos_url": "https://api.github.com/users/themw123/repos",
"events_url": "https://api.github.com/users/themw123/events{/privacy}",
"received_events_url": "https://api.github.com/users/themw123/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-05-30T15:37:14
| 2025-01-25T17:25:51
| 2024-09-12T23:19:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am getting an empty response string with llama3:8b.
Different models like mistral-instruct are working fine.
Setup:
- windows 11
- newest ollama version
- llama3:8b(latest)
When the context gets too long (approximately after exchanging 20 question/answer pairs, since I append the history to every request), I get an empty response from time to time, in roughly 30% of responses.
Here is my code:
```py
import json
import requests

self.messages = [{"role": "system", "content": system_role}]
self.messages.append({"role": "user", "content": my_input_text})
url = "http://localhost:11434/api/chat"
data = {
"model": "llama3:8b",
"messages": self.messages,
"stream": True
}
response = requests.post(url, json=data)
for chunk in response.iter_lines():
if chunk:
chunk = chunk.decode('utf-8')
chunk = json.loads(chunk)
content = chunk['message']['content']
print(content, end="", flush=True)
yield content
```
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.39
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4724/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4724/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8683
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8683/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8683/comments
|
https://api.github.com/repos/ollama/ollama/issues/8683/events
|
https://github.com/ollama/ollama/issues/8683
| 2,819,701,999
|
I_kwDOJ0Z1Ps6oETzv
| 8,683
|
Support release build without AVX
|
{
"login": "yoonsio",
"id": 24367477,
"node_id": "MDQ6VXNlcjI0MzY3NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/24367477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoonsio",
"html_url": "https://github.com/yoonsio",
"followers_url": "https://api.github.com/users/yoonsio/followers",
"following_url": "https://api.github.com/users/yoonsio/following{/other_user}",
"gists_url": "https://api.github.com/users/yoonsio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoonsio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoonsio/subscriptions",
"organizations_url": "https://api.github.com/users/yoonsio/orgs",
"repos_url": "https://api.github.com/users/yoonsio/repos",
"events_url": "https://api.github.com/users/yoonsio/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoonsio/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2025-01-30T01:34:51
| 2025-01-30T02:13:47
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The release image fails to detect the GPU when running on a CPU that does not support AVX.
Please add a non-AVX release build to the release pipeline.
```
msg="Dynamic LLM libraries" runners="[cpu_avx cpu cpu_avx2]"
```
Custom image can be built by overriding `CUSTOM_CPU_FLAGS`.
#### Example:
```
docker build --platform linux/amd64 --build-arg VERSION=noavx --build-arg CUSTOM_CPU_FLAGS= -t ollama/ollama:noavx -f Dockerfile .
```
#### Relevant issue:
* https://github.com/ollama/ollama/issues/2187
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8683/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8683/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7431
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7431/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7431/comments
|
https://api.github.com/repos/ollama/ollama/issues/7431/events
|
https://github.com/ollama/ollama/pull/7431
| 2,625,485,479
|
PR_kwDOJ0Z1Ps6AdJ8D
| 7,431
|
Add Perfect Memory AI to community integrations
|
{
"login": "DariusKocar",
"id": 60488234,
"node_id": "MDQ6VXNlcjYwNDg4MjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/60488234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DariusKocar",
"html_url": "https://github.com/DariusKocar",
"followers_url": "https://api.github.com/users/DariusKocar/followers",
"following_url": "https://api.github.com/users/DariusKocar/following{/other_user}",
"gists_url": "https://api.github.com/users/DariusKocar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DariusKocar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DariusKocar/subscriptions",
"organizations_url": "https://api.github.com/users/DariusKocar/orgs",
"repos_url": "https://api.github.com/users/DariusKocar/repos",
"events_url": "https://api.github.com/users/DariusKocar/events{/privacy}",
"received_events_url": "https://api.github.com/users/DariusKocar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-30T22:22:53
| 2024-11-17T23:19:26
| 2024-11-17T23:19:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7431",
"html_url": "https://github.com/ollama/ollama/pull/7431",
"diff_url": "https://github.com/ollama/ollama/pull/7431.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7431.patch",
"merged_at": "2024-11-17T23:19:26"
}
|
I added Perfect Memory AI to community integrations.
Perfect Memory uses Ollama as an AI provider for offline inference. https://www.perfectmemory.ai/support/ai-assistant/ollama-setup
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7431/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2195
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2195/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2195/comments
|
https://api.github.com/repos/ollama/ollama/issues/2195/events
|
https://github.com/ollama/ollama/pull/2195
| 2,101,357,601
|
PR_kwDOJ0Z1Ps5lHcg9
| 2,195
|
Ignore AMD integrated GPUs
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-01-26T00:02:11
| 2024-07-02T04:08:16
| 2024-01-26T17:30:24
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2195",
"html_url": "https://github.com/ollama/ollama/pull/2195",
"diff_url": "https://github.com/ollama/ollama/pull/2195.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2195.patch",
"merged_at": "2024-01-26T17:30:24"
}
|
Fixes #2054
Integrated GPUs (APUs) from AMD may be reported by ROCm, but we can't run on them with our current llama.cpp configuration. These iGPUs report 512M of memory, so I've coded the check to ignore any ROCm reported GPU that has less than 1G of memory. If we detect only an integrated GPU, this will fallback to CPU mode. If we detect multiple ROCm GPUs, meaning one or more are discrete, and one is integrated, we'll now set `ROCR_VISIBLE_DEVICES` so we ignore the iGPU. If the user has explicitly set `ROCR_VISIBLE_DEVICES` we'll respect their setting.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2195/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2195/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6753
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6753/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6753/comments
|
https://api.github.com/repos/ollama/ollama/issues/6753/events
|
https://github.com/ollama/ollama/issues/6753
| 2,519,594,795
|
I_kwDOJ0Z1Ps6WLfcr
| 6,753
|
`image_url` support for vision models
|
{
"login": "madroidmaq",
"id": 6247142,
"node_id": "MDQ6VXNlcjYyNDcxNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6247142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madroidmaq",
"html_url": "https://github.com/madroidmaq",
"followers_url": "https://api.github.com/users/madroidmaq/followers",
"following_url": "https://api.github.com/users/madroidmaq/following{/other_user}",
"gists_url": "https://api.github.com/users/madroidmaq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madroidmaq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madroidmaq/subscriptions",
"organizations_url": "https://api.github.com/users/madroidmaq/orgs",
"repos_url": "https://api.github.com/users/madroidmaq/repos",
"events_url": "https://api.github.com/users/madroidmaq/events{/privacy}",
"received_events_url": "https://api.github.com/users/madroidmaq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 3
| 2024-09-11T12:20:03
| 2024-11-25T21:18:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
curl:
```shell
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer OPENAI_API_KEY" \
-d '{
"model": "minicpm-v:8b-2.6-fp16",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What’s in this image?"
},
{
"type": "image_url",
"image_url": {
"url": "http://images.cocodataset.org/val2017/000000039769.jpg"
}
}
]
}
],
"max_tokens": 300
}'
```
response:
```json
{
"error": {
"message": "invalid image input",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.10
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6753/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6753/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6682
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6682/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6682/comments
|
https://api.github.com/repos/ollama/ollama/issues/6682/events
|
https://github.com/ollama/ollama/pull/6682
| 2,511,323,740
|
PR_kwDOJ0Z1Ps56ttgy
| 6,682
|
Remove go server debug logging
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-06T23:47:54
| 2024-09-07T00:05:14
| 2024-09-07T00:05:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6682",
"html_url": "https://github.com/ollama/ollama/pull/6682",
"diff_url": "https://github.com/ollama/ollama/pull/6682.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6682.patch",
"merged_at": "2024-09-07T00:05:13"
}
| null |
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6682/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5319
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5319/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5319/comments
|
https://api.github.com/repos/ollama/ollama/issues/5319/events
|
https://github.com/ollama/ollama/issues/5319
| 2,377,608,378
|
I_kwDOJ0Z1Ps6Nt2y6
| 5,319
|
Fine-tuned model responding incorrectly to my prompts
|
{
"login": "giannisak",
"id": 154079765,
"node_id": "U_kgDOCS8SFQ",
"avatar_url": "https://avatars.githubusercontent.com/u/154079765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giannisak",
"html_url": "https://github.com/giannisak",
"followers_url": "https://api.github.com/users/giannisak/followers",
"following_url": "https://api.github.com/users/giannisak/following{/other_user}",
"gists_url": "https://api.github.com/users/giannisak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/giannisak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giannisak/subscriptions",
"organizations_url": "https://api.github.com/users/giannisak/orgs",
"repos_url": "https://api.github.com/users/giannisak/repos",
"events_url": "https://api.github.com/users/giannisak/events{/privacy}",
"received_events_url": "https://api.github.com/users/giannisak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-06-27T09:14:30
| 2024-09-16T18:51:00
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm having an issue with my fine-tuned model. It doesn't respond to my prompts correctly and instead generates unrelated outputs.
It seems like the model is making up its own user input, then replying to this instead of my actual input.
## Example:
### My Input:
`Hi! Who are you?`
### Response:
```
# User
Write an article about the instruction of 2-HYDROXYPYRIMIDINE-5-BORONIC ACID, PINACOL ESTER 2000 words in the chemical industry.
# Assistant
## Introduction
In the realm of modern organic synthesis, boronic acids and their derivatives have emerged as indispensable reagents (... etc)
```
## ModelFile:
```
FROM path/finetuned_phi3_medium_4k_Q4_K_M.gguf
TEMPLATE """{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|end|>
{{ end }}<|assistant|>
{{ .Response }}<|end|>"""
PARAMETER stop "<|end|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
```
I am using the phi3 template. I have tried various templates and added system prompts, but the problem persists.
A few days ago, I ran the model in Ollama and it gave good responses, but now the same model exhibits this behavior. Running the same gguf in Jan, the responses are okay. I updated to the current Ollama version and recreated the model, but I can't fix it.
Thanks in advance!
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.46
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5319/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/5319/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4891
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4891/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4891/comments
|
https://api.github.com/repos/ollama/ollama/issues/4891/events
|
https://github.com/ollama/ollama/issues/4891
| 2,339,461,478
|
I_kwDOJ0Z1Ps6LcVlm
| 4,891
|
Under NVIDIA's latest driver: version 555.99, any model will only run on the CPU.
|
{
"login": "despairTK",
"id": 111871110,
"node_id": "U_kgDOBqsEhg",
"avatar_url": "https://avatars.githubusercontent.com/u/111871110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/despairTK",
"html_url": "https://github.com/despairTK",
"followers_url": "https://api.github.com/users/despairTK/followers",
"following_url": "https://api.github.com/users/despairTK/following{/other_user}",
"gists_url": "https://api.github.com/users/despairTK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/despairTK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/despairTK/subscriptions",
"organizations_url": "https://api.github.com/users/despairTK/orgs",
"repos_url": "https://api.github.com/users/despairTK/repos",
"events_url": "https://api.github.com/users/despairTK/events{/privacy}",
"received_events_url": "https://api.github.com/users/despairTK/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-06-07T02:29:55
| 2024-06-07T03:18:13
| 2024-06-07T03:18:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I updated to the latest NVIDIA driver: version 555.99, any model would only run on the CPU and the GPU would not work at all.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.41
|
{
"login": "despairTK",
"id": 111871110,
"node_id": "U_kgDOBqsEhg",
"avatar_url": "https://avatars.githubusercontent.com/u/111871110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/despairTK",
"html_url": "https://github.com/despairTK",
"followers_url": "https://api.github.com/users/despairTK/followers",
"following_url": "https://api.github.com/users/despairTK/following{/other_user}",
"gists_url": "https://api.github.com/users/despairTK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/despairTK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/despairTK/subscriptions",
"organizations_url": "https://api.github.com/users/despairTK/orgs",
"repos_url": "https://api.github.com/users/despairTK/repos",
"events_url": "https://api.github.com/users/despairTK/events{/privacy}",
"received_events_url": "https://api.github.com/users/despairTK/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4891/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1467
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1467/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1467/comments
|
https://api.github.com/repos/ollama/ollama/issues/1467/events
|
https://github.com/ollama/ollama/issues/1467
| 2,036,064,737
|
I_kwDOJ0Z1Ps55W-Hh
| 1,467
|
REST API : /api/chat endpoint not working
|
{
"login": "slovanos",
"id": 48527469,
"node_id": "MDQ6VXNlcjQ4NTI3NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/48527469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slovanos",
"html_url": "https://github.com/slovanos",
"followers_url": "https://api.github.com/users/slovanos/followers",
"following_url": "https://api.github.com/users/slovanos/following{/other_user}",
"gists_url": "https://api.github.com/users/slovanos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slovanos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slovanos/subscriptions",
"organizations_url": "https://api.github.com/users/slovanos/orgs",
"repos_url": "https://api.github.com/users/slovanos/repos",
"events_url": "https://api.github.com/users/slovanos/events{/privacy}",
"received_events_url": "https://api.github.com/users/slovanos/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-11T16:32:30
| 2024-03-30T19:19:37
| 2023-12-11T16:58:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Referring to the examples on the main page:
## Generate a response: Works perfectly
```
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt":"Why is the sky blue?"
}'
```
## Chat with a model: Not Working
### Response is "404 page not found"
```
curl http://localhost:11434/api/chat -d '{
"model": "llama2",
"messages": [
{ "role": "user", "content": "why is the sky blue?" }
]
}'
```
I tried restarting the server and also tried the mistral model, with the same result.
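At the time this issue was filed, the usual cause of a 404 on `/api/chat` was a server binary predating the release that introduced the endpoint (0.1.14, to the best of my recollection; treat that exact number as an assumption). A sketch of a version check using `sort -V`, which orders version strings numerically:

```shell
# Compare the server's reported version (e.g. from `ollama -v` or
# `curl -s localhost:11434/api/version`) against the assumed first
# release shipping /api/chat. Restarting does not help; the binary
# itself has to be upgraded.
server="0.1.13"      # substitute the version your server reports
needed="0.1.14"      # assumption: first release with /api/chat
lowest=$(printf '%s\n%s\n' "$server" "$needed" | sort -V | head -n1)
if [ "$lowest" = "$server" ] && [ "$server" != "$needed" ]; then
  echo "server too old: upgrade to $needed or later"
else
  echo "server new enough for /api/chat"
fi
```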
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1467/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3932
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3932/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3932/comments
|
https://api.github.com/repos/ollama/ollama/issues/3932/events
|
https://github.com/ollama/ollama/issues/3932
| 2,264,970,719
|
I_kwDOJ0Z1Ps6HALXf
| 3,932
|
ERROR: NO SUCH HOST
|
{
"login": "Jinish2170",
"id": 121560356,
"node_id": "U_kgDOBz7dJA",
"avatar_url": "https://avatars.githubusercontent.com/u/121560356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jinish2170",
"html_url": "https://github.com/Jinish2170",
"followers_url": "https://api.github.com/users/Jinish2170/followers",
"following_url": "https://api.github.com/users/Jinish2170/following{/other_user}",
"gists_url": "https://api.github.com/users/Jinish2170/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jinish2170/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jinish2170/subscriptions",
"organizations_url": "https://api.github.com/users/Jinish2170/orgs",
"repos_url": "https://api.github.com/users/Jinish2170/repos",
"events_url": "https://api.github.com/users/Jinish2170/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jinish2170/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-04-26T05:05:29
| 2024-05-30T06:26:50
| 2024-05-01T21:24:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
pulling manifest
Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/97/970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240426%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240426T050008Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=2b58511832acfa13fc84fcc46b1158ae5aea40ad8680493fbef6e1e9929f0ca0": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
version is 0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3932/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4076
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4076/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4076/comments
|
https://api.github.com/repos/ollama/ollama/issues/4076/events
|
https://github.com/ollama/ollama/issues/4076
| 2,273,380,884
|
I_kwDOJ0Z1Ps6HgQoU
| 4,076
|
MoonDream:Latest Not Working
|
{
"login": "rb81",
"id": 48117105,
"node_id": "MDQ6VXNlcjQ4MTE3MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/48117105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rb81",
"html_url": "https://github.com/rb81",
"followers_url": "https://api.github.com/users/rb81/followers",
"following_url": "https://api.github.com/users/rb81/following{/other_user}",
"gists_url": "https://api.github.com/users/rb81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rb81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rb81/subscriptions",
"organizations_url": "https://api.github.com/users/rb81/orgs",
"repos_url": "https://api.github.com/users/rb81/repos",
"events_url": "https://api.github.com/users/rb81/events{/privacy}",
"received_events_url": "https://api.github.com/users/rb81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-01T11:48:03
| 2024-10-30T15:19:06
| 2024-05-01T18:20:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When running moondream:latest, the following error message is received:
```
Error: llama runner process no longer running: -1
```
Tried running the model from the CLI using `ollama serve` as well as the desktop application.
Tried using the model from the CLI as well as Open WebUI. Same result for both.
(Maybe related to: https://github.com/ollama/ollama/issues/4063)
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4076/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7183
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7183/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7183/comments
|
https://api.github.com/repos/ollama/ollama/issues/7183/events
|
https://github.com/ollama/ollama/issues/7183
| 2,582,817,318
|
I_kwDOJ0Z1Ps6Z8qom
| 7,183
|
Failed to update all the models downloaded locally
|
{
"login": "qzc438",
"id": 61488260,
"node_id": "MDQ6VXNlcjYxNDg4MjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/61488260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qzc438",
"html_url": "https://github.com/qzc438",
"followers_url": "https://api.github.com/users/qzc438/followers",
"following_url": "https://api.github.com/users/qzc438/following{/other_user}",
"gists_url": "https://api.github.com/users/qzc438/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qzc438/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qzc438/subscriptions",
"organizations_url": "https://api.github.com/users/qzc438/orgs",
"repos_url": "https://api.github.com/users/qzc438/repos",
"events_url": "https://api.github.com/users/qzc438/events{/privacy}",
"received_events_url": "https://api.github.com/users/qzc438/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-10-12T10:51:34
| 2024-10-13T04:57:23
| 2024-10-13T04:57:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I run this command with the latest version of Ollama: `ollama list | cut -f 1 | tail -n +2 | xargs -n 1 ollama pull`
I get an error message:
```
pulling manifest
Error: pull model manifest: file does not exist
```
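A likely culprit (an assumption on my part, not a confirmed diagnosis) is that `cut -f 1` hands `ollama pull` a padded or malformed name, since `ollama list` pads its columns with spaces rather than single tabs. A sketch of a more robust extraction using `awk`, which splits fields on any run of whitespace:

```shell
# Simulated `ollama list` output (space-padded, header included),
# piped through awk to pull out just the NAME column, skipping the
# header row and dropping all padding.
printf 'NAME            ID      SIZE\nllama2:latest   abc123  3.8 GB\n' \
  | awk 'NR > 1 { print $1 }'
# → llama2:latest
# Real usage: ollama list | awk 'NR>1 {print $1}' | xargs -n1 ollama pull
```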
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7183/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4781
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4781/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4781/comments
|
https://api.github.com/repos/ollama/ollama/issues/4781/events
|
https://github.com/ollama/ollama/issues/4781
| 2,329,591,935
|
I_kwDOJ0Z1Ps6K2sB_
| 4,781
|
ollama not show my model.
|
{
"login": "tuantupharma",
"id": 35091001,
"node_id": "MDQ6VXNlcjM1MDkxMDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/35091001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuantupharma",
"html_url": "https://github.com/tuantupharma",
"followers_url": "https://api.github.com/users/tuantupharma/followers",
"following_url": "https://api.github.com/users/tuantupharma/following{/other_user}",
"gists_url": "https://api.github.com/users/tuantupharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuantupharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuantupharma/subscriptions",
"organizations_url": "https://api.github.com/users/tuantupharma/orgs",
"repos_url": "https://api.github.com/users/tuantupharma/repos",
"events_url": "https://api.github.com/users/tuantupharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuantupharma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-02T10:53:12
| 2024-07-11T15:23:26
| 2024-07-11T15:23:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama works well with Chatbox, but after I installed Open WebUI it worked at first and then, after a few days, Ollama forgot all of my models. The model files are still on my SSD, but Ollama does not detect them. I pulled the models again, and once again Ollama stopped detecting them after two days.
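When models vanish from `ollama list` even though the files are still on disk, one hedged guess is that the server is resolving a different `OLLAMA_MODELS` directory than the one the models were pulled into (for example, a service account's home versus the user's). A sketch for checking which manifest tree the server would read:

```shell
# List the manifest files under the models directory Ollama would use.
# Default path assumption: ~/.ollama/models when OLLAMA_MODELS is unset.
dir="${OLLAMA_MODELS:-$HOME/.ollama/models}"
echo "model dir: $dir"
find "$dir/manifests" -type f 2>/dev/null
# No output here, while model files exist under some other path, suggests
# a mismatch between where models were pulled and where the server looks.
```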
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.41
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4781/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2442
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2442/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2442/comments
|
https://api.github.com/repos/ollama/ollama/issues/2442/events
|
https://github.com/ollama/ollama/issues/2442
| 2,128,334,996
|
I_kwDOJ0Z1Ps5-29CU
| 2,442
|
Error: unable to initialize llm library Radeon card detected, but permissions not set up properly. Either run ollama as root, or add you user account to the render group.
|
{
"login": "pladaria",
"id": 579417,
"node_id": "MDQ6VXNlcjU3OTQxNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/579417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pladaria",
"html_url": "https://github.com/pladaria",
"followers_url": "https://api.github.com/users/pladaria/followers",
"following_url": "https://api.github.com/users/pladaria/following{/other_user}",
"gists_url": "https://api.github.com/users/pladaria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pladaria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pladaria/subscriptions",
"organizations_url": "https://api.github.com/users/pladaria/orgs",
"repos_url": "https://api.github.com/users/pladaria/repos",
"events_url": "https://api.github.com/users/pladaria/events{/privacy}",
"received_events_url": "https://api.github.com/users/pladaria/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-02-10T11:29:38
| 2024-03-12T02:08:44
| 2024-03-11T23:31:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm unable to run ollama. My setup:
* OS: Linux
* CPU+GPU: AMD Ryzen 3 2200G with Radeon Vega Graphics
* GPU: nVidia Tesla P40 - 24GB RAM
```
$ ollama serve
time=2024-02-10T12:21:38.851+01:00 level=INFO source=images.go:863 msg="total blobs: 0"
time=2024-02-10T12:21:38.851+01:00 level=INFO source=images.go:870 msg="total unused blobs removed: 0"
time=2024-02-10T12:21:38.851+01:00 level=INFO source=routes.go:999 msg="Listening on 127.0.0.1:11434 (version 0.1.24)"
time=2024-02-10T12:21:38.851+01:00 level=INFO source=payload_common.go:106 msg="Extracting dynamic libraries..."
Error: unable to initialize llm library Radeon card detected, but permissions not set up properly. Either run ollama as root, or add you user account to the render group.
```
Same result using `sudo` or adding myself to the `render` group.
Also tried:
`OLLAMA_LLM_LIBRARY="cuda_v11" ollama serve`
`OLLAMA_LLM_LIBRARY="cpu_avx2" ollama serve`
with the same results.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2442/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4410
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4410/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4410/comments
|
https://api.github.com/repos/ollama/ollama/issues/4410/events
|
https://github.com/ollama/ollama/issues/4410
| 2,293,775,494
|
I_kwDOJ0Z1Ps6IuDyG
| 4,410
|
Inconsistent punctuation in `ollama serve -h`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-05-13T20:22:52
| 2024-05-13T22:30:47
| 2024-05-13T22:30:47
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
Environment Variables:
OLLAMA_HOST The host:port to bind to (default "127.0.0.1:11434")
OLLAMA_ORIGINS A comma separated list of allowed origins.
OLLAMA_MODELS The path to the models directory (default is "~/.ollama/models")
OLLAMA_KEEP_ALIVE The duration that models stay loaded in memory (default is "5m")
OLLAMA_DEBUG Set to 1 to enable additional debug logging
```
We should omit the trailing `.` in the `OLLAMA_ORIGINS` description.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4410/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4415
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4415/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4415/comments
|
https://api.github.com/repos/ollama/ollama/issues/4415/events
|
https://github.com/ollama/ollama/pull/4415
| 2,294,112,351
|
PR_kwDOJ0Z1Ps5vVETA
| 4,415
|
update the FAQ to be more clear about windows env variables
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-14T01:00:41
| 2024-05-14T01:01:14
| 2024-05-14T01:01:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4415",
"html_url": "https://github.com/ollama/ollama/pull/4415",
"diff_url": "https://github.com/ollama/ollama/pull/4415.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4415.patch",
"merged_at": "2024-05-14T01:01:13"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4415/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4576
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4576/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4576/comments
|
https://api.github.com/repos/ollama/ollama/issues/4576/events
|
https://github.com/ollama/ollama/issues/4576
| 2,310,708,658
|
I_kwDOJ0Z1Ps6Jup2y
| 4,576
|
Tried Agentic chucking using Ollama but got error
|
{
"login": "arunkumarm-git",
"id": 170125746,
"node_id": "U_kgDOCiPpsg",
"avatar_url": "https://avatars.githubusercontent.com/u/170125746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arunkumarm-git",
"html_url": "https://github.com/arunkumarm-git",
"followers_url": "https://api.github.com/users/arunkumarm-git/followers",
"following_url": "https://api.github.com/users/arunkumarm-git/following{/other_user}",
"gists_url": "https://api.github.com/users/arunkumarm-git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arunkumarm-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arunkumarm-git/subscriptions",
"organizations_url": "https://api.github.com/users/arunkumarm-git/orgs",
"repos_url": "https://api.github.com/users/arunkumarm-git/repos",
"events_url": "https://api.github.com/users/arunkumarm-git/events{/privacy}",
"received_events_url": "https://api.github.com/users/arunkumarm-git/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-22T14:30:20
| 2024-05-22T14:30:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Code:
```python
from langchain_community.llms import Ollama
from langchain.chains import create_extraction_chain_pydantic
from langchain_core.pydantic_v1 import BaseModel
from typing import Optional, List

llm = Ollama(model='llama3')

from langchain import hub
prompt = hub.pull("wfh/proposal-indexing")
runnable = prompt | llm

class Sentences(BaseModel):
    sentences: List[str]

# Extraction
extraction_chain = create_extraction_chain_pydantic(pydantic_schema=Sentences, llm=llm)

def get_propositions(text):
    runnable_output = runnable.invoke({
        "input": text
    }).content
    propositions = extraction_chain.run(runnable_output)[0].sentences
    return propositions

essay_propositions = []
for i, para in enumerate(pdf_text_document):
    propositions = get_propositions(para)
    essay_propositions.extend(propositions)
```
Error:
```
File g:\AI arbiter V2\env\Lib\site-packages\urllib3\connectionpool.py:715, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
    714 # Make the request on the httplib connection object.
--> 715 httplib_response = self._make_request(
    716     conn,
    717     method,
    718     url,
    719     timeout=timeout_obj,
    720     body=body,
    721     headers=headers,
    722     chunked=chunked,
    723 )
    725 # If we're going to release the connection in ``finally:``, then
    726 # the response doesn't need to know about the connection. Otherwise
    727 # it will also try to release it and we'll have a double-release
    728 # mess.
File g:\AI arbiter V2\env\Lib\site-packages\urllib3\connectionpool.py:467, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
    463 except BaseException as e:
    464     # Remove the TypeError from the exception chain in
    465     # Python 3 (including for exceptions like SystemExit).
    466     # Otherwise it looks like a bug in the code.
--> 467     six.raise_from(e, None)
    468 except (SocketTimeout, BaseSSLError, SocketError) as e:
...
    503 except MaxRetryError as e:
    504     if isinstance(e.reason, ConnectTimeoutError):
    505         # TODO: Remove this in 3.0.0: see #2811
ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))
```
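Since the failure is a `ConnectionResetError` against the Ollama HTTP endpoint, a reasonable first debugging step (a minimal sketch added here, not part of the original report) is to confirm the server is actually reachable before invoking the chain. The default endpoint `http://localhost:11434` is an assumption; adjust it if `OLLAMA_HOST` points elsewhere.

```python
import urllib.request
import urllib.error

def ollama_is_up(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an HTTP GET to the Ollama server succeeds.

    The default URL is Ollama's standard local endpoint; this is an
    assumption and may differ if OLLAMA_HOST is configured.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=5) as resp:
            # The Ollama root endpoint responds with 200 when the server is running.
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, reset, or timed out: server not reachable.
        return False
```

If this returns `False`, the problem is connectivity (server not started, wrong host/port, or a firewall) rather than anything in the langchain code.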
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.30
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4576/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1984
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1984/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1984/comments
|
https://api.github.com/repos/ollama/ollama/issues/1984/events
|
https://github.com/ollama/ollama/pull/1984
| 2,080,568,798
|
PR_kwDOJ0Z1Ps5kA--8
| 1,984
|
req
|
{
"login": "leotamminen",
"id": 122639748,
"node_id": "U_kgDOB09VhA",
"avatar_url": "https://avatars.githubusercontent.com/u/122639748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leotamminen",
"html_url": "https://github.com/leotamminen",
"followers_url": "https://api.github.com/users/leotamminen/followers",
"following_url": "https://api.github.com/users/leotamminen/following{/other_user}",
"gists_url": "https://api.github.com/users/leotamminen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leotamminen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leotamminen/subscriptions",
"organizations_url": "https://api.github.com/users/leotamminen/orgs",
"repos_url": "https://api.github.com/users/leotamminen/repos",
"events_url": "https://api.github.com/users/leotamminen/events{/privacy}",
"received_events_url": "https://api.github.com/users/leotamminen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-14T03:50:13
| 2024-01-14T03:50:32
| 2024-01-14T03:50:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1984",
"html_url": "https://github.com/ollama/ollama/pull/1984",
"diff_url": "https://github.com/ollama/ollama/pull/1984.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1984.patch",
"merged_at": null
}
| null |
{
"login": "leotamminen",
"id": 122639748,
"node_id": "U_kgDOB09VhA",
"avatar_url": "https://avatars.githubusercontent.com/u/122639748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leotamminen",
"html_url": "https://github.com/leotamminen",
"followers_url": "https://api.github.com/users/leotamminen/followers",
"following_url": "https://api.github.com/users/leotamminen/following{/other_user}",
"gists_url": "https://api.github.com/users/leotamminen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leotamminen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leotamminen/subscriptions",
"organizations_url": "https://api.github.com/users/leotamminen/orgs",
"repos_url": "https://api.github.com/users/leotamminen/repos",
"events_url": "https://api.github.com/users/leotamminen/events{/privacy}",
"received_events_url": "https://api.github.com/users/leotamminen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1984/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1305
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1305/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1305/comments
|
https://api.github.com/repos/ollama/ollama/issues/1305/events
|
https://github.com/ollama/ollama/issues/1305
| 2,014,737,519
|
I_kwDOJ0Z1Ps54FnRv
| 1,305
|
Flatpak package for Linux
|
{
"login": "rugk",
"id": 11966684,
"node_id": "MDQ6VXNlcjExOTY2Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/11966684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rugk",
"html_url": "https://github.com/rugk",
"followers_url": "https://api.github.com/users/rugk/followers",
"following_url": "https://api.github.com/users/rugk/following{/other_user}",
"gists_url": "https://api.github.com/users/rugk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rugk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rugk/subscriptions",
"organizations_url": "https://api.github.com/users/rugk/orgs",
"repos_url": "https://api.github.com/users/rugk/repos",
"events_url": "https://api.github.com/users/rugk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rugk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-11-28T15:40:53
| 2023-12-05T20:18:58
| 2023-11-28T21:55:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be nice if you could publish this as a [flatpak](https://flatpak.org/) on [flathub](https://flathub.org/), for example.
Flatpaks are a newer software distribution mechanism for Linux: they can be installed on any distro and are easy to both install _and_ update.
Also, publishing on _Flathub_ may grow your user base: many distros include it as a common software source, so your app can be discovered more easily.
Here is [how to get started](http://docs.flatpak.org/en/latest/getting-started.html).
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1305/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1305/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/469
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/469/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/469/comments
|
https://api.github.com/repos/ollama/ollama/issues/469/events
|
https://github.com/ollama/ollama/pull/469
| 1,882,647,556
|
PR_kwDOJ0Z1Ps5ZnDAK
| 469
|
metal: add missing barriers for mul-mat
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-05T20:08:44
| 2023-09-05T23:37:14
| 2023-09-05T23:37:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/469",
"html_url": "https://github.com/ollama/ollama/pull/469",
"diff_url": "https://github.com/ollama/ollama/pull/469.diff",
"patch_url": "https://github.com/ollama/ollama/pull/469.patch",
"merged_at": "2023-09-05T23:37:13"
}
|
port https://github.com/ggerganov/llama.cpp/pull/2699 to fix null response on generate
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/469/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2682
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2682/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2682/comments
|
https://api.github.com/repos/ollama/ollama/issues/2682/events
|
https://github.com/ollama/ollama/issues/2682
| 2,149,357,434
|
I_kwDOJ0Z1Ps6AHJd6
| 2,682
|
Windows - Serve Mode - Need to Ctrl-C or Right Click the CMD prompt from time to time to keep things moving
|
{
"login": "Shawneau",
"id": 51348013,
"node_id": "MDQ6VXNlcjUxMzQ4MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/51348013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shawneau",
"html_url": "https://github.com/Shawneau",
"followers_url": "https://api.github.com/users/Shawneau/followers",
"following_url": "https://api.github.com/users/Shawneau/following{/other_user}",
"gists_url": "https://api.github.com/users/Shawneau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shawneau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shawneau/subscriptions",
"organizations_url": "https://api.github.com/users/Shawneau/orgs",
"repos_url": "https://api.github.com/users/Shawneau/repos",
"events_url": "https://api.github.com/users/Shawneau/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shawneau/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-02-22T15:40:30
| 2024-03-12T00:14:53
| 2024-03-12T00:14:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm running Open WebUI, and every once in a while Ollama's cmd prompt in serve mode just stops doing anything. It's not a crash (the process is still up), but I need to press Ctrl-C or right-click in the window to get it moving again. Any idea why?
<img width="537" alt="image" src="https://github.com/ollama/ollama/assets/51348013/9116654d-2558-420e-b8ed-a9b7d156cf55">
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2682/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7745
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7745/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7745/comments
|
https://api.github.com/repos/ollama/ollama/issues/7745/events
|
https://github.com/ollama/ollama/issues/7745
| 2,673,076,302
|
I_kwDOJ0Z1Ps6fU-hO
| 7,745
|
gpu VRAM usage didn't recover within timeout on llama3.2-vision:90b
|
{
"login": "ergosumdre",
"id": 35677602,
"node_id": "MDQ6VXNlcjM1Njc3NjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/35677602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ergosumdre",
"html_url": "https://github.com/ergosumdre",
"followers_url": "https://api.github.com/users/ergosumdre/followers",
"following_url": "https://api.github.com/users/ergosumdre/following{/other_user}",
"gists_url": "https://api.github.com/users/ergosumdre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ergosumdre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ergosumdre/subscriptions",
"organizations_url": "https://api.github.com/users/ergosumdre/orgs",
"repos_url": "https://api.github.com/users/ergosumdre/repos",
"events_url": "https://api.github.com/users/ergosumdre/events{/privacy}",
"received_events_url": "https://api.github.com/users/ergosumdre/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-19T18:10:27
| 2024-11-19T22:29:40
| 2024-11-19T22:29:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm encountering a 'timed out waiting for llama runner to start' error when executing the following command:
`ollama run llama3.2-vision:90b`
I have 64GB of VRAM, and I’m able to run other models without any issues. However, this specific model doesn’t seem to work.
Here are the server logs:
> dre@dre-slimserver-3t:~$ journalctl -u ollama --no-pager | tail -200
> Nov 19 11:54:18 dre-slimserver-3t ollama[569358]: llm_load_tensors: CUDA3 buffer size = 6520.80 MiB
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: n_ctx = 8192
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: n_batch = 2048
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: n_ubatch = 512
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: flash_attn = 0
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: freq_base = 500000.0
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: freq_scale = 1
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_kv_cache_init: CUDA0 KV buffer size = 672.00 MiB
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_kv_cache_init: CUDA1 KV buffer size = 640.00 MiB
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_kv_cache_init: CUDA2 KV buffer size = 640.00 MiB
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_kv_cache_init: CUDA3 KV buffer size = 608.00 MiB
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
> Nov 19 11:54:23 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
> Nov 19 11:54:24 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
> Nov 19 11:54:24 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
> Nov 19 11:54:24 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: CUDA2 compute buffer size = 1216.01 MiB
> Nov 19 11:54:24 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: CUDA3 compute buffer size = 1216.02 MiB
> Nov 19 11:54:24 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
> Nov 19 11:54:24 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: graph nodes = 2566
> Nov 19 11:54:24 dre-slimserver-3t ollama[569358]: llama_new_context_with_model: graph splits = 5
> Nov 19 11:54:24 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:54:24.228-06:00 level=INFO source=server.go:601 msg="llama runner started in 201.30 seconds"
> Nov 19 11:54:24 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:54:24 | 200 | 3m22s | 100.100.70.109 | POST "/api/generate"
> Nov 19 11:54:55 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:54:55 | 200 | 3.021552133s | 100.100.70.109 | POST "/api/chat"
> Nov 19 11:55:02 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:55:02 | 200 | 37.298µs | 100.100.70.109 | HEAD "/"
> Nov 19 11:55:02 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:55:02 | 200 | 53.04444ms | 100.100.70.109 | POST "/api/show"
> Nov 19 11:55:02 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:55:02 | 200 | 55.176054ms | 100.100.70.109 | POST "/api/generate"
> Nov 19 11:55:07 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:55:07 | 200 | 1.876809867s | 100.100.70.109 | POST "/api/chat"
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: loaded meta data with 22 key-value pairs and 723 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a09affdc2dffd0db03e7f0d1344c374d85add72ea4c715aa69b7b427f03d35d3 (version GGUF V3 (latest))
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 0: general.architecture str = llama
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 2: llama.block_count u32 = 80
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 3: llama.context_length u32 = 8192
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 4: llama.embedding_length u32 = 8192
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 6: llama.attention.head_count u32 = 64
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 10: general.file_type u32 = 10
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 21: general.quantization_version u32 = 2
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - type f32: 161 tensors
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - type q2_K: 321 tensors
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - type q3_K: 160 tensors
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - type q5_K: 80 tensors
> Nov 19 11:55:13 dre-slimserver-3t ollama[569358]: llama_model_loader: - type q6_K: 1 tensors
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_vocab: special tokens cache size = 256
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_vocab: token to piece cache size = 0.8000 MB
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: format = GGUF V3 (latest)
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: arch = llama
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: vocab type = BPE
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_vocab = 128256
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_merges = 280147
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: vocab_only = 1
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: model type = ?B
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: model ftype = all F32
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: model params = 70.55 B
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: model size = 24.56 GiB (2.99 BPW)
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: general.name = Meta-Llama-3-70B-Instruct
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: LF token = 128 'Ä'
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llm_load_print_meta: max token length = 256
> Nov 19 11:55:14 dre-slimserver-3t ollama[569358]: llama_model_load: vocab only - skipping tensors
> Nov 19 11:55:42 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:55:42 | 200 | 28.364µs | 100.100.70.109 | HEAD "/"
> Nov 19 11:55:42 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:55:42 | 200 | 26.459µs | 100.100.70.109 | GET "/api/ps"
> Nov 19 11:55:47 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:55:47 | 200 | 33.907µs | 100.100.70.109 | HEAD "/"
> Nov 19 11:55:47 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:55:47 | 200 | 37.95µs | 100.100.70.109 | GET "/api/ps"
> Nov 19 11:56:32 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:56:32 | 200 | 1m19s | 100.100.70.109 | POST "/api/chat"
> Nov 19 11:57:02 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:57:02 | 200 | 31.326µs | 100.100.70.109 | HEAD "/"
> Nov 19 11:57:02 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 11:57:02 | 200 | 36.847003ms | 100.100.70.109 | POST "/api/show"
> Nov 19 11:57:02 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:02.933-06:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
> Nov 19 11:57:03 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:03.647-06:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-3b28212c-2887-4f2a-6f4b-65a53d0f55ad library=cuda total="15.8 GiB" available="6.5 GiB"
> Nov 19 11:57:03 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:03.647-06:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-a9fe4c2f-a87c-c645-82ef-2da178797d63 library=cuda total="15.8 GiB" available="6.6 GiB"
> Nov 19 11:57:03 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:03.647-06:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-8e52c368-ae0c-66f4-0f9f-d9b1b5682d23 library=cuda total="15.8 GiB" available="7.4 GiB"
> Nov 19 11:57:03 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:03.647-06:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-bb673042-8563-32d8-a436-fddbe6b0de03 library=cuda total="15.8 GiB" available="7.4 GiB"
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:07.079-06:00 level=INFO source=server.go:105 msg="system memory" total="125.7 GiB" free="120.9 GiB" free_swap="2.0 GiB"
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:07.083-06:00 level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.9 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=101 layers.offload=96 layers.split=16,27,27,26 memory.available="[15.0 GiB 15.5 GiB 15.5 GiB 15.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="62.9 GiB" memory.required.partial="60.0 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[14.5 GiB 15.2 GiB 15.2 GiB 15.1 GiB]" memory.weights.total="49.3 GiB" memory.weights.repeating="48.5 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:07.085-06:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama1922152689/runners/cuda_v12/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 --ctx-size 2048 --batch-size 512 --n-gpu-layers 96 --mmproj /usr/share/ollama/.ollama/models/blobs/sha256-6b6c374d159e097509b33e9fda648c178c903959fc0c7dbfae487cc8d958093e --threads 10 --parallel 1 --tensor-split 16,27,27,26 --port 34707"
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:07.085-06:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:07.085-06:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:07.085-06:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:07.134-06:00 level=INFO source=runner.go:883 msg="starting go runner"
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:07.135-06:00 level=INFO source=runner.go:884 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=10
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:07.135-06:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:34707"
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: loaded meta data with 27 key-value pairs and 984 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 (version GGUF V3 (latest))
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 0: general.architecture str = mllama
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 1: general.type str = model
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 2: general.name str = Model
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 3: general.size_label str = 88B
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 4: mllama.block_count u32 = 100
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 5: mllama.context_length u32 = 131072
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 6: mllama.embedding_length u32 = 8192
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 7: mllama.feed_forward_length u32 = 28672
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 8: mllama.attention.head_count u32 = 64
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 9: mllama.attention.head_count_kv u32 = 8
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 10: mllama.rope.freq_base f32 = 500000.000000
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 11: mllama.attention.layer_norm_rms_epsilon f32 = 0.000010
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 12: general.file_type u32 = 15
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 13: mllama.vocab_size u32 = 128256
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 14: mllama.rope.dimension_count u32 = 128
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 15: mllama.attention.cross_attention_layers arr[i32,20] = [3, 8, 13, 18, 23, 28, 33, 38, 43, 48...
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 16: tokenizer.ggml.add_bos_token bool = true
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 18: tokenizer.ggml.pre str = llama-bpe
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,128257] = ["!", "\"", "#", "$", "%", "&", "'", ...
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,128257] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: time=2024-11-19T11:57:07.337-06:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 128000
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 128009
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 24: tokenizer.ggml.padding_token_id u32 = 128004
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 25: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - kv 26: general.quantization_version u32 = 2
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - type f32: 282 tensors
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - type q4_K: 611 tensors
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llama_model_loader: - type q6_K: 91 tensors
> Nov 19 11:57:07 dre-slimserver-3t ollama[569358]: llm_load_vocab: special tokens cache size = 257
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_vocab: token to piece cache size = 0.7999 MB
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: format = GGUF V3 (latest)
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: arch = mllama
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: vocab type = BPE
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_vocab = 128256
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_merges = 280147
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: vocab_only = 0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_ctx_train = 131072
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_embd = 8192
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_layer = 100
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_head = 64
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_head_kv = 8
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_rot = 128
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_swa = 0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_embd_head_k = 128
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_embd_head_v = 128
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_gqa = 8
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_embd_k_gqa = 1024
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_embd_v_gqa = 1024
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: f_norm_eps = 0.0e+00
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: f_logit_scale = 0.0e+00
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_ff = 28672
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_expert = 0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_expert_used = 0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: causal attn = 1
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: pooling type = 0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: rope type = 0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: rope scaling = linear
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: freq_base_train = 500000.0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: freq_scale_train = 1
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: n_ctx_orig_yarn = 131072
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: rope_finetuned = unknown
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: ssm_d_conv = 0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: ssm_d_inner = 0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: ssm_d_state = 0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: ssm_dt_rank = 0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: ssm_dt_b_c_rms = 0
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: model type = ?B
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: model ftype = Q4_K - Medium
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: model params = 87.67 B
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: model size = 49.08 GiB (4.81 BPW)
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: general.name = Model
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: PAD token = 128004 '<|finetune_right_pad_id|>'
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: LF token = 128 'Ä'
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_print_meta: max token length = 256
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llama_model_load: vocab mismatch 128256 !- 128257 ...
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: ggml_cuda_init: found 4 CUDA devices:
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: Device 0: Tesla V100-SXM2-16GB, compute capability 7.0, VMM: yes
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: Device 1: Tesla V100-SXM2-16GB, compute capability 7.0, VMM: yes
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: Device 2: Tesla V100-SXM2-16GB, compute capability 7.0, VMM: yes
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: Device 3: Tesla V100-SXM2-16GB, compute capability 7.0, VMM: yes
> Nov 19 11:57:08 dre-slimserver-3t ollama[569358]: llm_load_tensors: ggml ctx size = 2.25 MiB
> Nov 19 12:02:09 dre-slimserver-3t ollama[569358]: time=2024-11-19T12:02:07.166-06:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
> Nov 19 12:02:09 dre-slimserver-3t ollama[569358]: [GIN] 2024/11/19 - 12:02:07 | 500 | 5m4s | 100.100.70.109 | POST "/api/generate"
> Nov 19 12:02:12 dre-slimserver-3t ollama[569358]: time=2024-11-19T12:02:12.413-06:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.247257499 model=/usr/share/ollama/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7
> Nov 19 12:02:13 dre-slimserver-3t ollama[569358]: time=2024-11-19T12:02:13.171-06:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=6.005048646 model=/usr/share/ollama/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7
> Nov 19 12:02:13 dre-slimserver-3t ollama[569358]: time=2024-11-19T12:02:13.802-06:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=6.635933639 model=/usr/share/ollama/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.2
|
{
"login": "ergosumdre",
"id": 35677602,
"node_id": "MDQ6VXNlcjM1Njc3NjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/35677602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ergosumdre",
"html_url": "https://github.com/ergosumdre",
"followers_url": "https://api.github.com/users/ergosumdre/followers",
"following_url": "https://api.github.com/users/ergosumdre/following{/other_user}",
"gists_url": "https://api.github.com/users/ergosumdre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ergosumdre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ergosumdre/subscriptions",
"organizations_url": "https://api.github.com/users/ergosumdre/orgs",
"repos_url": "https://api.github.com/users/ergosumdre/repos",
"events_url": "https://api.github.com/users/ergosumdre/events{/privacy}",
"received_events_url": "https://api.github.com/users/ergosumdre/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7745/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8639
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8639/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8639/comments
|
https://api.github.com/repos/ollama/ollama/issues/8639/events
|
https://github.com/ollama/ollama/pull/8639
| 2,815,856,245
|
PR_kwDOJ0Z1Ps6JPIfp
| 8,639
|
Enable using rocm/dev-almalinux images for unified-builder-amd64
|
{
"login": "michaelburch",
"id": 13478210,
"node_id": "MDQ6VXNlcjEzNDc4MjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/13478210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelburch",
"html_url": "https://github.com/michaelburch",
"followers_url": "https://api.github.com/users/michaelburch/followers",
"following_url": "https://api.github.com/users/michaelburch/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelburch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelburch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelburch/subscriptions",
"organizations_url": "https://api.github.com/users/michaelburch/orgs",
"repos_url": "https://api.github.com/users/michaelburch/repos",
"events_url": "https://api.github.com/users/michaelburch/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelburch/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-28T14:33:29
| 2025-01-28T15:32:45
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8639",
"html_url": "https://github.com/ollama/ollama/pull/8639",
"diff_url": "https://github.com/ollama/ollama/pull/8639.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8639.patch",
"merged_at": null
}
|
Adds build args and dependency updates to support using rocm/dev-almalinux images for unified-builder-amd64:
```
ARG RHEL_VERSION=8
ARG RHEL_VARIANT=almalinux-${RHEL_VERSION}
```
These can be combined with the following to build Docker images against the latest ROCm library:
```
ARG ROCM_VERSION=6.3.1
```
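As a sketch, the build args above might be supplied on the command line like this. The docker invocation is only echoed here so it can run anywhere; the `--target unified-builder-amd64` name is taken from the PR title, and the exact flag combination is an assumption, not something stated in the PR.

```shell
# Sketch only: echo the docker build command implied by the PR's ARGs.
# RHEL_VERSION / RHEL_VARIANT / ROCM_VERSION come from the PR text;
# the target name is assumed from the PR title.
echo docker build \
  --build-arg RHEL_VERSION=8 \
  --build-arg RHEL_VARIANT=almalinux-8 \
  --build-arg ROCM_VERSION=6.3.1 \
  --target unified-builder-amd64 .
```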
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8639/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5125
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5125/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5125/comments
|
https://api.github.com/repos/ollama/ollama/issues/5125/events
|
https://github.com/ollama/ollama/pull/5125
| 2,360,951,586
|
PR_kwDOJ0Z1Ps5y4v6j
| 5,125
|
Bump latest fedora cuda repo to 39
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-19T00:15:15
| 2024-06-20T18:27:27
| 2024-06-20T18:27:24
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5125",
"html_url": "https://github.com/ollama/ollama/pull/5125",
"diff_url": "https://github.com/ollama/ollama/pull/5125.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5125.patch",
"merged_at": "2024-06-20T18:27:24"
}
|
Fixes #5062
Fedora 39 is now the latest.
https://developer.download.nvidia.com/compute/cuda/repos/
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5125/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7853
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7853/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7853/comments
|
https://api.github.com/repos/ollama/ollama/issues/7853/events
|
https://github.com/ollama/ollama/issues/7853
| 2,697,243,627
|
I_kwDOJ0Z1Ps6gxKvr
| 7,853
|
embeding api issue
|
{
"login": "sycbbyes",
"id": 15940789,
"node_id": "MDQ6VXNlcjE1OTQwNzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/15940789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sycbbyes",
"html_url": "https://github.com/sycbbyes",
"followers_url": "https://api.github.com/users/sycbbyes/followers",
"following_url": "https://api.github.com/users/sycbbyes/following{/other_user}",
"gists_url": "https://api.github.com/users/sycbbyes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sycbbyes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sycbbyes/subscriptions",
"organizations_url": "https://api.github.com/users/sycbbyes/orgs",
"repos_url": "https://api.github.com/users/sycbbyes/repos",
"events_url": "https://api.github.com/users/sycbbyes/events{/privacy}",
"received_events_url": "https://api.github.com/users/sycbbyes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-27T06:03:41
| 2024-12-05T07:44:33
| 2024-12-05T07:44:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When calling the Ollama `/api/embed` endpoint, there is a Python error with the message:
ollama-webui | 2024-11-27T04:25:57.120363066Z ValueError: [TypeError("'coroutine' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]
ollama-webui | 2024-11-27T04:26:04.202107805Z INFO: 172.19.0.1:56148 - "POST /ollama/api/embed HTTP/1.1" 500 Internal Server Error
ollama-webui | 2024-11-27T04:26:04.216128921Z ERROR: Exception in ASGI application
ollama-webui | 2024-11-27T04:26:04.216161929Z Traceback (most recent call last):
ollama-webui | 2024-11-27T04:26:04.216170778Z File "/usr/local/lib/python3.11/site-packages/fastapi/encoders.py", line 324, in jsonable_encoder
ollama-webui | 2024-11-27T04:26:04.216178272Z data = dict(obj)
ollama-webui | 2024-11-27T04:26:04.216184675Z ^^^^^^^^^
ollama-webui | 2024-11-27T04:26:04.216190906Z TypeError: 'coroutine' object is not iterable
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.4.5
|
{
"login": "sycbbyes",
"id": 15940789,
"node_id": "MDQ6VXNlcjE1OTQwNzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/15940789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sycbbyes",
"html_url": "https://github.com/sycbbyes",
"followers_url": "https://api.github.com/users/sycbbyes/followers",
"following_url": "https://api.github.com/users/sycbbyes/following{/other_user}",
"gists_url": "https://api.github.com/users/sycbbyes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sycbbyes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sycbbyes/subscriptions",
"organizations_url": "https://api.github.com/users/sycbbyes/orgs",
"repos_url": "https://api.github.com/users/sycbbyes/repos",
"events_url": "https://api.github.com/users/sycbbyes/events{/privacy}",
"received_events_url": "https://api.github.com/users/sycbbyes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7853/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7716
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7716/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7716/comments
|
https://api.github.com/repos/ollama/ollama/issues/7716/events
|
https://github.com/ollama/ollama/issues/7716
| 2,666,898,598
|
I_kwDOJ0Z1Ps6e9aSm
| 7,716
|
Feature suggestions and development compilation environment issues
|
{
"login": "mingyue0094",
"id": 63558866,
"node_id": "MDQ6VXNlcjYzNTU4ODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/63558866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingyue0094",
"html_url": "https://github.com/mingyue0094",
"followers_url": "https://api.github.com/users/mingyue0094/followers",
"following_url": "https://api.github.com/users/mingyue0094/following{/other_user}",
"gists_url": "https://api.github.com/users/mingyue0094/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mingyue0094/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mingyue0094/subscriptions",
"organizations_url": "https://api.github.com/users/mingyue0094/orgs",
"repos_url": "https://api.github.com/users/mingyue0094/repos",
"events_url": "https://api.github.com/users/mingyue0094/events{/privacy}",
"received_events_url": "https://api.github.com/users/mingyue0094/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 3
| 2024-11-18T02:43:59
| 2024-11-20T19:43:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Wishes:
1. An environment switch (e.g. `avx=0`) so that Ollama will still try to use the Nvidia GPU on CPUs without AVX.
2. On this repository page, press `.` to enter a complete development environment where one can modify code, compile, download the build, and run tests. Configuring that development environment by hand is complicated and difficult.
Good luck to you
-------
Setting the env var `OLLAMA_HOST="0.0.0.0"` already controls the listening address. Many home users have old computers that are still usable: the motherboard of such an old computer does not support AVX, but, being a gaming machine, it has an NVIDIA GPU such as a 3060.
So, in the future, could an environment variable be added as a switch, e.g. `avx=0`, so that the GPU is used even without AVX?
I read the issues and saw that many people enthusiastically shared how to modify the logic, compile, and implement this themselves. This is particularly unfriendly to people who do not do cross-platform Go development, because they get stuck on the environment-preparation step. They might change the source code correctly, yet be unable to compile it because the environment is so complicated. Moreover, such people would only set up this build environment once and never need it again.
It would be great if official support allowed enabling GPU use without AVX simply by setting an environment variable.
Alternatively, a ready-made development and compilation environment would help: open GitHub, press `.`, start modifying the code, compile it, then download and test the compiled file.
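For concreteness, the existing `OLLAMA_HOST` variable and the requested AVX switch could look like the sketch below. `OLLAMA_NO_AVX` is a hypothetical name for the proposed variable, not something Ollama reads today; the values are only echoed, no server is started.

```shell
# OLLAMA_HOST already exists and controls the server's listen address.
export OLLAMA_HOST="0.0.0.0"
# Hypothetical switch proposed in this issue; Ollama does not read this.
export OLLAMA_NO_AVX=1
echo "host=${OLLAMA_HOST} no_avx=${OLLAMA_NO_AVX}"
```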
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7716/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3543
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3543/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3543/comments
|
https://api.github.com/repos/ollama/ollama/issues/3543/events
|
https://github.com/ollama/ollama/issues/3543
| 2,232,381,369
|
I_kwDOJ0Z1Ps6FD2-5
| 3,543
|
Conversion Script
|
{
"login": "scefali",
"id": 8533851,
"node_id": "MDQ6VXNlcjg1MzM4NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8533851?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scefali",
"html_url": "https://github.com/scefali",
"followers_url": "https://api.github.com/users/scefali/followers",
"following_url": "https://api.github.com/users/scefali/following{/other_user}",
"gists_url": "https://api.github.com/users/scefali/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scefali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scefali/subscriptions",
"organizations_url": "https://api.github.com/users/scefali/orgs",
"repos_url": "https://api.github.com/users/scefali/repos",
"events_url": "https://api.github.com/users/scefali/events{/privacy}",
"received_events_url": "https://api.github.com/users/scefali/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-09T00:22:18
| 2024-04-24T18:39:07
| 2024-04-24T18:39:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am trying to run the conversion script, as shown in the example for converting a model to gguf, but it fails with the traceback below.
### What did you expect to see?
```
python llm/llama.cpp/convert.py ./model --outtype f16 --outfile converted.bin
Loading model file model/model-00001-of-00002.safetensors
Traceback (most recent call last):
File "/Users/stevecefali/omoide/ollama/llm/llama.cpp/convert.py", line 1523, in <module>
main()
File "/Users/stevecefali/omoide/ollama/llm/llama.cpp/convert.py", line 1455, in main
model_plus = load_some_model(args.model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stevecefali/omoide/ollama/llm/llama.cpp/convert.py", line 1344, in load_some_model
models_plus.append(lazy_load_file(path))
^^^^^^^^^^^^^^^^^^^^
File "/Users/stevecefali/omoide/ollama/llm/llama.cpp/convert.py", line 966, in lazy_load_file
raise ValueError(f"unknown format: {path}")
ValueError: unknown format: model/model-00001-of-00002.safetensors
```
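For reference (this helper is not part of `convert.py`): the safetensors container is simply an 8-byte little-endian length prefix followed by a JSON header and the raw tensor bytes, so a file the script rejects can still be inspected directly. A minimal sketch, assuming only the published safetensors layout:

```python
import json
import struct
import tempfile

def read_safetensors_header(path):
    """Parse the JSON header of a .safetensors file.

    The format starts with an 8-byte little-endian unsigned integer
    giving the header length, followed by that many bytes of JSON
    (tensor names, dtypes, shapes, byte offsets).
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Quick self-check against a minimal, hand-built file.
header = b'{"__metadata__":{"format":"pt"}}'
with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as f:
    f.write(struct.pack("<Q", len(header)) + header)
    path = f.name
print(read_safetensors_header(path))
```

If the header parses cleanly like this, the checkpoint file itself is valid, which suggests the `unknown format` error comes from the script's format detection rather than a corrupted download.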
### Steps to reproduce
```
git clone git@github.com:ollama/ollama.git ollama
cd ollama
git submodule init
git submodule update llm/llama.cpp
python3 -m venv llm/llama.cpp/.venv
source llm/llama.cpp/.venv/bin/activate
pip install -r llm/llama.cpp/requirements.txt
make -C llm/llama.cpp quantize
git lfs install
git clone https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1 model
python llm/llama.cpp/convert.py ./model --outtype f16 --outfile converted.bin
```
### Are there any recent changes that introduced the issue?
_No response_
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "scefali",
"id": 8533851,
"node_id": "MDQ6VXNlcjg1MzM4NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8533851?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scefali",
"html_url": "https://github.com/scefali",
"followers_url": "https://api.github.com/users/scefali/followers",
"following_url": "https://api.github.com/users/scefali/following{/other_user}",
"gists_url": "https://api.github.com/users/scefali/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scefali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scefali/subscriptions",
"organizations_url": "https://api.github.com/users/scefali/orgs",
"repos_url": "https://api.github.com/users/scefali/repos",
"events_url": "https://api.github.com/users/scefali/events{/privacy}",
"received_events_url": "https://api.github.com/users/scefali/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3543/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3464
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3464/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3464/comments
|
https://api.github.com/repos/ollama/ollama/issues/3464/events
|
https://github.com/ollama/ollama/pull/3464
| 2,221,593,829
|
PR_kwDOJ0Z1Ps5rfusN
| 3,464
|
Fix numgpu opt miscomparison
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-02T22:58:56
| 2024-04-03T03:10:20
| 2024-04-03T03:10:17
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3464",
"html_url": "https://github.com/ollama/ollama/pull/3464",
"diff_url": "https://github.com/ollama/ollama/pull/3464.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3464.patch",
"merged_at": "2024-04-03T03:10:17"
}
|
opts is now a pointer, which meant we incorrectly reloaded the model when the number of layers actually loaded didn't match the input request
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3464/timeline
| null | null | true
|