| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/4714
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4714/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4714/comments
|
https://api.github.com/repos/ollama/ollama/issues/4714/events
|
https://github.com/ollama/ollama/issues/4714
| 2,324,669,902
|
I_kwDOJ0Z1Ps6Kj6XO
| 4,714
|
In macOS Terminal.app, a single Japanese character at the end of an ongoing line disappears.
|
{
"login": "tokyohandsome",
"id": 34906599,
"node_id": "MDQ6VXNlcjM0OTA2NTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/34906599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tokyohandsome",
"html_url": "https://github.com/tokyohandsome",
"followers_url": "https://api.github.com/users/tokyohandsome/followers",
"following_url": "https://api.github.com/users/tokyohandsome/following{/other_user}",
"gists_url": "https://api.github.com/users/tokyohandsome/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tokyohandsome/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tokyohandsome/subscriptions",
"organizations_url": "https://api.github.com/users/tokyohandsome/orgs",
"repos_url": "https://api.github.com/users/tokyohandsome/repos",
"events_url": "https://api.github.com/users/tokyohandsome/events{/privacy}",
"received_events_url": "https://api.github.com/users/tokyohandsome/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-05-30T04:26:22
| 2024-05-30T23:25:13
| 2024-05-30T23:25:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
First, I'd like to thank you for fixing the Japanese and other multi-byte (double-width) character issues. Japanese output is much better than before.
However, while testing a couple of Japanese LLMs, I found another issue that does not seem to be related to the model.
When a sentence continues beyond the edge of the window, the last character on the line is removed: it is written, then deleted.
Example:
```
>>> 東京にまつわる興味深い事実を教えてください。
東京にまつわる興味深い事実の一つに、「世界で最も人口密度が高い都市の一つであること」が挙げ
れます。東京都特別区部は、約1,300万人もの人々が暮らしており、非常に狭い地域に多くの人々が生
しています。
```
The first line of the response initially ended with "ら", but it was deleted when the line wrapped and the following characters appeared on the next line.
Similarly, the second line initially had "活" at the end, but it was gone when the sentence continued onto the third line.
There is still room for a few more characters at the end of each line.
It looks like Ollama does not cut or copy the full character code, i.e. only half of a double-byte Japanese character. I assume so because there seems to be an invisible character or space ' ' instead of the missing character at the end of a line.
LLMs I tested with (all produced the same result):
aya:35b-23-q4_0
andrewcanis/command-r:q4_0
ArrowPro-7B-KUJIRA-f16:converted (I converted from a gguf model)
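The behavior described above is consistent with wrapping by terminal cells rather than by characters: CJK characters occupy two cells, so a wrap routine that miscounts width by one cell would drop exactly the last character before the fold. A minimal sketch of correct cell accounting (illustrative only, not Ollama's actual code), using only the Python standard library:

```python
import unicodedata

def display_width(s: str) -> int:
    """Count terminal cells: East Asian Wide/Fullwidth characters take 2 cells."""
    return sum(2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
               for ch in s)

# A fullwidth character like "ら" needs 2 free cells before the terminal
# edge; otherwise it must move whole to the next line, never be split.
```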
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.39
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4714/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5717
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5717/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5717/comments
|
https://api.github.com/repos/ollama/ollama/issues/5717/events
|
https://github.com/ollama/ollama/pull/5717
| 2,410,170,666
|
PR_kwDOJ0Z1Ps51dl4b
| 5,717
|
server: omit model system prompt if empty
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-16T04:14:45
| 2024-07-16T18:09:02
| 2024-07-16T18:09:00
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5717",
"html_url": "https://github.com/ollama/ollama/pull/5717",
"diff_url": "https://github.com/ollama/ollama/pull/5717.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5717.patch",
"merged_at": "2024-07-16T18:09:00"
}
|
Currently, the model's system prompt (defined by the `SYSTEM` Modelfile command) is templated even when it is empty. This change templates it only when it is non-empty.
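The idea of the change can be sketched in Python (illustrative only; the actual fix lives in Ollama's Go template rendering, and the `<<SYS>>` marker below is a hypothetical placeholder, not a real template):

```python
def render_prompt(system: str, user: str) -> str:
    """Emit the system section only when the system prompt is non-empty."""
    parts = []
    if system:  # previously the section was emitted even for ""
        parts.append(f"<<SYS>>{system}<</SYS>>")
    parts.append(user)
    return "\n".join(parts)
```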
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5717/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4023
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4023/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4023/comments
|
https://api.github.com/repos/ollama/ollama/issues/4023/events
|
https://github.com/ollama/ollama/pull/4023
| 2,268,617,892
|
PR_kwDOJ0Z1Ps5t_agt
| 4,023
|
fix(cli): unable to use CLI within the container
|
{
"login": "BlackHole1",
"id": 8198408,
"node_id": "MDQ6VXNlcjgxOTg0MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8198408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BlackHole1",
"html_url": "https://github.com/BlackHole1",
"followers_url": "https://api.github.com/users/BlackHole1/followers",
"following_url": "https://api.github.com/users/BlackHole1/following{/other_user}",
"gists_url": "https://api.github.com/users/BlackHole1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BlackHole1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlackHole1/subscriptions",
"organizations_url": "https://api.github.com/users/BlackHole1/orgs",
"repos_url": "https://api.github.com/users/BlackHole1/repos",
"events_url": "https://api.github.com/users/BlackHole1/events{/privacy}",
"received_events_url": "https://api.github.com/users/BlackHole1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-04-29T10:03:47
| 2024-05-07T01:43:55
| 2024-05-06T21:53:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4023",
"html_url": "https://github.com/ollama/ollama/pull/4023",
"diff_url": "https://github.com/ollama/ollama/pull/4023.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4023.patch",
"merged_at": null
}
|
In the container, `OLLAMA_HOST` is set by default to `0.0.0.0` (ref: [Dockerfile#L137]), which is fine when starting the server. However, as a client, it must use `127.0.0.1` or `localhost` for requests.
fix: #3521 #1337
maybe fix: #3526
[Dockerfile#L137]: https://github.com/ollama/ollama/blob/7e432cdfac51583459e7bfa8fdd485c74a6597e7/Dockerfile#L137
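The intent of the fix can be sketched with a hypothetical helper (not the PR's actual Go code): `0.0.0.0` is a listen-on-all-interfaces bind address, so a client connecting to the same `OLLAMA_HOST` value should substitute a loopback address.

```python
def client_host(host: str) -> str:
    """Map the server's bind-all address to a loopback address for clients."""
    hostname, sep, port = host.partition(":")
    if hostname == "0.0.0.0":
        hostname = "127.0.0.1"
    return hostname + sep + port
```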
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4023/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1321
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1321/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1321/comments
|
https://api.github.com/repos/ollama/ollama/issues/1321/events
|
https://github.com/ollama/ollama/pull/1321
| 2,017,280,524
|
PR_kwDOJ0Z1Ps5gtNWe
| 1,321
|
Fixed cuda repo location for rhel os
|
{
"login": "jeremiahbuckley",
"id": 17296746,
"node_id": "MDQ6VXNlcjE3Mjk2NzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/17296746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeremiahbuckley",
"html_url": "https://github.com/jeremiahbuckley",
"followers_url": "https://api.github.com/users/jeremiahbuckley/followers",
"following_url": "https://api.github.com/users/jeremiahbuckley/following{/other_user}",
"gists_url": "https://api.github.com/users/jeremiahbuckley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeremiahbuckley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeremiahbuckley/subscriptions",
"organizations_url": "https://api.github.com/users/jeremiahbuckley/orgs",
"repos_url": "https://api.github.com/users/jeremiahbuckley/repos",
"events_url": "https://api.github.com/users/jeremiahbuckley/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeremiahbuckley/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-29T19:28:42
| 2023-11-29T19:55:15
| 2023-11-29T19:55:15
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1321",
"html_url": "https://github.com/ollama/ollama/pull/1321",
"diff_url": "https://github.com/ollama/ollama/pull/1321.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1321.patch",
"merged_at": "2023-11-29T19:55:15"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1321/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3450
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3450/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3450/comments
|
https://api.github.com/repos/ollama/ollama/issues/3450/events
|
https://github.com/ollama/ollama/issues/3450
| 2,220,003,514
|
I_kwDOJ0Z1Ps6EUpC6
| 3,450
|
I want to make an open-source prompt and response database.
|
{
"login": "hemangjoshi37a",
"id": 12392345,
"node_id": "MDQ6VXNlcjEyMzkyMzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/12392345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemangjoshi37a",
"html_url": "https://github.com/hemangjoshi37a",
"followers_url": "https://api.github.com/users/hemangjoshi37a/followers",
"following_url": "https://api.github.com/users/hemangjoshi37a/following{/other_user}",
"gists_url": "https://api.github.com/users/hemangjoshi37a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hemangjoshi37a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hemangjoshi37a/subscriptions",
"organizations_url": "https://api.github.com/users/hemangjoshi37a/orgs",
"repos_url": "https://api.github.com/users/hemangjoshi37a/repos",
"events_url": "https://api.github.com/users/hemangjoshi37a/events{/privacy}",
"received_events_url": "https://api.github.com/users/hemangjoshi37a/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-04-02T09:24:47
| 2024-04-19T15:41:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I want users to be able to consent to open-sourcing their prompt-response pairs to a centralized database, from which anyone can train their models; with this, we can keep improving our models indefinitely.
### How should we solve this?
Add a consent checkbox saying "I am willing to donate my data to this centralized open-source database." After that, whenever the user makes an LLM query, the prompt-response pair is sent to the centralized database and stored. We could also store metadata, such as which model generated the response and the time of generation.
### What is the impact of this?
With this, models could become smaller and smaller while generating better responses, because they would keep improving continuously.
### Anything else?
N/A
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3450/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6921
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6921/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6921/comments
|
https://api.github.com/repos/ollama/ollama/issues/6921/events
|
https://github.com/ollama/ollama/issues/6921
| 2,543,021,522
|
I_kwDOJ0Z1Ps6Xk23S
| 6,921
|
Ollama build error with CUDA on Jetson Orin (CUDA v12.6)
|
{
"login": "jarek7777",
"id": 72649794,
"node_id": "MDQ6VXNlcjcyNjQ5Nzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/72649794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarek7777",
"html_url": "https://github.com/jarek7777",
"followers_url": "https://api.github.com/users/jarek7777/followers",
"following_url": "https://api.github.com/users/jarek7777/following{/other_user}",
"gists_url": "https://api.github.com/users/jarek7777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarek7777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarek7777/subscriptions",
"organizations_url": "https://api.github.com/users/jarek7777/orgs",
"repos_url": "https://api.github.com/users/jarek7777/repos",
"events_url": "https://api.github.com/users/jarek7777/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarek7777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-23T15:34:13
| 2024-09-25T00:17:35
| 2024-09-25T00:17:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
Error: llama runner process has terminated: CUDA error: the provided PTX was compiled with an unsupported toolchain.
current device: 0, in function ggml_cuda_compute_forward at /ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2326
err
/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:102: CUDA error
```
CUDA version 12.6.68
### OS
Linux
### GPU
Nvidia
### CPU
Other
### Ollama version
0.3.11
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6921/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6453
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6453/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6453/comments
|
https://api.github.com/repos/ollama/ollama/issues/6453/events
|
https://github.com/ollama/ollama/issues/6453
| 2,478,130,586
|
I_kwDOJ0Z1Ps6TtUWa
| 6,453
|
Inconsistent GPU Usage
|
{
"login": "gru3zi",
"id": 44057919,
"node_id": "MDQ6VXNlcjQ0MDU3OTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/44057919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gru3zi",
"html_url": "https://github.com/gru3zi",
"followers_url": "https://api.github.com/users/gru3zi/followers",
"following_url": "https://api.github.com/users/gru3zi/following{/other_user}",
"gists_url": "https://api.github.com/users/gru3zi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gru3zi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gru3zi/subscriptions",
"organizations_url": "https://api.github.com/users/gru3zi/orgs",
"repos_url": "https://api.github.com/users/gru3zi/repos",
"events_url": "https://api.github.com/users/gru3zi/events{/privacy}",
"received_events_url": "https://api.github.com/users/gru3zi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-08-21T14:03:09
| 2024-08-21T20:13:34
| 2024-08-21T19:53:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have been happily using Ollama for some time with my dual RTX 3090s and an NVLink adaptor. Recently I've found the output to be quite slow. After checking the outputs of both `ollama ps` and `nvidia-smi`, I found that my GPUs are no longer fully utilised; the load seems to be split between the CPU and the GPUs.
If I run different models of the same size, some show full GPU usage while others don't.


My service file, in which I set **Environment=CUDA_VISIBLE_DEVICES**; I had also run `sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm`.

Is there a way to stop CPU usage altogether?
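The slowdown described above is consistent with partial offload (assumed mechanics for illustration; the numbers below are made up, not Ollama's actual scheduler): when the model weights plus KV cache do not fit in free VRAM, whole layers fall back to the CPU, and generation slows dramatically.

```python
def layers_on_gpu(n_layers: int, layer_bytes: int, free_vram: int) -> int:
    """How many whole layers fit in free VRAM; the remainder run on the CPU."""
    return min(n_layers, free_vram // layer_bytes)

# e.g. a 40-layer model at 500 MB/layer with 18 GB free VRAM:
# 36 layers fit on the GPU, 4 spill to the CPU.
```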
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
ollama version is 0.3.6
|
{
"login": "gru3zi",
"id": 44057919,
"node_id": "MDQ6VXNlcjQ0MDU3OTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/44057919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gru3zi",
"html_url": "https://github.com/gru3zi",
"followers_url": "https://api.github.com/users/gru3zi/followers",
"following_url": "https://api.github.com/users/gru3zi/following{/other_user}",
"gists_url": "https://api.github.com/users/gru3zi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gru3zi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gru3zi/subscriptions",
"organizations_url": "https://api.github.com/users/gru3zi/orgs",
"repos_url": "https://api.github.com/users/gru3zi/repos",
"events_url": "https://api.github.com/users/gru3zi/events{/privacy}",
"received_events_url": "https://api.github.com/users/gru3zi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6453/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7871
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7871/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7871/comments
|
https://api.github.com/repos/ollama/ollama/issues/7871/events
|
https://github.com/ollama/ollama/issues/7871
| 2,701,943,335
|
I_kwDOJ0Z1Ps6hDGIn
| 7,871
|
pydantic issue with converted PNG images
|
{
"login": "ibagur",
"id": 2979615,
"node_id": "MDQ6VXNlcjI5Nzk2MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2979615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibagur",
"html_url": "https://github.com/ibagur",
"followers_url": "https://api.github.com/users/ibagur/followers",
"following_url": "https://api.github.com/users/ibagur/following{/other_user}",
"gists_url": "https://api.github.com/users/ibagur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibagur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibagur/subscriptions",
"organizations_url": "https://api.github.com/users/ibagur/orgs",
"repos_url": "https://api.github.com/users/ibagur/repos",
"events_url": "https://api.github.com/users/ibagur/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibagur/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-11-28T11:58:42
| 2024-11-28T16:32:24
| 2024-11-28T12:12:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Directly feeding a PNG image does not work (`failed to decode image: image: unknown format`), so until recently I was using the code below to encode the image file, and it used to work fine:
```
import base64
import io

from PIL import Image
import ollama

def encode_image_to_base64(image_path: str, format: str = "PNG") -> str:
    """Encodes an image file to a base64 string."""
    with Image.open(image_path) as img:
        buffered = io.BytesIO()
        img.save(buffered, format=format)
        return base64.b64encode(buffered.getvalue()).decode('utf-8')

response = ollama.chat(
    model='llama3.2-vision:latest',
    messages=[{
        'role': 'user',
        'content': 'What is in this image?',
        'images': [encode_image_to_base64('image1.png')]
    }]
)

# Print the response
print(response['message']['content'])
```
But now I get a pydantic serialization error:
```
PydanticSerializationError: Error calling function `serialize_model`: OSError: [Errno 63] File name too long: 'iVBORw0KGgoAAAANSUhEUgAAAOYAAAGQCAIAAAA1BIuEAAEAAElEQ...
```
The only workaround I have found is to convert the PNG image to JPG in memory and feed the JPG bytes directly to the `llama3.2-vision:latest` model:
```
def convert_to_jpg(image_path: str) -> bytes:
    """
    Convert image to JPG in memory and return the bytes
    """
    with Image.open(image_path) as img:
        # Convert to RGB if needed
        if img.mode != 'RGB':
            img = img.convert('RGB')
        # Save as JPG to memory buffer
        buffered = io.BytesIO()
        img.save(buffered, format='JPEG', quality=85)
        return buffered.getvalue()

response = ollama.chat(
    model='llama3.2-vision:latest',
    messages=[{
        'role': 'user',
        'content': 'What is in this image?',
        'images': [convert_to_jpg('image1.png')]
    }]
)

# Print the response
print(response['message']['content'])
```
What could be the reason? Was there a recent update to the `ollama`, `pydantic`, or `pillow` libraries? These are the versions I am using in my `venv`:
```
ollama 0.4.1
pillow 11.0.0
pydantic 2.10.2
pydantic_core 2.27.1
```
Thanks for your suggestions!
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.4.1
UPDATE:
The particular image I was using was in the wrong format: apparently it was a WebP image misnamed as PNG. Feeding a proper PNG image directly now works fine; no conversion is needed.
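A root cause like this (a WebP file misnamed `.png`) can be caught early by checking the file's magic bytes rather than trusting the extension. A small sketch using the well-known signatures:

```python
def sniff_image_format(data: bytes) -> str:
    """Identify common image formats from their leading magic bytes."""
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if data.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "webp"
    return "unknown"
```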
|
{
"login": "ibagur",
"id": 2979615,
"node_id": "MDQ6VXNlcjI5Nzk2MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2979615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibagur",
"html_url": "https://github.com/ibagur",
"followers_url": "https://api.github.com/users/ibagur/followers",
"following_url": "https://api.github.com/users/ibagur/following{/other_user}",
"gists_url": "https://api.github.com/users/ibagur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibagur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibagur/subscriptions",
"organizations_url": "https://api.github.com/users/ibagur/orgs",
"repos_url": "https://api.github.com/users/ibagur/repos",
"events_url": "https://api.github.com/users/ibagur/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibagur/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7871/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2669
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2669/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2669/comments
|
https://api.github.com/repos/ollama/ollama/issues/2669/events
|
https://github.com/ollama/ollama/issues/2669
| 2,148,447,138
|
I_kwDOJ0Z1Ps6ADrOi
| 2,669
|
Is it possible to add Orion model into downloadable model list
|
{
"login": "renillhuang",
"id": 24711416,
"node_id": "MDQ6VXNlcjI0NzExNDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/24711416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renillhuang",
"html_url": "https://github.com/renillhuang",
"followers_url": "https://api.github.com/users/renillhuang/followers",
"following_url": "https://api.github.com/users/renillhuang/following{/other_user}",
"gists_url": "https://api.github.com/users/renillhuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/renillhuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renillhuang/subscriptions",
"organizations_url": "https://api.github.com/users/renillhuang/orgs",
"repos_url": "https://api.github.com/users/renillhuang/repos",
"events_url": "https://api.github.com/users/renillhuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/renillhuang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 3
| 2024-02-22T07:56:24
| 2024-03-12T02:06:16
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
After creating the Orion14B-chat model, would it be possible to upload it to the ollama project
so that other users could choose it and download/run it locally?
Looking forward to your reply, thanks.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2669/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1296
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1296/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1296/comments
|
https://api.github.com/repos/ollama/ollama/issues/1296/events
|
https://github.com/ollama/ollama/issues/1296
| 2,013,551,136
|
I_kwDOJ0Z1Ps54BFog
| 1,296
|
All models gone?
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-11-28T03:33:49
| 2023-11-28T14:56:07
| 2023-11-28T14:56:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have no idea what happened. I started working and ran:
ollama run alfred
Error: could not connect to ollama server, run 'ollama serve' to start it
(alfred was previously installed)
ollama serve &
ollama run alfred
It started downloading it!
ollama list
All the models are gone.
In /usr/share/ollama/.ollama/models/blobs there are a lot of files, some of them large, so I think that's them,
but ollama doesn't know about them.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1296/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4618
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4618/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4618/comments
|
https://api.github.com/repos/ollama/ollama/issues/4618/events
|
https://github.com/ollama/ollama/issues/4618
| 2,315,979,851
|
I_kwDOJ0Z1Ps6KCwxL
| 4,618
|
Extended lora support
|
{
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.github.com/users/AncientMystic/followers",
"following_url": "https://api.github.com/users/AncientMystic/following{/other_user}",
"gists_url": "https://api.github.com/users/AncientMystic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AncientMystic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AncientMystic/subscriptions",
"organizations_url": "https://api.github.com/users/AncientMystic/orgs",
"repos_url": "https://api.github.com/users/AncientMystic/repos",
"events_url": "https://api.github.com/users/AncientMystic/events{/privacy}",
"received_events_url": "https://api.github.com/users/AncientMystic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-05-24T18:14:25
| 2024-07-10T19:37:38
| 2024-07-10T19:37:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Would it be possible to extend LoRA support so that adapters can be pulled like models and loaded more easily, for example with a command like
Ollama run model adapter lora
or something similar? That would make it easier to mix and match different LoRAs with different models instead of having to hard-code the adapter into a model's Modelfile.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4618/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4618/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3576
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3576/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3576/comments
|
https://api.github.com/repos/ollama/ollama/issues/3576/events
|
https://github.com/ollama/ollama/issues/3576
| 2,235,575,326
|
I_kwDOJ0Z1Ps6FQCwe
| 3,576
|
Support command r plus
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-10T13:14:59
| 2024-04-17T00:50:31
| 2024-04-17T00:50:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
https://huggingface.co/CohereForAI/c4ai-command-r-plus
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3576/reactions",
"total_count": 16,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/3576/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7271
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7271/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7271/comments
|
https://api.github.com/repos/ollama/ollama/issues/7271/events
|
https://github.com/ollama/ollama/pull/7271
| 2,599,236,565
|
PR_kwDOJ0Z1Ps5_LcRJ
| 7,271
|
Refactor context shift flag for infinite text generation comment
|
{
"login": "YassineOsip",
"id": 44472826,
"node_id": "MDQ6VXNlcjQ0NDcyODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/44472826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YassineOsip",
"html_url": "https://github.com/YassineOsip",
"followers_url": "https://api.github.com/users/YassineOsip/followers",
"following_url": "https://api.github.com/users/YassineOsip/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineOsip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YassineOsip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineOsip/subscriptions",
"organizations_url": "https://api.github.com/users/YassineOsip/orgs",
"repos_url": "https://api.github.com/users/YassineOsip/repos",
"events_url": "https://api.github.com/users/YassineOsip/events{/privacy}",
"received_events_url": "https://api.github.com/users/YassineOsip/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-10-19T14:24:40
| 2024-10-21T20:50:04
| 2024-10-21T20:50:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7271",
"html_url": "https://github.com/ollama/ollama/pull/7271",
"diff_url": "https://github.com/ollama/ollama/pull/7271.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7271.patch",
"merged_at": null
}
|
This pull request includes a minor correction to a comment in the `llama/common.h` file. The change fixes a typo in the comment for the `ctx_shift` parameter:
* [`llama/common.h`](diffhunk://#diff-670d1015c5d0908848f1f635691ebcc8372dc9a337ca0e93ad02abb72df998e3L275-R275): Corrected the typo "inifinite" (should be "infinite") in the comment for the `ctx_shift` parameter.
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7271/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5460
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5460/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5460/comments
|
https://api.github.com/repos/ollama/ollama/issues/5460/events
|
https://github.com/ollama/ollama/issues/5460
| 2,388,518,933
|
I_kwDOJ0Z1Ps6OXegV
| 5,460
|
custom model: error loading model: check_tensor_dims: tensor 'blk.0.ffn_norm.weight' not found
|
{
"login": "finnbusse",
"id": 110921874,
"node_id": "U_kgDOBpyIkg",
"avatar_url": "https://avatars.githubusercontent.com/u/110921874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/finnbusse",
"html_url": "https://github.com/finnbusse",
"followers_url": "https://api.github.com/users/finnbusse/followers",
"following_url": "https://api.github.com/users/finnbusse/following{/other_user}",
"gists_url": "https://api.github.com/users/finnbusse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/finnbusse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finnbusse/subscriptions",
"organizations_url": "https://api.github.com/users/finnbusse/orgs",
"repos_url": "https://api.github.com/users/finnbusse/repos",
"events_url": "https://api.github.com/users/finnbusse/events{/privacy}",
"received_events_url": "https://api.github.com/users/finnbusse/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-03T12:35:45
| 2024-07-26T18:18:50
| 2024-07-26T18:18:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I recently trained a custom AI model using Google Colab with Alpaca and Unsloth. The training process was successful, but when attempting to run the model using Ollama, I encountered an error.
`C:\Users\Finn\Downloads>ollama run test2
Error: llama runner process has terminated: exit status 0xc0000409`
Building this AI model was successful:
`C:\Users\Finn\Downloads>ollama create test2 -f Modelfile
transferring model data
using existing layer sha256:b9175c65733392c2bf6c90c4a2fc5772b948369ec3269fb7b0b1f2ae24a8ac2c
creating new layer sha256:73d81a2944b28d56a86a4bd8980f14085e5ec0e894b80f1932da1010a2411add
writing manifest
success`
Modelfile:
`FROM test2.gguf`
Before that, I was able to chat with the model within Google Colab.
Server logs:
` Device 0: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.39 MiB
llama_model_load: error loading model: check_tensor_dims: tensor 'blk.0.ffn_norm.weight' not found
llama_load_model_from_file: exception loading model
time=2024-07-03T14:34:25.661+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error"
time=2024-07-03T14:34:25.921+02:00 level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "
[GIN] 2024/07/03 - 14:34:25 | 500 | 1.1579959s | 127.0.0.1 | POST "/api/chat"
time=2024-07-03T14:34:30.949+02:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0273536 model=C:\Users\Finn\.ollama\models\blobs\sha256-b9175c65733392c2bf6c90c4a2fc5772b948369ec3269fb7b0b1f2ae24a8ac2c
time=2024-07-03T14:34:31.199+02:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2771314 model=C:\Users\Finn\.ollama\models\blobs\sha256-b9175c65733392c2bf6c90c4a2fc5772b948369ec3269fb7b0b1f2ae24a8ac2c
time=2024-07-03T14:34:31.451+02:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5292856 model=C:\Users\Finn\.ollama\models\blobs\sha256-b9175c65733392c2bf6c90c4a2fc5772b948369ec3269fb7b0b1f2ae24a8ac2c`
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5460/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/5460/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1164
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1164/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1164/comments
|
https://api.github.com/repos/ollama/ollama/issues/1164/events
|
https://github.com/ollama/ollama/pull/1164
| 1,998,064,614
|
PR_kwDOJ0Z1Ps5fseVd
| 1,164
|
update faq
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-17T01:10:37
| 2023-11-17T01:20:20
| 2023-11-17T01:20:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1164",
"html_url": "https://github.com/ollama/ollama/pull/1164",
"diff_url": "https://github.com/ollama/ollama/pull/1164.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1164.patch",
"merged_at": "2023-11-17T01:20:19"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1164/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/531
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/531/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/531/comments
|
https://api.github.com/repos/ollama/ollama/issues/531/events
|
https://github.com/ollama/ollama/pull/531
| 1,897,072,760
|
PR_kwDOJ0Z1Ps5aXrGo
| 531
|
set request.ContentLength
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-14T18:11:06
| 2023-09-14T20:33:12
| 2023-09-14T20:33:11
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/531",
"html_url": "https://github.com/ollama/ollama/pull/531",
"diff_url": "https://github.com/ollama/ollama/pull/531.diff",
"patch_url": "https://github.com/ollama/ollama/pull/531.patch",
"merged_at": "2023-09-14T20:33:11"
}
|
This informs the HTTP client that the content length is known and disables chunked Transfer-Encoding.
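The effect can be illustrated with a Python stdlib analogue (a hypothetical demo, not Ollama's Go client): when the request body is a bytes value of known length, the client sends a `Content-Length` header instead of chunked `Transfer-Encoding`.

```python
import http.client
import http.server
import threading

received = {}  # headers captured by the throwaway test server

class Handler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # Record how the client framed the request body.
        received['content_length'] = self.headers.get('Content-Length')
        received['transfer_encoding'] = self.headers.get('Transfer-Encoding')
        self.rfile.read(int(self.headers.get('Content-Length', 0)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection('127.0.0.1', server.server_port)
# A bytes body has a known size, so http.client sets Content-Length automatically.
conn.request('POST', '/', body=b'{"name": "test"}')
conn.getresponse().read()
conn.close()
server.shutdown()

print(received)  # Content-Length is set; Transfer-Encoding is absent
```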
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/531/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7052
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7052/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7052/comments
|
https://api.github.com/repos/ollama/ollama/issues/7052/events
|
https://github.com/ollama/ollama/issues/7052
| 2,557,928,701
|
I_kwDOJ0Z1Ps6YduT9
| 7,052
|
Capability checking does not consider custom templates
|
{
"login": "kyRobot",
"id": 9490543,
"node_id": "MDQ6VXNlcjk0OTA1NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9490543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyRobot",
"html_url": "https://github.com/kyRobot",
"followers_url": "https://api.github.com/users/kyRobot/followers",
"following_url": "https://api.github.com/users/kyRobot/following{/other_user}",
"gists_url": "https://api.github.com/users/kyRobot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kyRobot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyRobot/subscriptions",
"organizations_url": "https://api.github.com/users/kyRobot/orgs",
"repos_url": "https://api.github.com/users/kyRobot/repos",
"events_url": "https://api.github.com/users/kyRobot/events{/privacy}",
"received_events_url": "https://api.github.com/users/kyRobot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2024-10-01T00:41:31
| 2024-10-01T00:44:12
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When sending a completion request to a model that supports FIM tasks - e.g Qwen 2.5 Coder 7B base - Ollama rejects the request because the "model does not support insert".
Passing `raw` with the necessary prompt format to the model does work for completion, so the issue is not with the model; it is with Ollama's acceptance of the non-raw prompt.
I found the cause is that the Modelfile template does not contain `{{.Suffix}}`, since the instruct version of the same model, whose template does include the suffix, works for completions both in raw mode and otherwise.
Passing a custom template in the request to the base model, however, does not work: Ollama gives the same "model does not support insert" error.
The server code shows that only the Modelfile template is considered for compatibility checking.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.12
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7052/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7052/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7178
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7178/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7178/comments
|
https://api.github.com/repos/ollama/ollama/issues/7178/events
|
https://github.com/ollama/ollama/issues/7178
| 2,582,403,527
|
I_kwDOJ0Z1Ps6Z7FnH
| 7,178
|
Qwen2.5-Math support
|
{
"login": "fzyzcjy",
"id": 5236035,
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fzyzcjy",
"html_url": "https://github.com/fzyzcjy",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-10-12T01:46:09
| 2024-10-13T05:07:57
| 2024-10-13T05:07:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, thanks for the library! It seems that Ollama supports qwen2.5 and qwen2.5-coder, but not qwen2.5-math (a quick search only gives qwen2-math, which is an older model: https://ollama.com/search?q=qwen2.5-math).
Related: https://github.com/ollama/ollama/issues/6916
Related: https://github.com/ollama/ollama/issues/6889
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7178/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5752
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5752/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5752/comments
|
https://api.github.com/repos/ollama/ollama/issues/5752/events
|
https://github.com/ollama/ollama/pull/5752
| 2,414,269,197
|
PR_kwDOJ0Z1Ps51rXij
| 5,752
|
OpenAI: Function Based Testing
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-17T18:17:00
| 2024-07-21T04:39:36
| 2024-07-19T18:37:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5752",
"html_url": "https://github.com/ollama/ollama/pull/5752",
"diff_url": "https://github.com/ollama/ollama/pull/5752.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5752.patch",
"merged_at": "2024-07-19T18:37:13"
}
|
Distinguish tests by function, testing requests and error forwarding.
`captureRequestMiddleware` catches the request after it has been converted by the functionality middleware, before hitting a mock endpoint returning 200.
`ResponseRecorder` catches any errors that are returned in the response body immediately by the middleware, before reaching the endpoint.
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5752/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6073
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6073/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6073/comments
|
https://api.github.com/repos/ollama/ollama/issues/6073/events
|
https://github.com/ollama/ollama/issues/6073
| 2,437,787,400
|
I_kwDOJ0Z1Ps6RTa8I
| 6,073
|
Model request: Llama3-Athene-70B
|
{
"login": "joliss",
"id": 524783,
"node_id": "MDQ6VXNlcjUyNDc4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/524783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joliss",
"html_url": "https://github.com/joliss",
"followers_url": "https://api.github.com/users/joliss/followers",
"following_url": "https://api.github.com/users/joliss/following{/other_user}",
"gists_url": "https://api.github.com/users/joliss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joliss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joliss/subscriptions",
"organizations_url": "https://api.github.com/users/joliss/orgs",
"repos_url": "https://api.github.com/users/joliss/repos",
"events_url": "https://api.github.com/users/joliss/events{/privacy}",
"received_events_url": "https://api.github.com/users/joliss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-07-30T13:01:04
| 2024-08-17T21:18:41
| 2024-08-17T21:18:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be lovely to have the Llama3-based post-trained Athene-70b model available on Ollama! It is currently the highest-ranked open 70b model on the [LMSYS leaderboard](https://chat.lmsys.org/?leaderboard).
https://nexusflow.ai/blogs/athene
https://huggingface.co/Nexusflow/Athene-70B
Somebody also published a GGUF version: https://huggingface.co/bullerwins/Athene-70B-GGUF
|
{
"login": "joliss",
"id": 524783,
"node_id": "MDQ6VXNlcjUyNDc4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/524783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joliss",
"html_url": "https://github.com/joliss",
"followers_url": "https://api.github.com/users/joliss/followers",
"following_url": "https://api.github.com/users/joliss/following{/other_user}",
"gists_url": "https://api.github.com/users/joliss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joliss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joliss/subscriptions",
"organizations_url": "https://api.github.com/users/joliss/orgs",
"repos_url": "https://api.github.com/users/joliss/repos",
"events_url": "https://api.github.com/users/joliss/events{/privacy}",
"received_events_url": "https://api.github.com/users/joliss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6073/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6073/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4050
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4050/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4050/comments
|
https://api.github.com/repos/ollama/ollama/issues/4050/events
|
https://github.com/ollama/ollama/issues/4050
| 2,271,318,889
|
I_kwDOJ0Z1Ps6HYZNp
| 4,050
|
Ollama after 30 minutes start to be very very slow to answer the questions
|
{
"login": "nunostiles",
"id": 168548263,
"node_id": "U_kgDOCgvXpw",
"avatar_url": "https://avatars.githubusercontent.com/u/168548263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nunostiles",
"html_url": "https://github.com/nunostiles",
"followers_url": "https://api.github.com/users/nunostiles/followers",
"following_url": "https://api.github.com/users/nunostiles/following{/other_user}",
"gists_url": "https://api.github.com/users/nunostiles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nunostiles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nunostiles/subscriptions",
"organizations_url": "https://api.github.com/users/nunostiles/orgs",
"repos_url": "https://api.github.com/users/nunostiles/repos",
"events_url": "https://api.github.com/users/nunostiles/events{/privacy}",
"received_events_url": "https://api.github.com/users/nunostiles/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 10
| 2024-04-30T12:20:22
| 2024-12-19T23:46:07
| 2024-12-19T23:46:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've already tried several different models, but the issue always persists: after ~30 minutes it takes ages to answer questions, even with saved models. Is there anything that I can do? Is it in fact a bug?
For the first 30 minutes it runs normally without any slowness. Need help please, any suggestions?
Thank you!
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4050/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3682
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3682/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3682/comments
|
https://api.github.com/repos/ollama/ollama/issues/3682/events
|
https://github.com/ollama/ollama/pull/3682
| 2,246,845,425
|
PR_kwDOJ0Z1Ps5s2LF1
| 3,682
|
quantize any fp16/fp32 model
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-16T20:37:34
| 2024-05-07T22:20:51
| 2024-05-07T22:20:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3682",
"html_url": "https://github.com/ollama/ollama/pull/3682",
"diff_url": "https://github.com/ollama/ollama/pull/3682.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3682.patch",
"merged_at": "2024-05-07T22:20:49"
}
|
- FROM /path/to/{safetensors,pytorch}
- FROM /path/to/fp{16,32}.bin
- FROM model:fp{16,32}
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3682/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8241
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8241/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8241/comments
|
https://api.github.com/repos/ollama/ollama/issues/8241/events
|
https://github.com/ollama/ollama/issues/8241
| 2,758,807,838
|
I_kwDOJ0Z1Ps6kcBEe
| 8,241
|
Option to show all models available from registry/library
|
{
"login": "t18n",
"id": 14198542,
"node_id": "MDQ6VXNlcjE0MTk4NTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/14198542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t18n",
"html_url": "https://github.com/t18n",
"followers_url": "https://api.github.com/users/t18n/followers",
"following_url": "https://api.github.com/users/t18n/following{/other_user}",
"gists_url": "https://api.github.com/users/t18n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/t18n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/t18n/subscriptions",
"organizations_url": "https://api.github.com/users/t18n/orgs",
"repos_url": "https://api.github.com/users/t18n/repos",
"events_url": "https://api.github.com/users/t18n/events{/privacy}",
"received_events_url": "https://api.github.com/users/t18n/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-12-25T13:25:52
| 2024-12-26T00:26:10
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, `ollama list` shows all the installed models. It would be very useful to be able to show all the available models from the registry/[model library](https://github.com/ollama/ollama?tab=readme-ov-file#model-library), which would allow us to build a model management app that installs a model via a GUI with one click.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8241/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4452
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4452/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4452/comments
|
https://api.github.com/repos/ollama/ollama/issues/4452/events
|
https://github.com/ollama/ollama/pull/4452
| 2,297,736,217
|
PR_kwDOJ0Z1Ps5vhfvd
| 4,452
|
follow naming conventions
|
{
"login": "Tyrell04",
"id": 43107913,
"node_id": "MDQ6VXNlcjQzMTA3OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/43107913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tyrell04",
"html_url": "https://github.com/Tyrell04",
"followers_url": "https://api.github.com/users/Tyrell04/followers",
"following_url": "https://api.github.com/users/Tyrell04/following{/other_user}",
"gists_url": "https://api.github.com/users/Tyrell04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tyrell04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tyrell04/subscriptions",
"organizations_url": "https://api.github.com/users/Tyrell04/orgs",
"repos_url": "https://api.github.com/users/Tyrell04/repos",
"events_url": "https://api.github.com/users/Tyrell04/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tyrell04/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-15T12:09:38
| 2024-10-26T17:41:47
| 2024-10-26T17:41:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4452",
"html_url": "https://github.com/ollama/ollama/pull/4452",
"diff_url": "https://github.com/ollama/ollama/pull/4452.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4452.patch",
"merged_at": null
}
| null |
{
"login": "Tyrell04",
"id": 43107913,
"node_id": "MDQ6VXNlcjQzMTA3OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/43107913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tyrell04",
"html_url": "https://github.com/Tyrell04",
"followers_url": "https://api.github.com/users/Tyrell04/followers",
"following_url": "https://api.github.com/users/Tyrell04/following{/other_user}",
"gists_url": "https://api.github.com/users/Tyrell04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tyrell04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tyrell04/subscriptions",
"organizations_url": "https://api.github.com/users/Tyrell04/orgs",
"repos_url": "https://api.github.com/users/Tyrell04/repos",
"events_url": "https://api.github.com/users/Tyrell04/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tyrell04/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4452/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7350
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7350/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7350/comments
|
https://api.github.com/repos/ollama/ollama/issues/7350/events
|
https://github.com/ollama/ollama/issues/7350
| 2,613,001,236
|
I_kwDOJ0Z1Ps6bvzwU
| 7,350
|
Ollama keeps reloading the same model repeatedly
|
{
"login": "cray1031",
"id": 69585934,
"node_id": "MDQ6VXNlcjY5NTg1OTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/69585934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cray1031",
"html_url": "https://github.com/cray1031",
"followers_url": "https://api.github.com/users/cray1031/followers",
"following_url": "https://api.github.com/users/cray1031/following{/other_user}",
"gists_url": "https://api.github.com/users/cray1031/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cray1031/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cray1031/subscriptions",
"organizations_url": "https://api.github.com/users/cray1031/orgs",
"repos_url": "https://api.github.com/users/cray1031/repos",
"events_url": "https://api.github.com/users/cray1031/events{/privacy}",
"received_events_url": "https://api.github.com/users/cray1031/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-10-25T03:47:33
| 2024-11-17T14:22:06
| 2024-11-17T14:22:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
`docker run -d --gpus=all -v /data/ollama:/root/.ollama -p 9112:11434 -e OLLAMA_ORIGINS="*" -e OLLAMA_NUM_PARALLEL=15 -e OLLAMA_KEEP_ALIVE=2h -e OLLAMA_DEBUG=1 --name ollama_v0314 ollama/ollama:latest`
```
releasing cuda driver library
time=2024-10-25T03:39:46.460Z level=DEBUG source=server.go:1086 msg="stopping llama server"
time=2024-10-25T03:39:46.460Z level=DEBUG source=server.go:1092 msg="waiting for llama server to exit"
time=2024-10-25T03:39:46.549Z level=DEBUG source=server.go:1096 msg="llama server stopped"
time=2024-10-25T03:39:46.549Z level=DEBUG source=sched.go:380 msg="runner released" modelPath=/root/.ollama/models/blobs/sha256-ced7796abcbb47ef96412198ebd31ac1eca21e8bbc831d72a31df69e4a30aad5
time=2024-10-25T03:39:46.711Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="501.9 GiB" before.free="482.1 GiB" before.free_swap="0 B" now.total="501.9 GiB" now.free="482.1 GiB" now.free_swap="0 B"
CUDA driver version: 11.7
time=2024-10-25T03:39:46.940Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-556705f3-b8be-5aa8-7580-ce93ecdb297e name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:47.132Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-45d04aa6-0eed-614b-1b71-7fedc703efb6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="30.4 GiB" now.total="31.7 GiB" now.free="30.4 GiB" now.used="1.4 GiB"
time=2024-10-25T03:39:47.329Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-8a9608c9-324e-1c6b-dfd0-b1bb2ee1ad92 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:47.547Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-007f6b5a-bb43-c925-1206-1985428efa33 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:47.745Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4b8aaa40-bd18-365a-3057-b6523495086c name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:47.948Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-d298f359-e8fd-12a0-5c50-d9c5b75ca8b6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:48.119Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-b369557d-d7cf-f951-1c83-302ebb5ce7f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:48.273Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-f30d65b6-22cd-76ba-2cc3-22fe7b481010 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
releasing cuda driver library
time=2024-10-25T03:39:48.273Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="501.9 GiB" before.free="482.1 GiB" before.free_swap="0 B" now.total="501.9 GiB" now.free="482.1 GiB" now.free_swap="0 B"
CUDA driver version: 11.7
time=2024-10-25T03:39:48.458Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-556705f3-b8be-5aa8-7580-ce93ecdb297e name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:48.606Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-45d04aa6-0eed-614b-1b71-7fedc703efb6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="30.4 GiB" now.total="31.7 GiB" now.free="30.4 GiB" now.used="1.4 GiB"
time=2024-10-25T03:39:48.753Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-8a9608c9-324e-1c6b-dfd0-b1bb2ee1ad92 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:48.900Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-007f6b5a-bb43-c925-1206-1985428efa33 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:49.087Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4b8aaa40-bd18-365a-3057-b6523495086c name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:49.235Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-d298f359-e8fd-12a0-5c50-d9c5b75ca8b6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:49.417Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-b369557d-d7cf-f951-1c83-302ebb5ce7f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:49.565Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-f30d65b6-22cd-76ba-2cc3-22fe7b481010 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
releasing cuda driver library
time=2024-10-25T03:39:49.565Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="501.9 GiB" before.free="482.1 GiB" before.free_swap="0 B" now.total="501.9 GiB" now.free="482.1 GiB" now.free_swap="0 B"
CUDA driver version: 11.7
time=2024-10-25T03:39:49.758Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-556705f3-b8be-5aa8-7580-ce93ecdb297e name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:49.911Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-45d04aa6-0eed-614b-1b71-7fedc703efb6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="30.4 GiB" now.total="31.7 GiB" now.free="30.4 GiB" now.used="1.4 GiB"
time=2024-10-25T03:39:50.058Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-8a9608c9-324e-1c6b-dfd0-b1bb2ee1ad92 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:50.204Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-007f6b5a-bb43-c925-1206-1985428efa33 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:50.403Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4b8aaa40-bd18-365a-3057-b6523495086c name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:50.565Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-d298f359-e8fd-12a0-5c50-d9c5b75ca8b6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:50.711Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-b369557d-d7cf-f951-1c83-302ebb5ce7f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:50.858Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-f30d65b6-22cd-76ba-2cc3-22fe7b481010 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
releasing cuda driver library
time=2024-10-25T03:39:50.858Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.861122605 model=/root/.ollama/models/blobs/sha256-ced7796abcbb47ef96412198ebd31ac1eca21e8bbc831d72a31df69e4a30aad5
time=2024-10-25T03:39:50.858Z level=DEBUG source=sched.go:384 msg="sending an unloaded event" modelPath=/root/.ollama/models/blobs/sha256-ced7796abcbb47ef96412198ebd31ac1eca21e8bbc831d72a31df69e4a30aad5
time=2024-10-25T03:39:50.858Z level=DEBUG source=sched.go:308 msg="ignoring unload event with no pending requests"
time=2024-10-25T03:39:50.858Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="501.9 GiB" before.free="482.1 GiB" before.free_swap="0 B" now.total="501.9 GiB" now.free="482.1 GiB" now.free_swap="0 B"
CUDA driver version: 11.7
time=2024-10-25T03:39:51.071Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-556705f3-b8be-5aa8-7580-ce93ecdb297e name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:51.219Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-45d04aa6-0eed-614b-1b71-7fedc703efb6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="30.4 GiB" now.total="31.7 GiB" now.free="30.4 GiB" now.used="1.4 GiB"
time=2024-10-25T03:39:51.414Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-8a9608c9-324e-1c6b-dfd0-b1bb2ee1ad92 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:51.591Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-007f6b5a-bb43-c925-1206-1985428efa33 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:51.784Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4b8aaa40-bd18-365a-3057-b6523495086c name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:52.011Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-d298f359-e8fd-12a0-5c50-d9c5b75ca8b6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:52.171Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-b369557d-d7cf-f951-1c83-302ebb5ce7f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:52.359Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-f30d65b6-22cd-76ba-2cc3-22fe7b481010 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
releasing cuda driver library
time=2024-10-25T03:39:52.360Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=7.362796717 model=/root/.ollama/models/blobs/sha256-ced7796abcbb47ef96412198ebd31ac1eca21e8bbc831d72a31df69e4a30aad5
time=2024-10-25T03:39:52.360Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="501.9 GiB" before.free="482.1 GiB" before.free_swap="0 B" now.total="501.9 GiB" now.free="482.1 GiB" now.free_swap="0 B"
CUDA driver version: 11.7
time=2024-10-25T03:39:52.566Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-556705f3-b8be-5aa8-7580-ce93ecdb297e name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:52.720Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-45d04aa6-0eed-614b-1b71-7fedc703efb6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="30.4 GiB" now.total="31.7 GiB" now.free="30.4 GiB" now.used="1.4 GiB"
time=2024-10-25T03:39:52.868Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-8a9608c9-324e-1c6b-dfd0-b1bb2ee1ad92 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:53.045Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-007f6b5a-bb43-c925-1206-1985428efa33 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:53.383Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4b8aaa40-bd18-365a-3057-b6523495086c name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:53.652Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-d298f359-e8fd-12a0-5c50-d9c5b75ca8b6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:53.923Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-b369557d-d7cf-f951-1c83-302ebb5ce7f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:54.161Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-f30d65b6-22cd-76ba-2cc3-22fe7b481010 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
releasing cuda driver library
time=2024-10-25T03:39:54.161Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=9.164325299 model=/root/.ollama/models/blobs/sha256-ced7796abcbb47ef96412198ebd31ac1eca21e8bbc831d72a31df69e4a30aad5
```
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.34
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7350/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7771
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7771/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7771/comments
|
https://api.github.com/repos/ollama/ollama/issues/7771/events
|
https://github.com/ollama/ollama/issues/7771
| 2,677,691,308
|
I_kwDOJ0Z1Ps6fmlOs
| 7,771
|
CUDA error: unspecified launch failure: current device: 0, in function ggml_backend_cuda_synchronize at ggml-cuda.cu:2508
|
{
"login": "daocoder2",
"id": 19505806,
"node_id": "MDQ6VXNlcjE5NTA1ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19505806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daocoder2",
"html_url": "https://github.com/daocoder2",
"followers_url": "https://api.github.com/users/daocoder2/followers",
"following_url": "https://api.github.com/users/daocoder2/following{/other_user}",
"gists_url": "https://api.github.com/users/daocoder2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daocoder2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daocoder2/subscriptions",
"organizations_url": "https://api.github.com/users/daocoder2/orgs",
"repos_url": "https://api.github.com/users/daocoder2/repos",
"events_url": "https://api.github.com/users/daocoder2/events{/privacy}",
"received_events_url": "https://api.github.com/users/daocoder2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-21T01:41:16
| 2024-11-21T16:50:25
| 2024-11-21T16:50:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
2024/11/21 01:22:08 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:3 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:4 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-21T01:22:08.918Z level=INFO source=images.go:755 msg="total blobs: 50"
time=2024-11-21T01:22:08.919Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-21T01:22:08.919Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.1)"
time=2024-11-21T01:22:08.919Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 cpu]"
time=2024-11-21T01:22:08.919Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-21T01:22:09.494Z level=INFO source=types.go:123 msg="inference compute" id=GPU-cfe28dbd-f61e-acdb-96ed-815caf9afc67 library=cuda variant=v11 compute=8.0 driver=11.7 name="NVIDIA Graphics Device" total="79.3 GiB" available="78.9 GiB"
time=2024-11-21T01:22:09.494Z level=INFO source=types.go:123 msg="inference compute" id=GPU-c5a1deea-f294-f993-7aa4-5386493bad88 library=cuda variant=v11 compute=8.0 driver=11.7 name="NVIDIA Graphics Device" total="79.3 GiB" available="45.0 GiB"
time=2024-11-21T01:22:09.494Z level=INFO source=types.go:123 msg="inference compute" id=GPU-807da1fa-7fac-08aa-4a8c-7c176f72f13b library=cuda variant=v11 compute=8.0 driver=11.7 name="NVIDIA Graphics Device" total="79.3 GiB" available="19.4 GiB"
time=2024-11-21T01:22:09.494Z level=INFO source=types.go:123 msg="inference compute" id=GPU-d2785990-22de-3488-9102-778351cda270 library=cuda variant=v11 compute=8.0 driver=11.7 name="NVIDIA Graphics Device" total="79.3 GiB" available="19.8 GiB"
time=2024-11-21T01:22:18.349Z level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
time=2024-11-21T01:22:18.821Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 library=cuda parallel=1 required="62.9 GiB"
time=2024-11-21T01:22:19.246Z level=INFO source=server.go:105 msg="system memory" total="2015.3 GiB" free="1895.5 GiB" free_swap="0 B"
time=2024-11-21T01:22:19.249Z level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.9 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=101 layers.offload=101 layers.split=26,25,25,25 memory.available="[78.9 GiB 45.0 GiB 19.8 GiB 19.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="62.9 GiB" memory.required.partial="62.9 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[19.8 GiB 14.3 GiB 14.4 GiB 14.4 GiB]" memory.weights.total="49.3 GiB" memory.weights.repeating="48.5 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-11-21T01:22:19.250Z level=INFO source=server.go:383 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 --ctx-size 2048 --batch-size 512 --n-gpu-layers 101 --mmproj /root/.ollama/models/blobs/sha256-6b6c374d159e097509b33e9fda648c178c903959fc0c7dbfae487cc8d958093e --threads 64 --parallel 1 --tensor-split 26,25,25,25 --port 38572"
time=2024-11-21T01:22:19.250Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-21T01:22:19.250Z level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-21T01:22:19.250Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-21T01:22:19.259Z level=INFO source=runner.go:863 msg="starting go runner"
time=2024-11-21T01:22:19.259Z level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=64
time=2024-11-21T01:22:19.260Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:38572"
llama_model_loader: loaded meta data with 27 key-value pairs and 984 tensors from /root/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = mllama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Model
llama_model_loader: - kv 3: general.size_label str = 88B
llama_model_loader: - kv 4: mllama.block_count u32 = 100
llama_model_loader: - kv 5: mllama.context_length u32 = 131072
llama_model_loader: - kv 6: mllama.embedding_length u32 = 8192
llama_model_loader: - kv 7: mllama.feed_forward_length u32 = 28672
llama_model_loader: - kv 8: mllama.attention.head_count u32 = 64
llama_model_loader: - kv 9: mllama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 10: mllama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 11: mllama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 12: general.file_type u32 = 15
llama_model_loader: - kv 13: mllama.vocab_size u32 = 128256
llama_model_loader: - kv 14: mllama.rope.dimension_count u32 = 128
llama_model_loader: - kv 15: mllama.attention.cross_attention_layers arr[i32,20] = [3, 8, 13, 18, 23, 28, 33, 38, 43, 48...
llama_model_loader: - kv 16: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 18: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,128257] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,128257] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 24: tokenizer.ggml.padding_token_id u32 = 128004
llama_model_loader: - kv 25: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 26: general.quantization_version u32 = 2
llama_model_loader: - type f32: 282 tensors
llama_model_loader: - type q4_K: 611 tensors
llama_model_loader: - type q6_K: 91 tensors
time=2024-11-21T01:22:19.502Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 257
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = mllama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 100
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 87.67 B
llm_load_print_meta: model size = 49.08 GiB (4.81 BPW)
llm_load_print_meta: general.name = Model
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: PAD token = 128004 '<|finetune_right_pad_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab mismatch 128256 !- 128257 ...
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
Device 0: NVIDIA Graphics Device, compute capability 8.0, VMM: yes
Device 1: NVIDIA Graphics Device, compute capability 8.0, VMM: yes
Device 2: NVIDIA Graphics Device, compute capability 8.0, VMM: yes
Device 3: NVIDIA Graphics Device, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size = 2.25 MiB
llm_load_tensors: offloading 100 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 101/101 layers to GPU
llm_load_tensors: CPU buffer size = 563.66 MiB
llm_load_tensors: CUDA0 buffer size = 12886.45 MiB
llm_load_tensors: CUDA1 buffer size = 12010.76 MiB
llm_load_tensors: CUDA2 buffer size = 11953.01 MiB
llm_load_tensors: CUDA3 buffer size = 12848.06 MiB
time=2024-11-21T01:22:29.980Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server not responding"
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 418.16 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 410.16 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 410.16 MiB
llama_kv_cache_init: CUDA3 KV buffer size = 402.16 MiB
llama_new_context_with_model: KV self size = 1640.62 MiB, K (f16): 820.31 MiB, V (f16): 820.31 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.52 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 400.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 400.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 400.01 MiB
llama_new_context_with_model: CUDA3 compute buffer size = 400.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 32.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 5
mllama_model_load: model name: Llama-3.2-90B-Vision-Instruct
mllama_model_load: description: vision encoder for Mllama
mllama_model_load: GGUF version: 3
mllama_model_load: alignment: 32
mllama_model_load: n_tensors: 512
mllama_model_load: n_kv: 17
mllama_model_load: ftype: f16
mllama_model_load:
mllama_model_load: vision using CUDA backend
time=2024-11-21T01:22:30.230Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
mllama_model_load: compute allocated memory: 2853.34 MB
time=2024-11-21T01:22:30.982Z level=INFO source=server.go:601 msg="llama runner started in 11.73 seconds"
CUDA error: unspecified launch failure
current device: 0, in function ggml_backend_cuda_synchronize at ggml-cuda.cu:2508
cudaStreamSynchronize(cuda_ctx->stream())
ggml-cuda.cu:132: CUDA error
SIGBUS: bus error
PC=0x7fe40040db53 m=12 sigcode=2 addr=0x21a403fcc
signal arrived during cgo execution
goroutine 7 gp=0xc0002ac000 m=12 mp=0xc000200808 [syscall]:
runtime.cgocall(0x561ad5f9be90, 0xc000183b60)
runtime/cgocall.go:157 +0x4b fp=0xc000183b38 sp=0xc000183b00 pc=0x561ad5d1e3cb
github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7fe394006450, {0x6f, 0x561ad7af2fe0, 0x0, 0x0, 0x561ad7af37f0, 0x561ad7af4000, 0x561ad7af4810, 0x561ad79c42e0, 0x0, ...})
_cgo_gotypes.go:543 +0x52 fp=0xc000183b60 sp=0xc000183b38 pc=0x561ad5e1b952
github.com/ollama/ollama/llama.(*Context).Decode.func1(0x561ad5f97d4b?, 0x7fe394006450?)
github.com/ollama/ollama/llama/llama.go:167 +0xd8 fp=0xc000183c80 sp=0xc000183b60 pc=0x561ad5e1de78
github.com/ollama/ollama/llama.(*Context).Decode(0xc0000163c0?, 0x1?)
github.com/ollama/ollama/llama/llama.go:167 +0x17 fp=0xc000183cc8 sp=0xc000183c80 pc=0x561ad5e1dcd7
main.(*Server).processBatch(0xc0001ce120, 0xc0001cc150, 0xc0001cc1c0)
github.com/ollama/ollama/llama/runner/runner.go:424 +0x29e fp=0xc000183ed0 sp=0xc000183cc8 pc=0x561ad5f96d7e
main.(*Server).run(0xc0001ce120, {0x561ad62d9a40, 0xc0001a40a0})
github.com/ollama/ollama/llama/runner/runner.go:338 +0x1a5 fp=0xc000183fb8 sp=0xc000183ed0 pc=0x561ad5f96765
main.main.gowrap2()
github.com/ollama/ollama/llama/runner/runner.go:901 +0x28 fp=0xc000183fe0 sp=0xc000183fb8 pc=0x561ad5f9aec8
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000183fe8 sp=0xc000183fe0 pc=0x561ad5d86de1
created by main.main in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:901 +0xc2b
```
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.1
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7771/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8335
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8335/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8335/comments
|
https://api.github.com/repos/ollama/ollama/issues/8335/events
|
https://github.com/ollama/ollama/issues/8335
| 2,772,565,463
|
I_kwDOJ0Z1Ps6lQf3X
| 8,335
|
Make flash attention configurable via UI or enable by default
|
{
"login": "HDembinski",
"id": 2631586,
"node_id": "MDQ6VXNlcjI2MzE1ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2631586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HDembinski",
"html_url": "https://github.com/HDembinski",
"followers_url": "https://api.github.com/users/HDembinski/followers",
"following_url": "https://api.github.com/users/HDembinski/following{/other_user}",
"gists_url": "https://api.github.com/users/HDembinski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HDembinski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HDembinski/subscriptions",
"organizations_url": "https://api.github.com/users/HDembinski/orgs",
"repos_url": "https://api.github.com/users/HDembinski/repos",
"events_url": "https://api.github.com/users/HDembinski/events{/privacy}",
"received_events_url": "https://api.github.com/users/HDembinski/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-07T11:10:17
| 2025-01-07T11:10:17
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I love Ollama, excellent work. It makes using LLMs really beginner friendly without imposing limits on power users.
I recently learned about flash attention and found out from the FAQ that Ollama supports it. Since flash attention is important for supporting large contexts and can speed up models considerably, it would be great if the option to enable it were more easily accessible.
I am on Windows, where the Ollama server has a small icon in the notification area. It would be great if you could add a checkbox there to enable flash attention and set the KV cache quantization.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8335/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/472
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/472/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/472/comments
|
https://api.github.com/repos/ollama/ollama/issues/472/events
|
https://github.com/ollama/ollama/pull/472
| 1,882,888,686
|
PR_kwDOJ0Z1Ps5Zn3qn
| 472
|
Added missing options params to the embeddings docs
|
{
"login": "yackermann",
"id": 1636116,
"node_id": "MDQ6VXNlcjE2MzYxMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1636116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yackermann",
"html_url": "https://github.com/yackermann",
"followers_url": "https://api.github.com/users/yackermann/followers",
"following_url": "https://api.github.com/users/yackermann/following{/other_user}",
"gists_url": "https://api.github.com/users/yackermann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yackermann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yackermann/subscriptions",
"organizations_url": "https://api.github.com/users/yackermann/orgs",
"repos_url": "https://api.github.com/users/yackermann/repos",
"events_url": "https://api.github.com/users/yackermann/events{/privacy}",
"received_events_url": "https://api.github.com/users/yackermann/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-05T23:59:24
| 2023-09-06T00:19:01
| 2023-09-06T00:18:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/472",
"html_url": "https://github.com/ollama/ollama/pull/472",
"diff_url": "https://github.com/ollama/ollama/pull/472.diff",
"patch_url": "https://github.com/ollama/ollama/pull/472.patch",
"merged_at": "2023-09-06T00:18:49"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/472/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5975
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5975/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5975/comments
|
https://api.github.com/repos/ollama/ollama/issues/5975/events
|
https://github.com/ollama/ollama/issues/5975
| 2,431,736,450
|
I_kwDOJ0Z1Ps6Q8VqC
| 5,975
|
Deepseek2 with large context crashes with "Deepseek2 does not support K-shift"
|
{
"login": "balckwilliam",
"id": 32457598,
"node_id": "MDQ6VXNlcjMyNDU3NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/32457598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balckwilliam",
"html_url": "https://github.com/balckwilliam",
"followers_url": "https://api.github.com/users/balckwilliam/followers",
"following_url": "https://api.github.com/users/balckwilliam/following{/other_user}",
"gists_url": "https://api.github.com/users/balckwilliam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/balckwilliam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balckwilliam/subscriptions",
"organizations_url": "https://api.github.com/users/balckwilliam/orgs",
"repos_url": "https://api.github.com/users/balckwilliam/repos",
"events_url": "https://api.github.com/users/balckwilliam/events{/privacy}",
"received_events_url": "https://api.github.com/users/balckwilliam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2024-07-26T08:44:26
| 2024-12-17T16:34:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/src/llama.cpp:15147: false && "Deepseek2 does not support K-shift"
### OS
Linux, Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.0
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5975/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5975/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1250
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1250/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1250/comments
|
https://api.github.com/repos/ollama/ollama/issues/1250/events
|
https://github.com/ollama/ollama/pull/1250
| 2,007,218,435
|
PR_kwDOJ0Z1Ps5gLaE2
| 1,250
|
refactor layer creation
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-22T22:56:02
| 2023-12-05T22:32:54
| 2023-12-05T22:32:52
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1250",
"html_url": "https://github.com/ollama/ollama/pull/1250",
"diff_url": "https://github.com/ollama/ollama/pull/1250.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1250.patch",
"merged_at": "2023-12-05T22:32:52"
}
|
refactor layer creation
previous layer creation was not ideal because:
1. it required reading the input file multiple times: once to calculate the sha256 checksum, again to write it to disk, and potentially once more to decode the underlying gguf
2. it used io.ReadSeeker, which is prone to user error; if the file isn't reset correctly or to the right position, it could end up reading an empty file
there is also some brittleness when reading existing layers: otherwise, writing the inherited layers will error by reading an already closed file
this commit aims to fix these issues by restructuring layer creation:
1. the layer is now written to a temporary file as well as the hash function, and moved to the final location on Commit
2. layers are read once when copied to the destination; the exception is raw model files, which still require a second read to decode the model metadata
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1250/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/872
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/872/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/872/comments
|
https://api.github.com/repos/ollama/ollama/issues/872/events
|
https://github.com/ollama/ollama/pull/872
| 1,955,638,029
|
PR_kwDOJ0Z1Ps5dc8hb
| 872
|
fix readme for linux : port address already in use
|
{
"login": "Yadheedhya06",
"id": 79125868,
"node_id": "MDQ6VXNlcjc5MTI1ODY4",
"avatar_url": "https://avatars.githubusercontent.com/u/79125868?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yadheedhya06",
"html_url": "https://github.com/Yadheedhya06",
"followers_url": "https://api.github.com/users/Yadheedhya06/followers",
"following_url": "https://api.github.com/users/Yadheedhya06/following{/other_user}",
"gists_url": "https://api.github.com/users/Yadheedhya06/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yadheedhya06/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yadheedhya06/subscriptions",
"organizations_url": "https://api.github.com/users/Yadheedhya06/orgs",
"repos_url": "https://api.github.com/users/Yadheedhya06/repos",
"events_url": "https://api.github.com/users/Yadheedhya06/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yadheedhya06/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-21T19:32:37
| 2023-10-26T17:47:43
| 2023-10-26T17:47:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/872",
"html_url": "https://github.com/ollama/ollama/pull/872",
"diff_url": "https://github.com/ollama/ollama/pull/872.diff",
"patch_url": "https://github.com/ollama/ollama/pull/872.patch",
"merged_at": null
}
|
If the user is installing Ollama for the first time (a fresh install), the Ollama server is started automatically. So when you then run
```
ollama serve
```
it throws the error `127.0.0.1:11434: bind: address already in use`.
Instead of running this command, the user can skip straight to running a model.
This PR applies the corresponding fixes to the Linux documentation.
Fixes: https://github.com/jmorganca/ollama/issues/707
|
{
"login": "Yadheedhya06",
"id": 79125868,
"node_id": "MDQ6VXNlcjc5MTI1ODY4",
"avatar_url": "https://avatars.githubusercontent.com/u/79125868?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yadheedhya06",
"html_url": "https://github.com/Yadheedhya06",
"followers_url": "https://api.github.com/users/Yadheedhya06/followers",
"following_url": "https://api.github.com/users/Yadheedhya06/following{/other_user}",
"gists_url": "https://api.github.com/users/Yadheedhya06/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yadheedhya06/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yadheedhya06/subscriptions",
"organizations_url": "https://api.github.com/users/Yadheedhya06/orgs",
"repos_url": "https://api.github.com/users/Yadheedhya06/repos",
"events_url": "https://api.github.com/users/Yadheedhya06/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yadheedhya06/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/872/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1732
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1732/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1732/comments
|
https://api.github.com/repos/ollama/ollama/issues/1732/events
|
https://github.com/ollama/ollama/pull/1732
| 2,057,903,932
|
PR_kwDOJ0Z1Ps5i22xd
| 1,732
|
Add list-remote command line option
|
{
"login": "kris-hansen",
"id": 8484582,
"node_id": "MDQ6VXNlcjg0ODQ1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8484582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kris-hansen",
"html_url": "https://github.com/kris-hansen",
"followers_url": "https://api.github.com/users/kris-hansen/followers",
"following_url": "https://api.github.com/users/kris-hansen/following{/other_user}",
"gists_url": "https://api.github.com/users/kris-hansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kris-hansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kris-hansen/subscriptions",
"organizations_url": "https://api.github.com/users/kris-hansen/orgs",
"repos_url": "https://api.github.com/users/kris-hansen/repos",
"events_url": "https://api.github.com/users/kris-hansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/kris-hansen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-12-28T02:00:00
| 2024-05-09T16:07:25
| 2024-05-09T16:07:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1732",
"html_url": "https://github.com/ollama/ollama/pull/1732",
"diff_url": "https://github.com/ollama/ollama/pull/1732.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1732.patch",
"merged_at": null
}
|
- Added a feature to be able to fetch the model library from ollama.ai/library
- This makes it easier to determine which models are available to pull without leaving the command line world
- used goquery to make the HTML parsing a bit more manageable, and added error handling to improve error reporting in case the HTML changes
- I realize that parsing the HTML is a bit hacky; this can be improved in the future by hosting a models.json and using it to feed the HTML list (it would also be easier to fetch and parse from the CLI), but this method should be durable provided the current library page structure remains intact
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1732/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1732/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6671
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6671/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6671/comments
|
https://api.github.com/repos/ollama/ollama/issues/6671/events
|
https://github.com/ollama/ollama/issues/6671
| 2,509,895,725
|
I_kwDOJ0Z1Ps6Vmfgt
| 6,671
|
Reflection 70B NEED Tools
|
{
"login": "xiaoyu9982",
"id": 179811153,
"node_id": "U_kgDOCrezUQ",
"avatar_url": "https://avatars.githubusercontent.com/u/179811153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaoyu9982",
"html_url": "https://github.com/xiaoyu9982",
"followers_url": "https://api.github.com/users/xiaoyu9982/followers",
"following_url": "https://api.github.com/users/xiaoyu9982/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaoyu9982/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaoyu9982/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaoyu9982/subscriptions",
"organizations_url": "https://api.github.com/users/xiaoyu9982/orgs",
"repos_url": "https://api.github.com/users/xiaoyu9982/repos",
"events_url": "https://api.github.com/users/xiaoyu9982/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaoyu9982/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-09-06T08:57:36
| 2024-09-06T21:19:49
| 2024-09-06T21:19:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Reflection 70B NEED Tools
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6671/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/667
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/667/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/667/comments
|
https://api.github.com/repos/ollama/ollama/issues/667/events
|
https://github.com/ollama/ollama/pull/667
| 1,920,894,343
|
PR_kwDOJ0Z1Ps5bnrd2
| 667
|
Use build tags to generate accelerated binaries for CUDA and ROCm on …
|
{
"login": "65a",
"id": 10104049,
"node_id": "MDQ6VXNlcjEwMTA0MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/10104049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/65a",
"html_url": "https://github.com/65a",
"followers_url": "https://api.github.com/users/65a/followers",
"following_url": "https://api.github.com/users/65a/following{/other_user}",
"gists_url": "https://api.github.com/users/65a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/65a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/65a/subscriptions",
"organizations_url": "https://api.github.com/users/65a/orgs",
"repos_url": "https://api.github.com/users/65a/repos",
"events_url": "https://api.github.com/users/65a/events{/privacy}",
"received_events_url": "https://api.github.com/users/65a/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 20
| 2023-10-01T18:05:34
| 2023-10-17T00:45:22
| 2023-10-17T00:31:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/667",
"html_url": "https://github.com/ollama/ollama/pull/667",
"diff_url": "https://github.com/ollama/ollama/pull/667.diff",
"patch_url": "https://github.com/ollama/ollama/pull/667.patch",
"merged_at": null
}
|
…Linux. The binary will detect and use the accelerated runtimes embedded in it. The build tags `rocm` or `cuda` must be specified to both `go generate` and `go build`. ROCm builds should have ROCM_PATH set (and the ROCm SDK present), as well as CLBlast installed (for GGML) and CLBlast_DIR set in the environment to the CLBlast cmake directory (likely /usr/include/cmake/CLBlast). Build tags are also used to switch VRAM detection between the CUDA and ROCm implementations.
It's recommended to also set AMDGPU_TARGETS and GPU_TARGETS when building to ensure card coverage; an example might be `AMDGPU_TARGETS='gfx900;gfx906;gfx1030;gfx1100' GPU_TARGETS='gfx900;gfx906;gfx1030;gfx1100'`
|
{
"login": "65a",
"id": 10104049,
"node_id": "MDQ6VXNlcjEwMTA0MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/10104049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/65a",
"html_url": "https://github.com/65a",
"followers_url": "https://api.github.com/users/65a/followers",
"following_url": "https://api.github.com/users/65a/following{/other_user}",
"gists_url": "https://api.github.com/users/65a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/65a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/65a/subscriptions",
"organizations_url": "https://api.github.com/users/65a/orgs",
"repos_url": "https://api.github.com/users/65a/repos",
"events_url": "https://api.github.com/users/65a/events{/privacy}",
"received_events_url": "https://api.github.com/users/65a/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/667/reactions",
"total_count": 9,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/667/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/338
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/338/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/338/comments
|
https://api.github.com/repos/ollama/ollama/issues/338/events
|
https://github.com/ollama/ollama/issues/338
| 1,848,830,251
|
I_kwDOJ0Z1Ps5uMukr
| 338
|
More reliable model pull
|
{
"login": "bohdyone",
"id": 13161793,
"node_id": "MDQ6VXNlcjEzMTYxNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/13161793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bohdyone",
"html_url": "https://github.com/bohdyone",
"followers_url": "https://api.github.com/users/bohdyone/followers",
"following_url": "https://api.github.com/users/bohdyone/following{/other_user}",
"gists_url": "https://api.github.com/users/bohdyone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bohdyone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bohdyone/subscriptions",
"organizations_url": "https://api.github.com/users/bohdyone/orgs",
"repos_url": "https://api.github.com/users/bohdyone/repos",
"events_url": "https://api.github.com/users/bohdyone/events{/privacy}",
"received_events_url": "https://api.github.com/users/bohdyone/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-08-14T01:13:17
| 2023-08-22T01:06:31
| 2023-08-22T01:06:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi guys,
I'm on macOS 13.4.1 and have been having some trouble downloading the larger models.
I get occasional "unexpected EOF" errors, and sometimes when the model is fully downloaded it is detected as corrupted and must be downloaded again. Some of this seems to be due to system sleep interrupting the download, which gets stuck and must be aborted and then resumed.
Is it possible to improve the reliability and resumability of downloads somewhat?
Some suggestions:
1. Prevent system sleep (where possible) during model download
2. Use some sort of block-based hashing (e.g. a Merkle tree) to detect corrupted blocks when resuming downloads
Thanks.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/338/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/338/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2245
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2245/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2245/comments
|
https://api.github.com/repos/ollama/ollama/issues/2245/events
|
https://github.com/ollama/ollama/pull/2245
| 2,104,391,348
|
PR_kwDOJ0Z1Ps5lRF_U
| 2,245
|
Log prompt when running `ollama serve` with `OLLAMA_DEBUG=1`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-28T23:12:57
| 2024-01-28T23:22:35
| 2024-01-28T23:22:35
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2245",
"html_url": "https://github.com/ollama/ollama/pull/2245",
"diff_url": "https://github.com/ollama/ollama/pull/2245.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2245.patch",
"merged_at": "2024-01-28T23:22:35"
}
|
Fixes https://github.com/ollama/ollama/issues/1533
Fixes https://github.com/ollama/ollama/issues/1118
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2245/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7636
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7636/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7636/comments
|
https://api.github.com/repos/ollama/ollama/issues/7636/events
|
https://github.com/ollama/ollama/issues/7636
| 2,653,413,472
|
I_kwDOJ0Z1Ps6eJ-Bg
| 7,636
|
missing uninstall instructions or script
|
{
"login": "adbenitez",
"id": 24558636,
"node_id": "MDQ6VXNlcjI0NTU4NjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/24558636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adbenitez",
"html_url": "https://github.com/adbenitez",
"followers_url": "https://api.github.com/users/adbenitez/followers",
"following_url": "https://api.github.com/users/adbenitez/following{/other_user}",
"gists_url": "https://api.github.com/users/adbenitez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adbenitez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adbenitez/subscriptions",
"organizations_url": "https://api.github.com/users/adbenitez/orgs",
"repos_url": "https://api.github.com/users/adbenitez/repos",
"events_url": "https://api.github.com/users/adbenitez/events{/privacy}",
"received_events_url": "https://api.github.com/users/adbenitez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-11-12T21:45:41
| 2024-11-12T22:16:59
| 2024-11-12T22:16:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I filled my `/` (root) partition while running the installer. It would be nice if the installer allowed installing as a local user without `sudo`. I cancelled the installation at the `Downloading Linux ROCm amd64 bundle` step because of the disk space issue, and then found there are no uninstall instructions; I had to figure out where the ollama files were put and delete them manually.
|
{
"login": "adbenitez",
"id": 24558636,
"node_id": "MDQ6VXNlcjI0NTU4NjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/24558636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adbenitez",
"html_url": "https://github.com/adbenitez",
"followers_url": "https://api.github.com/users/adbenitez/followers",
"following_url": "https://api.github.com/users/adbenitez/following{/other_user}",
"gists_url": "https://api.github.com/users/adbenitez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adbenitez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adbenitez/subscriptions",
"organizations_url": "https://api.github.com/users/adbenitez/orgs",
"repos_url": "https://api.github.com/users/adbenitez/repos",
"events_url": "https://api.github.com/users/adbenitez/events{/privacy}",
"received_events_url": "https://api.github.com/users/adbenitez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7636/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/3517
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3517/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3517/comments
|
https://api.github.com/repos/ollama/ollama/issues/3517/events
|
https://github.com/ollama/ollama/issues/3517
| 2,229,467,913
|
I_kwDOJ0Z1Ps6E4vsJ
| 3,517
|
MACOS M2 Docker Compose Failing with GPU Selection Step
|
{
"login": "akramIOT",
"id": 21118209,
"node_id": "MDQ6VXNlcjIxMTE4MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/21118209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akramIOT",
"html_url": "https://github.com/akramIOT",
"followers_url": "https://api.github.com/users/akramIOT/followers",
"following_url": "https://api.github.com/users/akramIOT/following{/other_user}",
"gists_url": "https://api.github.com/users/akramIOT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akramIOT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akramIOT/subscriptions",
"organizations_url": "https://api.github.com/users/akramIOT/orgs",
"repos_url": "https://api.github.com/users/akramIOT/repos",
"events_url": "https://api.github.com/users/akramIOT/events{/privacy}",
"received_events_url": "https://api.github.com/users/akramIOT/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-04-07T00:01:29
| 2024-04-15T23:24:35
| 2024-04-15T23:24:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
MACOS M2 Docker Compose Failing with GPU Selection Step
(LLAMA_CPP_ENV) akram_personal@AKRAMs-MacBook-Pro packet_raptor % docker-compose up
Attaching to packet_raptor, ollama-1, ollama-webui-1
Gracefully stopping... (press Ctrl+C again to force)
Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]
(LLAMA_CPP_ENV) akram_personal@AKRAMs-MacBook-Pro packet_raptor %
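This error typically comes from a `deploy.resources.reservations.devices` stanza in the compose file requesting the `nvidia` driver, which Docker Desktop on Apple Silicon cannot satisfy. A hedged sketch of the stanza to remove or guard (service name and image are assumptions, not taken from the reporter's compose file):

```yaml
services:
  ollama:
    image: ollama/ollama
    # On Apple Silicon there is no NVIDIA driver available to Docker;
    # delete this block to start the container (it will run on CPU):
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```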
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
macOS
### Architecture
arm64
### Platform
Docker
### Ollama version
0.1.30
### GPU
Apple
### GPU info
(base) akram_personal@AKRAMs-MacBook-Pro ~ % ioreg -l | grep num_cores
| | | | "GPUConfigurationVariable" = {"num_gps"=8,"gpu_gen"=14,"usc_gen"=2,"num_cores"=20,"num_mgpus"=2,"core_mask_list"=(1023,511),"num_frags"=20}
(base) akram_personal@AKRAMs-MacBook-Pro ~ %
(base) akram_personal@AKRAMs-MacBook-Pro ~ %
(base) akram_personal@AKRAMs-MacBook-Pro ~ % system_profiler SPDisplaysDataType
Graphics/Displays:
Apple M2 Pro:
Chipset Model: Apple M2 Pro
Type: GPU
Bus: Built-In
Total Number of Cores: 19
Vendor: Apple (0x106b)
Metal Support: Metal 3
Displays:
Color LCD:
Display Type: Built-in Liquid Retina XDR Display
Resolution: 3456 x 2234 Retina
Main Display: Yes
Mirror: Off
Online: Yes
Automatically Adjust Brightness: Yes
Connection Type: Internal
VX2757:
Resolution: 1920 x 1080 (1080p FHD - Full High Definition)
UI Looks like: 1920 x 1080 @ 75.00Hz
Mirror: Off
Online: Yes
Rotation: Supported
(base) akram_personal@AKRAMs-MacBook-Pro ~ %
root:xnu-10002.81.5~7/RELEASE_ARM64_T6020 arm64
### CPU
Apple
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3517/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7362
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7362/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7362/comments
|
https://api.github.com/repos/ollama/ollama/issues/7362/events
|
https://github.com/ollama/ollama/issues/7362
| 2,614,931,844
|
I_kwDOJ0Z1Ps6b3LGE
| 7,362
|
Llama3.2-vision image processing not implemented for /generate
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-10-25T19:21:47
| 2024-10-28T23:31:57
| 2024-10-28T20:51:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Reported by @oderwat:
https://github.com/ollama/ollama/issues/6972#issuecomment-2437586368
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.4.0
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7362/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1431
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1431/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1431/comments
|
https://api.github.com/repos/ollama/ollama/issues/1431/events
|
https://github.com/ollama/ollama/issues/1431
| 2,031,934,468
|
I_kwDOJ0Z1Ps55HNwE
| 1,431
|
[WSL 2] Exposing ollama via 0.0.0.0 on local network
|
{
"login": "bocklucas",
"id": 22528729,
"node_id": "MDQ6VXNlcjIyNTI4NzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/22528729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bocklucas",
"html_url": "https://github.com/bocklucas",
"followers_url": "https://api.github.com/users/bocklucas/followers",
"following_url": "https://api.github.com/users/bocklucas/following{/other_user}",
"gists_url": "https://api.github.com/users/bocklucas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bocklucas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bocklucas/subscriptions",
"organizations_url": "https://api.github.com/users/bocklucas/orgs",
"repos_url": "https://api.github.com/users/bocklucas/repos",
"events_url": "https://api.github.com/users/bocklucas/events{/privacy}",
"received_events_url": "https://api.github.com/users/bocklucas/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 18
| 2023-12-08T05:04:09
| 2024-12-19T15:19:36
| 2023-12-12T15:56:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello! Just spent the last 3 or so hours struggling to figure this out and thought I'd leave my solution here to spare the next person who tries this out as well.
Basically, I was trying to run `ollama serve` in WSL 2 (setup was insanely quick and easy) and then access it on my local network.
However, I couldn't reach ollama in WSL 2 from the network: I was able to access it via `127.0.0.1:11434`, but not `0.0.0.0:11434`, despite following the [excellent documentation](https://github.com/jmorganca/ollama/blob/main/docs/faq.md); setting the `OLLAMA_HOST` and `OLLAMA_ORIGINS` environment variables didn't help me either.
After much digging and debugging, I discovered that by default, `WSL 2 has a virtualized ethernet adapter with its own unique IP address.` - [Microsoft Documentation](https://learn.microsoft.com/en-us/windows/wsl/networking)
**NOTE**
Its important to keep in mind that I haven't actually tried this solution myself from scratch, this is my recollection of steps I took over the last several hours to get this to work, anyone encountering the same problem I did please feel free to post what did / didn't work.
My solution to get this working and accessible on my network was as follows:
1. Get the IP of the WSL 2 virtualized ethernet adapter, which you can do by running `ifconfig` in WSL 2 and reading the `inet` address under the `eth0` entry:
```
$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 170.20.138.60
```
in this case, the IP address we'll be using is **170.20.138.60**
2. In `/etc/systemd/system/ollama.service.d/environment.conf`, set `OLLAMA_HOST` to this new IP address, in this example it should look something like this,
`/etc/systemd/system/ollama.service.d/environment.conf`
```
[Service]
Environment="OLLAMA_HOST=170.20.138.60:11434"
Environment="OLLAMA_ORIGINS=*"
```
You'll want to restart your ollama service at this point with
```
sudo systemctl daemon-reload
sudo systemctl restart ollama
```
3. At this point, your ollama service should be bound to your WSL 2 virtualized ethernet adapter. The next step is to create a port proxy so you can talk to the WSL 2 virtual machine over your network: open a PowerShell window in administrator mode. For reference, see this [serverfault thread](https://serverfault.com/questions/1088746/how-to-access-service-running-on-host-from-wsl2-connection-refused)
```
New-NetFireWallRule -DisplayName 'WSL firewall unlock' -Direction Outbound -LocalPort 11434 -Action Allow -Protocol TCP
New-NetFireWallRule -DisplayName 'WSL firewall unlock' -Direction Inbound -LocalPort 11434 -Action Allow -Protocol TCP
```
and with the WSL firewall rules in place you should be able to run the following to make a port proxy
```
netsh interface portproxy add v4tov4 listenport=11434 listenaddress=0.0.0.0 connectport=11434 connectaddress=170.20.138.60
```
and BAM! You should now be able to access the ollama instance on your network!
One caveat I should note, for some weird reason, when I go to `http://0.0.0.0:11434` in my machine's browser that's running ollama, I'm not able to connect to the instance, however if I go to my machine's IP, `http://192.168.1.123:11434` in the browser, I can access it no problem.
Anyway, hope others find this to be helpful 😁
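For anyone verifying step 3 from another machine, here is a tiny cross-platform reachability check (a hypothetical helper, not part of the steps above; substitute whatever host and port you configured):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("192.168.1.123", 11434) once the port proxy is in place
```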
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1431/reactions",
"total_count": 63,
"+1": 39,
"-1": 0,
"laugh": 0,
"hooray": 5,
"confused": 5,
"heart": 14,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1431/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2758
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2758/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2758/comments
|
https://api.github.com/repos/ollama/ollama/issues/2758/events
|
https://github.com/ollama/ollama/issues/2758
| 2,153,312,111
|
I_kwDOJ0Z1Ps6AWO9v
| 2,758
|
Switching back and forth between models will gradually reduce the available GPU memory.
|
{
"login": "mofanke",
"id": 54242816,
"node_id": "MDQ6VXNlcjU0MjQyODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/54242816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mofanke",
"html_url": "https://github.com/mofanke",
"followers_url": "https://api.github.com/users/mofanke/followers",
"following_url": "https://api.github.com/users/mofanke/following{/other_user}",
"gists_url": "https://api.github.com/users/mofanke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mofanke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mofanke/subscriptions",
"organizations_url": "https://api.github.com/users/mofanke/orgs",
"repos_url": "https://api.github.com/users/mofanke/repos",
"events_url": "https://api.github.com/users/mofanke/events{/privacy}",
"received_events_url": "https://api.github.com/users/mofanke/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-02-26T05:51:15
| 2024-02-27T19:29:54
| 2024-02-27T19:29:54
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Operating System: Windows
GPU: NVIDIA with 6GB memory
Description:
While switching between Mistral 7B and Codellama 7B, I noticed a decrease in the GPU memory available for offloaded layers. Upon investigation, I captured the following debug log:
```plaintext
time=2024-02-26T11:18:53.800+08:00 level=DEBUG source=gpu.go:251 msg="cuda detected 1 devices with 3302M available memory"
time=2024-02-26T11:20:07.502+08:00 level=DEBUG source=gpu.go:251 msg="cuda detected 1 devices with 2917M available memory"
time=2024-02-26T11:21:27.453+08:00 level=DEBUG source=gpu.go:251 msg="cuda detected 1 devices with 2916M available memory"
time=2024-02-26T11:22:45.617+08:00 level=DEBUG source=gpu.go:251 msg="cuda detected 1 devices with 2481M available memory"
time=2024-02-26T11:23:22.500+08:00 level=DEBUG source=gpu.go:251 msg="cuda detected 1 devices with 2481M available memory"
```
This log indicates a consistent decrease in available GPU memory. Upon restarting the Ollama server, the available GPU memory returned to 3199MB.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2758/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4562
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4562/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4562/comments
|
https://api.github.com/repos/ollama/ollama/issues/4562/events
|
https://github.com/ollama/ollama/issues/4562
| 2,308,723,955
|
I_kwDOJ0Z1Ps6JnFTz
| 4,562
|
Where can I see the full list of embedding models supported by ollama?
|
{
"login": "heiheiheibj",
"id": 6910198,
"node_id": "MDQ6VXNlcjY5MTAxOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6910198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heiheiheibj",
"html_url": "https://github.com/heiheiheibj",
"followers_url": "https://api.github.com/users/heiheiheibj/followers",
"following_url": "https://api.github.com/users/heiheiheibj/following{/other_user}",
"gists_url": "https://api.github.com/users/heiheiheibj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heiheiheibj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heiheiheibj/subscriptions",
"organizations_url": "https://api.github.com/users/heiheiheibj/orgs",
"repos_url": "https://api.github.com/users/heiheiheibj/repos",
"events_url": "https://api.github.com/users/heiheiheibj/events{/privacy}",
"received_events_url": "https://api.github.com/users/heiheiheibj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-05-21T16:56:47
| 2024-05-21T17:12:30
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
thx
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4562/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5704
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5704/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5704/comments
|
https://api.github.com/repos/ollama/ollama/issues/5704/events
|
https://github.com/ollama/ollama/pull/5704
| 2,408,997,782
|
PR_kwDOJ0Z1Ps51ZqL9
| 5,704
|
Add TensorSplit option to runners and API
|
{
"login": "NormalFishDev",
"id": 174545571,
"node_id": "U_kgDOCmdaow",
"avatar_url": "https://avatars.githubusercontent.com/u/174545571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NormalFishDev",
"html_url": "https://github.com/NormalFishDev",
"followers_url": "https://api.github.com/users/NormalFishDev/followers",
"following_url": "https://api.github.com/users/NormalFishDev/following{/other_user}",
"gists_url": "https://api.github.com/users/NormalFishDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NormalFishDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NormalFishDev/subscriptions",
"organizations_url": "https://api.github.com/users/NormalFishDev/orgs",
"repos_url": "https://api.github.com/users/NormalFishDev/repos",
"events_url": "https://api.github.com/users/NormalFishDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/NormalFishDev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-07-15T15:16:27
| 2024-07-15T15:16:27
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5704",
"html_url": "https://github.com/ollama/ollama/pull/5704",
"diff_url": "https://github.com/ollama/ollama/pull/5704.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5704.patch",
"merged_at": null
}
|
This pull request adds non-breaking functionality to Ollama's `NewLlamaServer` function and a `TensorSplit` field to the `Runner` struct in `api/types.go`.
- Adds an option to pass `tensor_split` in the "options" object of the generate API to manually define how tensors are split across GPUs by llama.cpp.
- Adds a conditional that checks for a manual `tensor_split` value in the Runner options and sets the `tensor_split` parameter accordingly; it defaults to `estimate.TensorSplit` when no manual value is passed, so existing applications behave exactly as before.
- Adds `TensorSplit` to the `Runner` struct in `api/types.go` for accessing the `tensor_split` value.
I added this functionality because the team I work with runs Ollama on a server with 8 GPUs. We have run into crashes caused by the automatic tensor splitting producing unbalanced splits between the GPUs (the splitting does not account for buffer VRAM usage after the model is loaded). Being able to specify the tensor split manually has made it much easier to deploy Ollama on that server.
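With this change, a client could pin the split per request. A minimal sketch of such a request body follows; note that the `tensor_split` option name comes from this PR's description and is not a field of released Ollama versions, and the even 8-way split is an illustrative assumption:

```python
import json

# Sketch of a /api/generate request body setting a manual per-GPU split,
# as proposed in this PR. "tensor_split" is the option this PR proposes,
# not a guaranteed field of released Ollama versions; the values are
# relative fractions of the model assigned to each GPU.
payload = {
    "model": "llama2:70b",
    "prompt": "Hello",
    "options": {
        "tensor_split": [1, 1, 1, 1, 1, 1, 1, 1],  # even split across 8 GPUs
    },
}
body = json.dumps(payload)
print(body[:40])
```

Sent to `POST /api/generate`, this would override the automatic estimate, while omitting the option would fall back to `estimate.TensorSplit` as before.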
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5704/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1019
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1019/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1019/comments
|
https://api.github.com/repos/ollama/ollama/issues/1019/events
|
https://github.com/ollama/ollama/issues/1019
| 1,979,682,840
|
I_kwDOJ0Z1Ps51_5AY
| 1,019
|
Error: llama runner exited
|
{
"login": "krenax",
"id": 127540387,
"node_id": "U_kgDOB5ocow",
"avatar_url": "https://avatars.githubusercontent.com/u/127540387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krenax",
"html_url": "https://github.com/krenax",
"followers_url": "https://api.github.com/users/krenax/followers",
"following_url": "https://api.github.com/users/krenax/following{/other_user}",
"gists_url": "https://api.github.com/users/krenax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krenax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krenax/subscriptions",
"organizations_url": "https://api.github.com/users/krenax/orgs",
"repos_url": "https://api.github.com/users/krenax/repos",
"events_url": "https://api.github.com/users/krenax/events{/privacy}",
"received_events_url": "https://api.github.com/users/krenax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 11
| 2023-11-06T17:32:51
| 2024-01-08T17:28:36
| 2023-11-23T14:53:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Using mistral and llama2 with ollama, I received the following error message: `Error: llama runner exited, you may not have enough available memory to run this model?`.
The `README.md` states that at least 16GB of RAM is required to run 7B models, which is met by my workstation specifications.
|
{
"login": "krenax",
"id": 127540387,
"node_id": "U_kgDOB5ocow",
"avatar_url": "https://avatars.githubusercontent.com/u/127540387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krenax",
"html_url": "https://github.com/krenax",
"followers_url": "https://api.github.com/users/krenax/followers",
"following_url": "https://api.github.com/users/krenax/following{/other_user}",
"gists_url": "https://api.github.com/users/krenax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krenax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krenax/subscriptions",
"organizations_url": "https://api.github.com/users/krenax/orgs",
"repos_url": "https://api.github.com/users/krenax/repos",
"events_url": "https://api.github.com/users/krenax/events{/privacy}",
"received_events_url": "https://api.github.com/users/krenax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1019/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6518
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6518/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6518/comments
|
https://api.github.com/repos/ollama/ollama/issues/6518/events
|
https://github.com/ollama/ollama/issues/6518
| 2,487,164,819
|
I_kwDOJ0Z1Ps6UPx-T
| 6,518
|
Unable to run on tcp4/ipv4 on Lambda Labs instance
|
{
"login": "bayadyne",
"id": 179503668,
"node_id": "U_kgDOCrMCNA",
"avatar_url": "https://avatars.githubusercontent.com/u/179503668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bayadyne",
"html_url": "https://github.com/bayadyne",
"followers_url": "https://api.github.com/users/bayadyne/followers",
"following_url": "https://api.github.com/users/bayadyne/following{/other_user}",
"gists_url": "https://api.github.com/users/bayadyne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bayadyne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayadyne/subscriptions",
"organizations_url": "https://api.github.com/users/bayadyne/orgs",
"repos_url": "https://api.github.com/users/bayadyne/repos",
"events_url": "https://api.github.com/users/bayadyne/events{/privacy}",
"received_events_url": "https://api.github.com/users/bayadyne/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-08-26T15:40:29
| 2024-12-02T21:56:12
| 2024-12-02T21:56:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I set `OLLAMA_HOST=0.0.0.0:8080` (and tried other ports as well), but when I checked, the service was listening only on tcp6, which I'm currently unable to use.
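As a point of reference (independent of Ollama's actual listener code), the literal address `0.0.0.0` resolves to IPv4 only, whereas binding a hostname or the IPv6 wildcard `::` can land on an IPv6-only socket on some systems. A minimal Python sketch of the resolution behavior:

```python
import socket

# "0.0.0.0" is an IPv4 literal, so getaddrinfo yields AF_INET entries only.
# A name like "localhost" (or "::") may resolve to AF_INET6 first on
# dual-stack systems, which is one way a server ends up listening only
# on tcp6.
infos = socket.getaddrinfo("0.0.0.0", 8080, type=socket.SOCK_STREAM)
families = {info[0] for info in infos}
print(socket.AF_INET in families)
```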
### OS
Linux
### GPU
Nvidia
### CPU
Intel, AMD
### Ollama version
0.3.6
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6518/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/335
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/335/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/335/comments
|
https://api.github.com/repos/ollama/ollama/issues/335/events
|
https://github.com/ollama/ollama/issues/335
| 1,847,319,467
|
I_kwDOJ0Z1Ps5uG9ur
| 335
|
Model import/export
|
{
"login": "mikeroySoft",
"id": 1791194,
"node_id": "MDQ6VXNlcjE3OTExOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1791194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikeroySoft",
"html_url": "https://github.com/mikeroySoft",
"followers_url": "https://api.github.com/users/mikeroySoft/followers",
"following_url": "https://api.github.com/users/mikeroySoft/following{/other_user}",
"gists_url": "https://api.github.com/users/mikeroySoft/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikeroySoft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikeroySoft/subscriptions",
"organizations_url": "https://api.github.com/users/mikeroySoft/orgs",
"repos_url": "https://api.github.com/users/mikeroySoft/repos",
"events_url": "https://api.github.com/users/mikeroySoft/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikeroySoft/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 25
| 2023-08-11T19:26:46
| 2024-10-24T15:21:41
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When using large models like Llama2:70b, the download files are quite big.
As a user with multiple local systems, having to `ollama pull` on every device costs that much more bandwidth and time.
It would be great if we could download the model once and then export/import it to other ollama clients in the office without pulling it from the internet.
Example:
On the first device, we would do:
`ollama pull llama2:70b`
`ollama export llama2:70b /Volumes/MyUSB/llama2_70b-local.ollama_model`
Then we would take MyUSB over to another device and do:
`ollama import /Volumes/MyUSB/llama2_70b-local.ollama_model`
`ollama run llama2:local-70b` or `ollama run llama2-local:70b` or even just `ollama run llama2_70b-local`
I'm obviously not sure about the naming structure here, but I hope I've conveyed the problem and thought process.
Thanks for the fantastic project!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/335/reactions",
"total_count": 18,
"+1": 18,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/335/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/419
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/419/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/419/comments
|
https://api.github.com/repos/ollama/ollama/issues/419/events
|
https://github.com/ollama/ollama/issues/419
| 1,868,085,325
|
I_kwDOJ0Z1Ps5vWLhN
| 419
|
Allow model files to be located in a different location than ~/.ollama?
|
{
"login": "vegabook",
"id": 3780883,
"node_id": "MDQ6VXNlcjM3ODA4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3780883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegabook",
"html_url": "https://github.com/vegabook",
"followers_url": "https://api.github.com/users/vegabook/followers",
"following_url": "https://api.github.com/users/vegabook/following{/other_user}",
"gists_url": "https://api.github.com/users/vegabook/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegabook/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegabook/subscriptions",
"organizations_url": "https://api.github.com/users/vegabook/orgs",
"repos_url": "https://api.github.com/users/vegabook/repos",
"events_url": "https://api.github.com/users/vegabook/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegabook/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-08-26T12:44:56
| 2023-08-30T16:52:55
| 2023-08-30T16:52:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
On my M2 mac, Ollama stores pulled models in `~/.ollama/models` and its security keys in `~/.ollama`. Is it possible to specify an alternative directory? My interest is in compartmentalizing ollama as much as possible into a single directory (happen to be using nix where [ollama is available in the unstable channel](https://search.nixos.org/packages?channel=unstable&from=0&size=50&sort=relevance&type=packages&query=ollama), but that's an aside).
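Relatedly, later Ollama releases let you relocate the model store via the `OLLAMA_MODELS` environment variable (check the documentation for your version). A minimal sketch of preparing such a directory before launching `ollama serve`; the path used here is an illustrative assumption:

```python
import os
import pathlib

# Sketch: prepare an alternative model directory and point Ollama at it
# via OLLAMA_MODELS (supported in later releases; verify against the
# docs for your version). A subsequent `ollama serve` launched from this
# environment would then use it.
models_dir = pathlib.Path.home() / "alt-ollama" / "models"
models_dir.mkdir(parents=True, exist_ok=True)
os.environ["OLLAMA_MODELS"] = str(models_dir)
print(os.environ["OLLAMA_MODELS"])
```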
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/419/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/419/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6253
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6253/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6253/comments
|
https://api.github.com/repos/ollama/ollama/issues/6253/events
|
https://github.com/ollama/ollama/issues/6253
| 2,454,875,827
|
I_kwDOJ0Z1Ps6SUm6z
| 6,253
|
When systemMessage exceeds a certain length, ollama is unable to process it.
|
{
"login": "billrenhero",
"id": 46013777,
"node_id": "MDQ6VXNlcjQ2MDEzNzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/46013777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/billrenhero",
"html_url": "https://github.com/billrenhero",
"followers_url": "https://api.github.com/users/billrenhero/followers",
"following_url": "https://api.github.com/users/billrenhero/following{/other_user}",
"gists_url": "https://api.github.com/users/billrenhero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/billrenhero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/billrenhero/subscriptions",
"organizations_url": "https://api.github.com/users/billrenhero/orgs",
"repos_url": "https://api.github.com/users/billrenhero/repos",
"events_url": "https://api.github.com/users/billrenhero/events{/privacy}",
"received_events_url": "https://api.github.com/users/billrenhero/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-08-08T05:02:55
| 2024-09-02T23:14:38
| 2024-09-02T23:14:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
<img width="1250" alt="截屏2024-08-08 11 28 37" src="https://github.com/user-attachments/assets/49e8c26e-ef09-4f4a-b06b-7e24801c2f69">
When the system message exceeds a certain length (likely 4096 tokens), Ollama returns: "It seems like you're sharing some information, but it's not in a readable format. Could you please rephrase or provide more context about what this is related to? I'd be happy to help if I can!"
It works fine with version 0.1.47.
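One plausible factor (an assumption, not confirmed in this report) is prompt truncation at the default context window: if the rendered prompt exceeds `num_ctx`, the model only sees a clipped prompt and responds as if given garbage. `num_ctx` can be raised per request; a minimal sketch, where the model name and the 8192 value are illustrative assumptions:

```python
import json

# Sketch of a /api/chat request that raises the context window so a long
# system message is not silently truncated. num_ctx is a standard Ollama
# option; the model name and 8192 value are illustrative assumptions.
payload = {
    "model": "llama3",
    "messages": [
        {"role": "system", "content": "a very long system prompt ..."},
        {"role": "user", "content": "hello"},
    ],
    "options": {"num_ctx": 8192},
}
print(json.dumps(payload)[:40])
```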
### OS
Linux, macOS
### GPU
Nvidia, Apple
### CPU
AMD, Apple
### Ollama version
0.3.4
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6253/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/438
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/438/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/438/comments
|
https://api.github.com/repos/ollama/ollama/issues/438/events
|
https://github.com/ollama/ollama/issues/438
| 1,870,669,000
|
I_kwDOJ0Z1Ps5vgCTI
| 438
|
Document Wolfi package?
|
{
"login": "dlorenc",
"id": 1714486,
"node_id": "MDQ6VXNlcjE3MTQ0ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1714486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dlorenc",
"html_url": "https://github.com/dlorenc",
"followers_url": "https://api.github.com/users/dlorenc/followers",
"following_url": "https://api.github.com/users/dlorenc/following{/other_user}",
"gists_url": "https://api.github.com/users/dlorenc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dlorenc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dlorenc/subscriptions",
"organizations_url": "https://api.github.com/users/dlorenc/orgs",
"repos_url": "https://api.github.com/users/dlorenc/repos",
"events_url": "https://api.github.com/users/dlorenc/events{/privacy}",
"received_events_url": "https://api.github.com/users/dlorenc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-08-29T00:00:25
| 2023-08-30T20:56:56
| 2023-08-30T20:56:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey!
I noticed it says there are no official downloads for Linux yet - would you be open to documenting the official Wolfi Linux package? You can see how it's packaged here: https://github.com/wolfi-dev/os/blob/main/ollama.yaml
You can install it with:
```
docker run -it cgr.dev/chainguard/wolfi-base sh
apk add ollama
```
We'll probably ship an image based on this as well, and add CUDA drivers soon.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/438/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2118
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2118/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2118/comments
|
https://api.github.com/repos/ollama/ollama/issues/2118/events
|
https://github.com/ollama/ollama/issues/2118
| 2,092,399,782
|
I_kwDOJ0Z1Ps58t3ym
| 2,118
|
Unable to Download Models Due to Malformed Manifests
|
{
"login": "SpiralCut",
"id": 21312296,
"node_id": "MDQ6VXNlcjIxMzEyMjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/21312296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SpiralCut",
"html_url": "https://github.com/SpiralCut",
"followers_url": "https://api.github.com/users/SpiralCut/followers",
"following_url": "https://api.github.com/users/SpiralCut/following{/other_user}",
"gists_url": "https://api.github.com/users/SpiralCut/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SpiralCut/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SpiralCut/subscriptions",
"organizations_url": "https://api.github.com/users/SpiralCut/orgs",
"repos_url": "https://api.github.com/users/SpiralCut/repos",
"events_url": "https://api.github.com/users/SpiralCut/events{/privacy}",
"received_events_url": "https://api.github.com/users/SpiralCut/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-21T03:10:26
| 2024-01-22T02:14:59
| 2024-01-22T02:14:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm running Ollama 0.1.20 in WSL2/Ubuntu. In the past I was able to download new models without issue, but now every attempt fails with an error message similar to the following and I am prevented from downloading:
```
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/codellama/manifests/latest": malformed HTTP response "\x00\x00\x1e\x04\x00\x00\x00\x00\x00\x00\x05\x00\x10\x00\x00\x00\x03\x00\x00\x00\xfa\x00\x06\x00\x10\x01@\x00\x01\x00\x00\x10\x00\x00\x04\x00\x10\x00\x00"
```
I tried deleting Ollama and reinstalling, but the issue persists. (I'm not sure if this is the right URL, but opening https://registry.ollama.ai/v2/library/codellama/manifests/latest in my browser also gives me a MANIFEST_INVALID error.)
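Incidentally, the leading bytes in that error look like a raw HTTP/2 frame header rather than an HTTP/1.x response, which usually points at a proxy or middlebox answering in the wrong protocol. A small decode of the reported bytes (illustrative only — this is standard HTTP/2 framing, not Ollama code):

```python
# Decode the first 9 bytes of the reported response as an HTTP/2 frame
# header (RFC 9113: 24-bit length, 8-bit type, 8-bit flags, 31-bit stream id).
data = bytes.fromhex("00001e040000000000")

length = int.from_bytes(data[0:3], "big")            # 24-bit payload length
frame_type = data[3]                                 # 0x04 = SETTINGS
flags = data[4]
stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF

print(length, frame_type, flags, stream_id)          # 30 4 0 0
```

A SETTINGS frame on stream 0 is the first thing an HTTP/2 server sends, so the client was likely handed cleartext HTTP/2 when it expected HTTP/1.1 — worth checking any proxy sitting between WSL2 and the registry.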
|
{
"login": "SpiralCut",
"id": 21312296,
"node_id": "MDQ6VXNlcjIxMzEyMjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/21312296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SpiralCut",
"html_url": "https://github.com/SpiralCut",
"followers_url": "https://api.github.com/users/SpiralCut/followers",
"following_url": "https://api.github.com/users/SpiralCut/following{/other_user}",
"gists_url": "https://api.github.com/users/SpiralCut/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SpiralCut/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SpiralCut/subscriptions",
"organizations_url": "https://api.github.com/users/SpiralCut/orgs",
"repos_url": "https://api.github.com/users/SpiralCut/repos",
"events_url": "https://api.github.com/users/SpiralCut/events{/privacy}",
"received_events_url": "https://api.github.com/users/SpiralCut/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2118/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8460
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8460/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8460/comments
|
https://api.github.com/repos/ollama/ollama/issues/8460/events
|
https://github.com/ollama/ollama/issues/8460
| 2,793,789,463
|
I_kwDOJ0Z1Ps6mhdgX
| 8,460
|
Llama-3_1-Nemotron-51B-Instruct
|
{
"login": "Tanote650",
"id": 60698483,
"node_id": "MDQ6VXNlcjYwNjk4NDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/60698483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tanote650",
"html_url": "https://github.com/Tanote650",
"followers_url": "https://api.github.com/users/Tanote650/followers",
"following_url": "https://api.github.com/users/Tanote650/following{/other_user}",
"gists_url": "https://api.github.com/users/Tanote650/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tanote650/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tanote650/subscriptions",
"organizations_url": "https://api.github.com/users/Tanote650/orgs",
"repos_url": "https://api.github.com/users/Tanote650/repos",
"events_url": "https://api.github.com/users/Tanote650/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tanote650/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-16T21:21:32
| 2025-01-25T09:12:05
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please add this Nvidia model.
https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8460/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8607
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8607/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8607/comments
|
https://api.github.com/repos/ollama/ollama/issues/8607/events
|
https://github.com/ollama/ollama/issues/8607
| 2,812,661,670
|
I_kwDOJ0Z1Ps6npc-m
| 8,607
|
Add an ability to inject env variables to modelfile system message.
|
{
"login": "BotVasya",
"id": 10455417,
"node_id": "MDQ6VXNlcjEwNDU1NDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/10455417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BotVasya",
"html_url": "https://github.com/BotVasya",
"followers_url": "https://api.github.com/users/BotVasya/followers",
"following_url": "https://api.github.com/users/BotVasya/following{/other_user}",
"gists_url": "https://api.github.com/users/BotVasya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BotVasya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BotVasya/subscriptions",
"organizations_url": "https://api.github.com/users/BotVasya/orgs",
"repos_url": "https://api.github.com/users/BotVasya/repos",
"events_url": "https://api.github.com/users/BotVasya/events{/privacy}",
"received_events_url": "https://api.github.com/users/BotVasya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-27T10:43:33
| 2025-01-27T10:43:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi all.
I've realized that there is no way for Ollama models to know the current date and time when running on MS Windows. It would be useful if OS environment variables could be used in the Modelfile; for date and time specifically, it would be even better if the model could obtain that data dynamically, exactly when it is needed.
Use case: the model could answer what date and time it is.
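Until something like this exists in the Modelfile, a client-side workaround is to inject the timestamp into the system message on every request — a minimal sketch against the `/api/chat` endpoint (the model name here is just a placeholder):

```python
import json
from datetime import datetime

def chat_payload(user_prompt: str, model: str = "llama3") -> dict:
    """Build an /api/chat request whose system message carries the current time."""
    now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": f"The current date and time is {now}."},
            {"role": "user", "content": user_prompt},
        ],
        "stream": False,
    }

# The payload would then be POSTed to http://localhost:11434/api/chat.
print(json.dumps(chat_payload("What time is it?"), indent=2))
```

Because the timestamp is computed per request, the model always sees fresh data without any Modelfile changes.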
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8607/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4884
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4884/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4884/comments
|
https://api.github.com/repos/ollama/ollama/issues/4884/events
|
https://github.com/ollama/ollama/issues/4884
| 2,339,277,556
|
I_kwDOJ0Z1Ps6Lbor0
| 4,884
|
No proper response when IPEX-LLM setup with Ollama for intel cpu/gpu
|
{
"login": "filip-777",
"id": 44314861,
"node_id": "MDQ6VXNlcjQ0MzE0ODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/44314861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/filip-777",
"html_url": "https://github.com/filip-777",
"followers_url": "https://api.github.com/users/filip-777/followers",
"following_url": "https://api.github.com/users/filip-777/following{/other_user}",
"gists_url": "https://api.github.com/users/filip-777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/filip-777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/filip-777/subscriptions",
"organizations_url": "https://api.github.com/users/filip-777/orgs",
"repos_url": "https://api.github.com/users/filip-777/repos",
"events_url": "https://api.github.com/users/filip-777/events{/privacy}",
"received_events_url": "https://api.github.com/users/filip-777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2024-06-06T23:04:51
| 2024-10-07T03:06:12
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After setting up IPEX-LLM to work with Ollama, I see that the output is wrong.
Example:
```
❯ ./ollama run phi3
>>> hi
<s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s>
```
Maybe I set something up wrong...
When I serve Ollama, I get the following logs:
```
❯ ./ollama serve
2024/06/07 01:00:48 routes.go:1028: INFO server config env="map[OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-06-07T01:00:48.471+02:00 level=INFO source=images.go:729 msg="total blobs: 38"
time=2024-06-07T01:00:48.471+02:00 level=INFO source=images.go:736 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullModelHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateModelHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushModelHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyModelHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteModelHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).ProcessHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-06-07T01:00:48.471+02:00 level=INFO source=routes.go:1074 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-06-07T01:00:48.472+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama607982765/runners
time=2024-06-07T01:00:48.472+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
time=2024-06-07T01:00:48.472+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz
time=2024-06-07T01:00:48.472+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz
time=2024-06-07T01:00:48.523+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama607982765/runners/cpu
time=2024-06-07T01:00:48.523+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama607982765/runners/cpu_avx
time=2024-06-07T01:00:48.523+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama607982765/runners/cpu_avx2
time=2024-06-07T01:00:48.523+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2]"
time=2024-06-07T01:00:48.523+02:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-06-07T01:00:48.523+02:00 level=DEBUG source=sched.go:90 msg="starting llm scheduler"
time=2024-06-07T01:00:48.523+02:00 level=DEBUG source=gpu.go:122 msg="Detecting GPUs"
time=2024-06-07T01:00:48.523+02:00 level=DEBUG source=gpu.go:261 msg="Searching for GPU library" name=libcuda.so*
time=2024-06-07T01:00:48.523+02:00 level=DEBUG source=gpu.go:280 msg="gpu library search" globs="[/opt/intel/oneapi/mkl/2024.0/lib/libcuda.so** /opt/intel/oneapi/compiler/2024.0/lib/libcuda.so** /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-06-07T01:00:50.186+02:00 level=DEBUG source=gpu.go:313 msg="discovered GPU libraries" paths=[]
time=2024-06-07T01:00:50.186+02:00 level=DEBUG source=gpu.go:261 msg="Searching for GPU library" name=libcudart.so*
time=2024-06-07T01:00:50.186+02:00 level=DEBUG source=gpu.go:280 msg="gpu library search" globs="[/opt/intel/oneapi/mkl/2024.0/lib/libcudart.so** /opt/intel/oneapi/compiler/2024.0/lib/libcudart.so** /tmp/ollama607982765/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2024-06-07T01:00:51.690+02:00 level=DEBUG source=gpu.go:313 msg="discovered GPU libraries" paths=[/usr/local/cuda/lib64/libcudart.so.12.5.39]
cudaSetDevice err: 35
time=2024-06-07T01:00:51.690+02:00 level=DEBUG source=gpu.go:325 msg="Unable to load cudart" library=/usr/local/cuda/lib64/libcudart.so.12.5.39 error="your nvidia driver is too old or missing. If you have a CUDA GPU please upgrade to run ollama"
time=2024-06-07T01:00:51.690+02:00 level=DEBUG source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-06-07T01:00:51.690+02:00 level=DEBUG source=amd_linux.go:322 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2024-06-07T01:00:51.690+02:00 level=INFO source=types.go:71 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="7.6 GiB" available="3.6 GiB"
```
### OS
WSL2
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.39
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4884/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3351
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3351/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3351/comments
|
https://api.github.com/repos/ollama/ollama/issues/3351/events
|
https://github.com/ollama/ollama/pull/3351
| 2,206,799,914
|
PR_kwDOJ0Z1Ps5qtsps
| 3,351
|
Add license in file header for vendored llama.cpp code
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-25T22:02:20
| 2024-03-26T20:23:23
| 2024-03-26T20:23:23
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3351",
"html_url": "https://github.com/ollama/ollama/pull/3351",
"diff_url": "https://github.com/ollama/ollama/pull/3351.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3351.patch",
"merged_at": "2024-03-26T20:23:23"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3351/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7892
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7892/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7892/comments
|
https://api.github.com/repos/ollama/ollama/issues/7892/events
|
https://github.com/ollama/ollama/issues/7892
| 2,707,110,889
|
I_kwDOJ0Z1Ps6hWzvp
| 7,892
|
After the deployment of ollama, it can only be accessed through 127.0.0.1 and cannot be accessed through IP
|
{
"login": "2277509846",
"id": 52586868,
"node_id": "MDQ6VXNlcjUyNTg2ODY4",
"avatar_url": "https://avatars.githubusercontent.com/u/52586868?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/2277509846",
"html_url": "https://github.com/2277509846",
"followers_url": "https://api.github.com/users/2277509846/followers",
"following_url": "https://api.github.com/users/2277509846/following{/other_user}",
"gists_url": "https://api.github.com/users/2277509846/gists{/gist_id}",
"starred_url": "https://api.github.com/users/2277509846/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2277509846/subscriptions",
"organizations_url": "https://api.github.com/users/2277509846/orgs",
"repos_url": "https://api.github.com/users/2277509846/repos",
"events_url": "https://api.github.com/users/2277509846/events{/privacy}",
"received_events_url": "https://api.github.com/users/2277509846/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 15
| 2024-11-30T10:06:33
| 2024-11-30T13:14:47
| 2024-11-30T11:45:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Version: 0.4.6
OS: Ubuntu

Download and install:
```
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.4.6 sh
```
Edit the service file:
```
sudo vim /etc/systemd/system/ollama.service
```
```
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment=“OLLAMA_HOST=0.0.0.0”

[Install]
WantedBy=default.target
```
When I access Ollama with `curl http://127.0.0.1:11434` it works, but it cannot be reached with `curl http://[ip]:11434`.
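One thing worth checking: the `OLLAMA_HOST` line in the unit file above uses typographic quotes (`“…”`), which systemd does not treat as quoting, so the variable is likely never set. For comparison, the approach documented in Ollama's FAQ is a drop-in override with plain ASCII double quotes (a config fragment, not something I have verified on this exact setup):

```
# /etc/systemd/system/ollama.service.d/override.conf
# Created via: sudo systemctl edit ollama.service
# Then apply:  sudo systemctl daemon-reload && sudo systemctl restart ollama
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

After restarting, `ss -tlnp | grep 11434` should show the listener bound to `0.0.0.0:11434` (or `*:11434`) instead of `127.0.0.1:11434`.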
|
{
"login": "2277509846",
"id": 52586868,
"node_id": "MDQ6VXNlcjUyNTg2ODY4",
"avatar_url": "https://avatars.githubusercontent.com/u/52586868?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/2277509846",
"html_url": "https://github.com/2277509846",
"followers_url": "https://api.github.com/users/2277509846/followers",
"following_url": "https://api.github.com/users/2277509846/following{/other_user}",
"gists_url": "https://api.github.com/users/2277509846/gists{/gist_id}",
"starred_url": "https://api.github.com/users/2277509846/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2277509846/subscriptions",
"organizations_url": "https://api.github.com/users/2277509846/orgs",
"repos_url": "https://api.github.com/users/2277509846/repos",
"events_url": "https://api.github.com/users/2277509846/events{/privacy}",
"received_events_url": "https://api.github.com/users/2277509846/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7892/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/906
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/906/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/906/comments
|
https://api.github.com/repos/ollama/ollama/issues/906/events
|
https://github.com/ollama/ollama/pull/906
| 1,962,163,935
|
PR_kwDOJ0Z1Ps5dyzrb
| 906
|
Documenting OpenAI compatibility (and other docs tweaks)
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-25T20:18:41
| 2023-10-27T07:29:21
| 2023-10-27T07:10:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/906",
"html_url": "https://github.com/ollama/ollama/pull/906",
"diff_url": "https://github.com/ollama/ollama/pull/906.diff",
"patch_url": "https://github.com/ollama/ollama/pull/906.patch",
"merged_at": "2023-10-27T07:10:23"
}
|
Modernization of https://github.com/jmorganca/ollama/pull/661
- ~Closes https://github.com/jmorganca/ollama/issues/538~
- Upstreams more knowledge from https://github.com/jmorganca/ollama/issues/546
- Simplifies `brew install` to one line
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/906/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2706
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2706/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2706/comments
|
https://api.github.com/repos/ollama/ollama/issues/2706/events
|
https://github.com/ollama/ollama/issues/2706
| 2,150,702,091
|
I_kwDOJ0Z1Ps6AMRwL
| 2,706
|
CUDA error: out of memory with llava:7b-v1.6 when providing an image
|
{
"login": "lucaboulard",
"id": 25926274,
"node_id": "MDQ6VXNlcjI1OTI2Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/25926274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucaboulard",
"html_url": "https://github.com/lucaboulard",
"followers_url": "https://api.github.com/users/lucaboulard/followers",
"following_url": "https://api.github.com/users/lucaboulard/following{/other_user}",
"gists_url": "https://api.github.com/users/lucaboulard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucaboulard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucaboulard/subscriptions",
"organizations_url": "https://api.github.com/users/lucaboulard/orgs",
"repos_url": "https://api.github.com/users/lucaboulard/repos",
"events_url": "https://api.github.com/users/lucaboulard/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucaboulard/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-02-23T09:29:53
| 2024-06-01T20:37:51
| 2024-06-01T20:37:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I'm using Ollama 0.1.26 to run llava:7b-v1.6 on WSL on Windows (Ubuntu 22.04.3 LTS).
It works just fine as long as I use textual prompts, but as soon as I go multimodal and pass an image as well, Ollama crashes with this message:
```
time=2024-02-23T09:49:45.496+01:00 level=INFO source=dyn_ext_server.go:171 msg="loaded 1 images"
encode_image_with_clip: image embedding created: 576 tokens
encode_image_with_clip: image encoded in 1236.17 ms by CLIP ( 2.15 ms per image patch)
CUDA error: out of memory
current device: 0, in function ggml_cuda_pool_malloc_vmm at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:7991
cuMemSetAccess(g_cuda_pool_addr[device] + g_cuda_pool_size[device], reserve_size, &access, 1)
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:244: !"CUDA error"
Aborted
```
My laptop has a NVIDIA GeForce MX150.
Can anybody help me understand what is going wrong and how I can fix it?
Is it necessary to install NVIDIA-related libraries, or should installing ollama alone suffice?
When I start ollama and run llava, these are the logs:
```
time=2024-02-23T10:01:02.806+01:00 level=INFO source=images.go:710 msg="total blobs: 6"
time=2024-02-23T10:01:02.806+01:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
time=2024-02-23T10:01:02.807+01:00 level=INFO source=routes.go:1019 msg="Listening on 127.0.0.1:11434 (version 0.1.26)"
time=2024-02-23T10:01:02.807+01:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-23T10:01:05.827+01:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cuda_v11 rocm_v6 cpu cpu_avx2 rocm_v5 cpu_avx]"
time=2024-02-23T10:01:05.827+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-23T10:01:05.827+01:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-23T10:01:07.787+01:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/lib/wsl/lib/libnvidia-ml.so.1 /usr/lib/wsl/drivers/nvam.inf_amd64_73ddbc5a9852db46/libnvidia-ml.so.1]"
time=2024-02-23T10:01:08.554+01:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-02-23T10:01:08.554+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-23T10:01:08.570+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 6.1"
[GIN] 2024/02/23 - 10:01:23 | 200 | 112.1µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/02/23 - 10:01:23 | 200 | 657.3µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/02/23 - 10:01:23 | 200 | 332.1µs | 127.0.0.1 | POST "/api/show"
time=2024-02-23T10:01:24.065+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-23T10:01:24.557+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 6.1"
time=2024-02-23T10:01:24.558+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-23T10:01:24.559+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 6.1"
time=2024-02-23T10:01:24.559+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama3815001611/cuda_v11/libext_server.so
time=2024-02-23T10:01:24.583+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3815001611/cuda_v11/libext_server.so"
time=2024-02-23T10:01:24.583+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: yes
ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce MX150, compute capability 6.1, VMM: yes
clip_model_load: model name: openai/clip-vit-large-patch14-336
clip_model_load: description: image encoder for LLaVA
clip_model_load: GGUF version: 3
clip_model_load: alignment: 32
clip_model_load: n_tensors: 377
clip_model_load: n_kv: 19
clip_model_load: ftype: f16
clip_model_load: loaded meta data with 19 key-value pairs and 377 tensors from /home/luca/.ollama/models/blobs/sha256:72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539
clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
clip_model_load: - kv 0: general.architecture str = clip
clip_model_load: - kv 1: clip.has_text_encoder bool = false
clip_model_load: - kv 2: clip.has_vision_encoder bool = true
clip_model_load: - kv 3: clip.has_llava_projector bool = true
clip_model_load: - kv 4: general.file_type u32 = 1
clip_model_load: - kv 5: general.name str = openai/clip-vit-large-patch14-336
clip_model_load: - kv 6: general.description str = image encoder for LLaVA
clip_model_load: - kv 7: clip.projector_type str = mlp
clip_model_load: - kv 8: clip.vision.image_size u32 = 336
clip_model_load: - kv 9: clip.vision.patch_size u32 = 14
clip_model_load: - kv 10: clip.vision.embedding_length u32 = 1024
clip_model_load: - kv 11: clip.vision.feed_forward_length u32 = 4096
clip_model_load: - kv 12: clip.vision.projection_dim u32 = 768
clip_model_load: - kv 13: clip.vision.attention.head_count u32 = 16
clip_model_load: - kv 14: clip.vision.attention.layer_norm_epsilon f32 = 0.000010
clip_model_load: - kv 15: clip.vision.block_count u32 = 23
clip_model_load: - kv 16: clip.vision.image_mean arr[f32,3] = [0.481455, 0.457828, 0.408211]
clip_model_load: - kv 17: clip.vision.image_std arr[f32,3] = [0.268630, 0.261303, 0.275777]
clip_model_load: - kv 18: clip.use_gelu bool = false
clip_model_load: - type f32: 235 tensors
clip_model_load: - type f16: 142 tensors
clip_model_load: CLIP using CUDA backend
clip_model_load: text_encoder: 0
clip_model_load: vision_encoder: 1
clip_model_load: llava_projector: 1
clip_model_load: model size: 595.49 MB
clip_model_load: metadata size: 0.14 MB
clip_model_load: params backend buffer size = 595.49 MB (377 tensors)
key clip.vision.image_grid_pinpoints not found in file
key clip.vision.mm_patch_merge_type not found in file
key clip.vision.image_crop_resolution not found in file
clip_model_load: compute allocated memory: 32.89 MB
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /home/luca/.ollama/models/blobs/sha256:170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = liuhaotian
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name = liuhaotian
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.22 MiB
llm_load_tensors: offloading 6 repeating layers to GPU
llm_load_tensors: offloaded 6/33 layers to GPU
llm_load_tensors: CPU buffer size = 3917.87 MiB
llm_load_tensors: CUDA0 buffer size = 702.19 MiB
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 208.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 48.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 13.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 164.01 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 168.00 MiB
llama_new_context_with_model: graph splits (measure): 5
time=2024-02-23T10:01:27.991+01:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop"
[GIN] 2024/02/23 - 10:01:27 | 200 | 4.0520283s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/02/23 - 10:06:14 | 200 | 58.9µs | 127.0.0.1 | GET "/api/version"
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2706/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1194
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1194/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1194/comments
|
https://api.github.com/repos/ollama/ollama/issues/1194/events
|
https://github.com/ollama/ollama/issues/1194
| 2,000,634,109
|
I_kwDOJ0Z1Ps53P0D9
| 1,194
|
Add open assistant
|
{
"login": "mak448a",
"id": 94062293,
"node_id": "U_kgDOBZtG1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/94062293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mak448a",
"html_url": "https://github.com/mak448a",
"followers_url": "https://api.github.com/users/mak448a/followers",
"following_url": "https://api.github.com/users/mak448a/following{/other_user}",
"gists_url": "https://api.github.com/users/mak448a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mak448a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mak448a/subscriptions",
"organizations_url": "https://api.github.com/users/mak448a/orgs",
"repos_url": "https://api.github.com/users/mak448a/repos",
"events_url": "https://api.github.com/users/mak448a/events{/privacy}",
"received_events_url": "https://api.github.com/users/mak448a/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2023-11-19T00:20:34
| 2024-03-11T18:16:51
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Could you add Open Assistant? Thank you!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1194/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3471
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3471/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3471/comments
|
https://api.github.com/repos/ollama/ollama/issues/3471/events
|
https://github.com/ollama/ollama/issues/3471
| 2,222,014,071
|
I_kwDOJ0Z1Ps6EcT53
| 3,471
|
Please add Qwen-audio
|
{
"login": "zimuoo",
"id": 29696639,
"node_id": "MDQ6VXNlcjI5Njk2NjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/29696639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zimuoo",
"html_url": "https://github.com/zimuoo",
"followers_url": "https://api.github.com/users/zimuoo/followers",
"following_url": "https://api.github.com/users/zimuoo/following{/other_user}",
"gists_url": "https://api.github.com/users/zimuoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zimuoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zimuoo/subscriptions",
"organizations_url": "https://api.github.com/users/zimuoo/orgs",
"repos_url": "https://api.github.com/users/zimuoo/repos",
"events_url": "https://api.github.com/users/zimuoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zimuoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 5
| 2024-04-03T06:24:34
| 2024-09-02T03:06:57
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3471/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3471/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8159
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8159/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8159/comments
|
https://api.github.com/repos/ollama/ollama/issues/8159/events
|
https://github.com/ollama/ollama/issues/8159
| 2,748,255,686
|
I_kwDOJ0Z1Ps6jzw3G
| 8,159
|
phi4
|
{
"login": "sinxyz",
"id": 32287704,
"node_id": "MDQ6VXNlcjMyMjg3NzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/32287704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sinxyz",
"html_url": "https://github.com/sinxyz",
"followers_url": "https://api.github.com/users/sinxyz/followers",
"following_url": "https://api.github.com/users/sinxyz/following{/other_user}",
"gists_url": "https://api.github.com/users/sinxyz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sinxyz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sinxyz/subscriptions",
"organizations_url": "https://api.github.com/users/sinxyz/orgs",
"repos_url": "https://api.github.com/users/sinxyz/repos",
"events_url": "https://api.github.com/users/sinxyz/events{/privacy}",
"received_events_url": "https://api.github.com/users/sinxyz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-12-18T16:25:36
| 2025-01-14T08:54:16
| 2024-12-19T19:53:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please add phi4.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8159/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8159/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4302
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4302/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4302/comments
|
https://api.github.com/repos/ollama/ollama/issues/4302/events
|
https://github.com/ollama/ollama/pull/4302
| 2,288,543,597
|
PR_kwDOJ0Z1Ps5vCTQ1
| 4,302
|
only forward some env vars
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-09T22:13:01
| 2024-05-09T23:21:06
| 2024-05-09T23:21:05
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4302",
"html_url": "https://github.com/ollama/ollama/pull/4302",
"diff_url": "https://github.com/ollama/ollama/pull/4302.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4302.patch",
"merged_at": "2024-05-09T23:21:05"
}
|
Only forward select env vars. This prevents 1) logging them and 2) the subprocess inheriting irrelevant, possibly sensitive, vars.
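The same idea can be sketched in Python (illustrative only: the actual change is in ollama's Go server code, and the allowlist below is a made-up example, not the real one):

```python
import os
import subprocess  # the filtered env would be passed to the spawned subprocess

# Example allowlist -- hypothetical names, not the actual list used by ollama.
ALLOWED = ("PATH", "LD_LIBRARY_PATH", "CUDA_VISIBLE_DEVICES", "OLLAMA_DEBUG")

def filtered_env(parent_env, allowlist=ALLOWED):
    """Keep only allowlisted variables, dropping everything else the parent holds."""
    return {k: v for k, v in parent_env.items() if k in allowlist}

if __name__ == "__main__":
    env = filtered_env(dict(os.environ))
    # subprocess.run(["./llm-server"], env=env)  # child never sees unrelated secrets
    print(sorted(env.keys()))
```

This way an unrelated secret such as `AWS_SECRET_ACCESS_KEY` in the parent environment is never visible to (or logged by) the child process.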
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4302/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3591
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3591/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3591/comments
|
https://api.github.com/repos/ollama/ollama/issues/3591/events
|
https://github.com/ollama/ollama/pull/3591
| 2,237,320,424
|
PR_kwDOJ0Z1Ps5sVqs_
| 3,591
|
examples: Update langchain-python-simple
|
{
"login": "erikos",
"id": 3714785,
"node_id": "MDQ6VXNlcjM3MTQ3ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3714785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erikos",
"html_url": "https://github.com/erikos",
"followers_url": "https://api.github.com/users/erikos/followers",
"following_url": "https://api.github.com/users/erikos/following{/other_user}",
"gists_url": "https://api.github.com/users/erikos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erikos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erikos/subscriptions",
"organizations_url": "https://api.github.com/users/erikos/orgs",
"repos_url": "https://api.github.com/users/erikos/repos",
"events_url": "https://api.github.com/users/erikos/events{/privacy}",
"received_events_url": "https://api.github.com/users/erikos/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-11T09:39:46
| 2024-11-25T00:06:22
| 2024-11-25T00:06:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3591",
"html_url": "https://github.com/ollama/ollama/pull/3591",
"diff_url": "https://github.com/ollama/ollama/pull/3591.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3591.patch",
"merged_at": "2024-11-25T00:06:22"
}
|
* remove deprecated predict command, use invoke instead
* improve input handling
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3591/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7032
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7032/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7032/comments
|
https://api.github.com/repos/ollama/ollama/issues/7032/events
|
https://github.com/ollama/ollama/issues/7032
| 2,554,811,791
|
I_kwDOJ0Z1Ps6YR1WP
| 7,032
|
Persistent context
|
{
"login": "tomstdenis",
"id": 11875109,
"node_id": "MDQ6VXNlcjExODc1MTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/11875109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomstdenis",
"html_url": "https://github.com/tomstdenis",
"followers_url": "https://api.github.com/users/tomstdenis/followers",
"following_url": "https://api.github.com/users/tomstdenis/following{/other_user}",
"gists_url": "https://api.github.com/users/tomstdenis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomstdenis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomstdenis/subscriptions",
"organizations_url": "https://api.github.com/users/tomstdenis/orgs",
"repos_url": "https://api.github.com/users/tomstdenis/repos",
"events_url": "https://api.github.com/users/tomstdenis/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomstdenis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-29T08:37:38
| 2024-10-01T23:03:35
| 2024-10-01T23:03:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When using an LLM for, say, a lesson, it'd be nice to prime it with a persistent initial "basic instruction" that never falls out of the window, e.g.
"You're a German language instructor, I'm an Anglophone, help me learn German."
With most LLM drivers (ChatGPT/Ollama/etc.) these instructions will fall out of the window, and the model will forget what it's doing after about 5-10 minutes of use.
It'd be nice to lock a message into the context window, and it should be fairly simple to implement, I'd think.
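The requested behavior can be approximated client-side today by always re-sending a pinned system message at the front of the /api/chat message list while trimming only the conversation history. A minimal sketch (the helper name is mine, not an ollama API):

```python
def build_messages(system_prompt, history, max_turns=10):
    """Pin the system prompt in slot 0; keep only the last max_turns history messages."""
    pinned = {"role": "system", "content": system_prompt}
    return [pinned] + history[-max_turns:]

messages = build_messages(
    "You're a German language instructor, I'm an Anglophone, help me learn German.",
    [{"role": "user", "content": f"turn {i}"} for i in range(50)],
)
# The instruction survives no matter how long the chat gets, because it is
# re-attached on every request instead of living inside the sliding window.
```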
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7032/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8513
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8513/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8513/comments
|
https://api.github.com/repos/ollama/ollama/issues/8513/events
|
https://github.com/ollama/ollama/issues/8513
| 2,801,188,323
|
I_kwDOJ0Z1Ps6m9r3j
| 8,513
|
Support for Multiple Images in /chat Endpoint
|
{
"login": "pmedina-42",
"id": 68591323,
"node_id": "MDQ6VXNlcjY4NTkxMzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/68591323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pmedina-42",
"html_url": "https://github.com/pmedina-42",
"followers_url": "https://api.github.com/users/pmedina-42/followers",
"following_url": "https://api.github.com/users/pmedina-42/following{/other_user}",
"gists_url": "https://api.github.com/users/pmedina-42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pmedina-42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pmedina-42/subscriptions",
"organizations_url": "https://api.github.com/users/pmedina-42/orgs",
"repos_url": "https://api.github.com/users/pmedina-42/repos",
"events_url": "https://api.github.com/users/pmedina-42/events{/privacy}",
"received_events_url": "https://api.github.com/users/pmedina-42/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-21T09:16:37
| 2025-01-21T17:34:35
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, the /chat endpoint includes the images field, but it only supports a single image. While this is functional, it introduces an additional layer of complexity when performing RAG with images embedded in base64.
For instance, if the content retriever returns multiple high-scoring embeddings that reference different images, we need to manually reconstruct the full images (potentially missing lower-scoring records if the embedding size isn't big enough to store a whole image in one record) and then make a separate /chat call for each retrieved image. Finally, all the responses must be summarized into one.
This manual process could be significantly simplified if the images field allowed for passing multiple images in a single request.
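For reference, a minimal sketch of the kind of request this would enable, assuming the message shape of Ollama's /api/chat endpoint with `images` as a list of base64 strings (the model name and image bytes below are placeholders):

```python
import base64
import json

# Build a /api/chat request body carrying several base64-encoded images
# in a single user message. The image bytes here are dummies standing in
# for the reconstructed images retrieved from the vector store.
images = [b"\x89PNG...image-1", b"\x89PNG...image-2"]

payload = {
    "model": "llava",
    "messages": [
        {
            "role": "user",
            "content": "What do these images have in common?",
            "images": [base64.b64encode(img).decode("ascii") for img in images],
        }
    ],
}

# Serialize once and POST it to the server; round-trip here to show the
# field survives encoding intact.
body = json.dumps(payload)
decoded = json.loads(body)
print(len(decoded["messages"][0]["images"]))  # → 2
```

With support for multiple entries in `images`, the per-image /chat calls and the final summarization step described above would collapse into this single request.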
Is there any plan in the near future to support multiple images in the images field? This enhancement would greatly streamline workflows and reduce the overhead in scenarios like the one described above.
Additionally, I’m just getting started and don’t have much experience yet, so it’s possible that I’m overlooking something that could make this process easier. If there’s a better approach or workaround I might have missed, I’d be grateful for any guidance.
Thank you in advance!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8513/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4885
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4885/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4885/comments
|
https://api.github.com/repos/ollama/ollama/issues/4885/events
|
https://github.com/ollama/ollama/issues/4885
| 2,339,327,186
|
I_kwDOJ0Z1Ps6Lb0zS
| 4,885
|
Support Dragonfly
|
{
"login": "kylemclaren",
"id": 3727384,
"node_id": "MDQ6VXNlcjM3MjczODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3727384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylemclaren",
"html_url": "https://github.com/kylemclaren",
"followers_url": "https://api.github.com/users/kylemclaren/followers",
"following_url": "https://api.github.com/users/kylemclaren/following{/other_user}",
"gists_url": "https://api.github.com/users/kylemclaren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylemclaren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylemclaren/subscriptions",
"organizations_url": "https://api.github.com/users/kylemclaren/orgs",
"repos_url": "https://api.github.com/users/kylemclaren/repos",
"events_url": "https://api.github.com/users/kylemclaren/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylemclaren/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-06T23:54:59
| 2024-06-06T23:54:59
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Together.ai recently announced the Dragonfly vision-language model based on Llama3: https://huggingface.co/togethercomputer/Llama-3-8B-Dragonfly-v1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4885/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4885/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2880
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2880/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2880/comments
|
https://api.github.com/repos/ollama/ollama/issues/2880/events
|
https://github.com/ollama/ollama/pull/2880
| 2,164,838,723
|
PR_kwDOJ0Z1Ps5ofDyN
| 2,880
|
update go module path
|
{
"login": "icholy",
"id": 943597,
"node_id": "MDQ6VXNlcjk0MzU5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/943597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/icholy",
"html_url": "https://github.com/icholy",
"followers_url": "https://api.github.com/users/icholy/followers",
"following_url": "https://api.github.com/users/icholy/following{/other_user}",
"gists_url": "https://api.github.com/users/icholy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/icholy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/icholy/subscriptions",
"organizations_url": "https://api.github.com/users/icholy/orgs",
"repos_url": "https://api.github.com/users/icholy/repos",
"events_url": "https://api.github.com/users/icholy/events{/privacy}",
"received_events_url": "https://api.github.com/users/icholy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-02T14:41:33
| 2024-03-31T17:16:06
| 2024-03-31T17:16:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2880",
"html_url": "https://github.com/ollama/ollama/pull/2880",
"diff_url": "https://github.com/ollama/ollama/pull/2880.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2880.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2880/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6608
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6608/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6608/comments
|
https://api.github.com/repos/ollama/ollama/issues/6608/events
|
https://github.com/ollama/ollama/pull/6608
| 2,503,378,382
|
PR_kwDOJ0Z1Ps56SnAa
| 6,608
|
Updated Ollama4j link
|
{
"login": "amithkoujalgi",
"id": 1876165,
"node_id": "MDQ6VXNlcjE4NzYxNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1876165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amithkoujalgi",
"html_url": "https://github.com/amithkoujalgi",
"followers_url": "https://api.github.com/users/amithkoujalgi/followers",
"following_url": "https://api.github.com/users/amithkoujalgi/following{/other_user}",
"gists_url": "https://api.github.com/users/amithkoujalgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amithkoujalgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amithkoujalgi/subscriptions",
"organizations_url": "https://api.github.com/users/amithkoujalgi/orgs",
"repos_url": "https://api.github.com/users/amithkoujalgi/repos",
"events_url": "https://api.github.com/users/amithkoujalgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/amithkoujalgi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-09-03T17:12:20
| 2024-09-03T20:08:50
| 2024-09-03T20:08:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6608",
"html_url": "https://github.com/ollama/ollama/pull/6608",
"diff_url": "https://github.com/ollama/ollama/pull/6608.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6608.patch",
"merged_at": "2024-09-03T20:08:50"
}
|
Updated the Ollama4j link and added a link to the Ollama4j Web UI tool.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6608/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2563
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2563/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2563/comments
|
https://api.github.com/repos/ollama/ollama/issues/2563/events
|
https://github.com/ollama/ollama/pull/2563
| 2,140,231,004
|
PR_kwDOJ0Z1Ps5nLM2M
| 2,563
|
Update Web UI link to new project name
|
{
"login": "justinh-rahb",
"id": 52832301,
"node_id": "MDQ6VXNlcjUyODMyMzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/52832301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justinh-rahb",
"html_url": "https://github.com/justinh-rahb",
"followers_url": "https://api.github.com/users/justinh-rahb/followers",
"following_url": "https://api.github.com/users/justinh-rahb/following{/other_user}",
"gists_url": "https://api.github.com/users/justinh-rahb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justinh-rahb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justinh-rahb/subscriptions",
"organizations_url": "https://api.github.com/users/justinh-rahb/orgs",
"repos_url": "https://api.github.com/users/justinh-rahb/repos",
"events_url": "https://api.github.com/users/justinh-rahb/events{/privacy}",
"received_events_url": "https://api.github.com/users/justinh-rahb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-17T16:07:29
| 2024-02-18T05:02:48
| 2024-02-18T04:05:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2563",
"html_url": "https://github.com/ollama/ollama/pull/2563",
"diff_url": "https://github.com/ollama/ollama/pull/2563.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2563.patch",
"merged_at": "2024-02-18T04:05:20"
}
|
Ollama WebUI is now known as Open WebUI:
https://openwebui.com
https://github.com/open-webui/open-webui
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2563/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4698
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4698/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4698/comments
|
https://api.github.com/repos/ollama/ollama/issues/4698/events
|
https://github.com/ollama/ollama/issues/4698
| 2,322,810,880
|
I_kwDOJ0Z1Ps6Kc0gA
| 4,698
|
ValueError: Error raised by inference API HTTP code: 500, {"error":"failed to generate embedding"}
|
{
"login": "uzumakinaruto19",
"id": 99479748,
"node_id": "U_kgDOBe3wxA",
"avatar_url": "https://avatars.githubusercontent.com/u/99479748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uzumakinaruto19",
"html_url": "https://github.com/uzumakinaruto19",
"followers_url": "https://api.github.com/users/uzumakinaruto19/followers",
"following_url": "https://api.github.com/users/uzumakinaruto19/following{/other_user}",
"gists_url": "https://api.github.com/users/uzumakinaruto19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uzumakinaruto19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uzumakinaruto19/subscriptions",
"organizations_url": "https://api.github.com/users/uzumakinaruto19/orgs",
"repos_url": "https://api.github.com/users/uzumakinaruto19/repos",
"events_url": "https://api.github.com/users/uzumakinaruto19/events{/privacy}",
"received_events_url": "https://api.github.com/users/uzumakinaruto19/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-05-29T09:20:09
| 2024-11-10T13:00:51
| 2024-09-13T00:14:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
ValueError: Error raised by inference API HTTP code: 500, {"error":"failed to generate embedding"}
I'm still getting this with the latest Ollama Docker image.
> Hi folks this should be fixed now - please let me know if that's not the case
@jmorganca
It only works with the ollama/ollama:0.1.32 image; it doesn't work with 0.1.28, 0.1.37, or latest (as far as I checked).
Occasionally it works, but most of the time it fails.
@mchiang0610, do you have any idea how to fix this?
_Originally posted by @uzumakinaruto19 in https://github.com/ollama/ollama/issues/1577#issuecomment-2126700878_
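For context, the failing call can be sketched as below, using only the stdlib; the endpoint and field names follow the /api/embeddings request shape ("model", "prompt"), and the host/model are placeholder assumptions. The request is built but not sent here:

```python
import json
from urllib import request

# Construct the embeddings request that produces the HTTP 500 described
# above. Sending it would be: request.urlopen(req) — on the affected
# versions that raises an HTTPError whose body is
# {"error":"failed to generate embedding"}.
payload = json.dumps(
    {"model": "nomic-embed-text", "prompt": "hello world"}
).encode("utf-8")

req = request.Request(
    "http://localhost:11434/api/embeddings",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.get_method(), req.full_url)
```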
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4698/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4691
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4691/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4691/comments
|
https://api.github.com/repos/ollama/ollama/issues/4691/events
|
https://github.com/ollama/ollama/issues/4691
| 2,322,041,964
|
I_kwDOJ0Z1Ps6KZ4xs
| 4,691
|
linux installation
|
{
"login": "wi-wi",
"id": 53225089,
"node_id": "MDQ6VXNlcjUzMjI1MDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/53225089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wi-wi",
"html_url": "https://github.com/wi-wi",
"followers_url": "https://api.github.com/users/wi-wi/followers",
"following_url": "https://api.github.com/users/wi-wi/following{/other_user}",
"gists_url": "https://api.github.com/users/wi-wi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wi-wi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wi-wi/subscriptions",
"organizations_url": "https://api.github.com/users/wi-wi/orgs",
"repos_url": "https://api.github.com/users/wi-wi/repos",
"events_url": "https://api.github.com/users/wi-wi/events{/privacy}",
"received_events_url": "https://api.github.com/users/wi-wi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-05-28T22:54:05
| 2024-08-09T23:23:00
| 2024-08-09T23:23:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
curl -fsSL https://ollama.com/install.sh | sh
bash: ./shell.sh: No such file or directory
curl: (23) Failed writing body (1349 != 1378)
```
========
The curl command is from Ollama's download page.
I worked around it by downloading install.sh, making it executable, and running it.
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4691/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4691/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/265
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/265/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/265/comments
|
https://api.github.com/repos/ollama/ollama/issues/265/events
|
https://github.com/ollama/ollama/pull/265
| 1,834,068,042
|
PR_kwDOJ0Z1Ps5XDrCg
| 265
|
Update README.md
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-03T00:21:46
| 2023-08-03T02:38:33
| 2023-08-03T02:38:32
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/265",
"html_url": "https://github.com/ollama/ollama/pull/265",
"diff_url": "https://github.com/ollama/ollama/pull/265.diff",
"patch_url": "https://github.com/ollama/ollama/pull/265.patch",
"merged_at": "2023-08-03T02:38:32"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/265/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4045
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4045/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4045/comments
|
https://api.github.com/repos/ollama/ollama/issues/4045/events
|
https://github.com/ollama/ollama/pull/4045
| 2,270,995,055
|
PR_kwDOJ0Z1Ps5uHnTK
| 4,045
|
docs: add ollama-operator in example
|
{
"login": "panpan0000",
"id": 14049268,
"node_id": "MDQ6VXNlcjE0MDQ5MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14049268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/panpan0000",
"html_url": "https://github.com/panpan0000",
"followers_url": "https://api.github.com/users/panpan0000/followers",
"following_url": "https://api.github.com/users/panpan0000/following{/other_user}",
"gists_url": "https://api.github.com/users/panpan0000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/panpan0000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/panpan0000/subscriptions",
"organizations_url": "https://api.github.com/users/panpan0000/orgs",
"repos_url": "https://api.github.com/users/panpan0000/repos",
"events_url": "https://api.github.com/users/panpan0000/events{/privacy}",
"received_events_url": "https://api.github.com/users/panpan0000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-04-30T09:39:49
| 2024-11-21T09:36:57
| 2024-11-21T09:36:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4045",
"html_url": "https://github.com/ollama/ollama/pull/4045",
"diff_url": "https://github.com/ollama/ollama/pull/4045.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4045.patch",
"merged_at": null
}
| null |
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4045/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4045/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7868
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7868/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7868/comments
|
https://api.github.com/repos/ollama/ollama/issues/7868/events
|
https://github.com/ollama/ollama/pull/7868
| 2,700,161,640
|
PR_kwDOJ0Z1Ps6DZXih
| 7,868
|
server: automatically open browser to connect ollama key
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-11-27T23:21:43
| 2024-12-19T03:42:36
| 2024-12-19T01:41:57
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7868",
"html_url": "https://github.com/ollama/ollama/pull/7868",
"diff_url": "https://github.com/ollama/ollama/pull/7868.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7868.patch",
"merged_at": null
}
|
When an Ollama key is not registered with any account on ollama.com, this is not obvious to the user. In the current CLI, an error message that the user is not authorized is displayed. This change brings back previous behavior to show the user their key and where they should add it. It protects against adding unexpected keys by checking that the key is available locally.
This change also opens the browser to prompt the user to connect their key, when possible.
A follow-up change should add structured errors from the API. This change just relies on a known error message.
Example output for an unregistered key on macOS:
```
$ ./ollama push brxce/test
retrieving manifest
pushing 74701a8c35f6... 100% ▕██████▏ 1.3 GB
pushing 966de95ca8a6... 100% ▕██████▏ 1.4 KB
pushing fcc5a6bec9da... 100% ▕██████▏ 7.7 KB
pushing a70ff7e570d9... 100% ▕█████▏ 6.0 KB
pushing 4f659a1e86d7... 100% ▕████▏ 485 B
pushing manifest
Opening browser to connect your device...
```
Example output when browser cannot be opened:
```
$ ./ollama push brxce/llava
retrieving manifest
Error: unauthorized: unknown ollama key "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPB/puRhawCPJJ+rOUQJqW2O6QVuIAKovk7wjTRrhXlF"
Your ollama key is:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPB/puRhawCPJJ+rOUQJqW2O6QVuIAKovk7wjTRrhXlF
Add your key at:
https://ollama.com/settings/keys
```
## How to run this branch
1. Build the server and CLI (`make` and `go build`)
2. Start the development server
3. Try to push a model to ollama.com with a key that has not been registered
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7868/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/152
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/152/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/152/comments
|
https://api.github.com/repos/ollama/ollama/issues/152/events
|
https://github.com/ollama/ollama/pull/152
| 1,814,879,032
|
PR_kwDOJ0Z1Ps5WDMBF
| 152
|
add ls alias
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-20T22:27:09
| 2023-07-20T22:28:28
| 2023-07-20T22:28:28
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/152",
"html_url": "https://github.com/ollama/ollama/pull/152",
"diff_url": "https://github.com/ollama/ollama/pull/152.diff",
"patch_url": "https://github.com/ollama/ollama/pull/152.patch",
"merged_at": "2023-07-20T22:28:28"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/152/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2878
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2878/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2878/comments
|
https://api.github.com/repos/ollama/ollama/issues/2878/events
|
https://github.com/ollama/ollama/pull/2878
| 2,164,826,251
|
PR_kwDOJ0Z1Ps5ofBUF
| 2,878
|
api: start adding documentation to package api
|
{
"login": "eliben",
"id": 1130906,
"node_id": "MDQ6VXNlcjExMzA5MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1130906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliben",
"html_url": "https://github.com/eliben",
"followers_url": "https://api.github.com/users/eliben/followers",
"following_url": "https://api.github.com/users/eliben/following{/other_user}",
"gists_url": "https://api.github.com/users/eliben/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliben/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliben/subscriptions",
"organizations_url": "https://api.github.com/users/eliben/orgs",
"repos_url": "https://api.github.com/users/eliben/repos",
"events_url": "https://api.github.com/users/eliben/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliben/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-02T14:08:51
| 2024-04-10T17:31:55
| 2024-04-10T17:31:55
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2878",
"html_url": "https://github.com/ollama/ollama/pull/2878",
"diff_url": "https://github.com/ollama/ollama/pull/2878.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2878.patch",
"merged_at": "2024-04-10T17:31:55"
}
|
Updates #2840
This is an initial PR just to double-check that I'm heading in the right direction. If it looks good, I can update it (or send separate PRs) to fill in the rest of the documentation for the `api` package.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2878/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2878/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7600
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7600/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7600/comments
|
https://api.github.com/repos/ollama/ollama/issues/7600/events
|
https://github.com/ollama/ollama/issues/7600
| 2,647,312,984
|
I_kwDOJ0Z1Ps6dyspY
| 7,600
|
`/save` overwrites everything including system and template and previous messages
|
{
"login": "belfie13",
"id": 39270867,
"node_id": "MDQ6VXNlcjM5MjcwODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/39270867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/belfie13",
"html_url": "https://github.com/belfie13",
"followers_url": "https://api.github.com/users/belfie13/followers",
"following_url": "https://api.github.com/users/belfie13/following{/other_user}",
"gists_url": "https://api.github.com/users/belfie13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/belfie13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/belfie13/subscriptions",
"organizations_url": "https://api.github.com/users/belfie13/orgs",
"repos_url": "https://api.github.com/users/belfie13/repos",
"events_url": "https://api.github.com/users/belfie13/events{/privacy}",
"received_events_url": "https://api.github.com/users/belfie13/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2024-11-10T14:44:12
| 2024-11-10T22:21:48
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The `/save` command overwrites everything and keeps only the current context: any previously saved data is lost, including the system prompt and the template.
To reproduce:
```shell
ollama create model -f customtemplate.modelfile
ollama run model
>>> /set system you are an assistant
>>> how are you?
>>> /save 02
>>> what's the time?
>>> /save 02
>>> /load 02
>>> /show template
>>> /show system
>>> what questions have i asked you so far?
You've asked me three questions so far:
1. How are you?
2. What's the time?
>>> /bye
ollama run 02
/show modelfile
>>> what questions have i asked you so far?
You've asked me one question so far:
1. What questions have I asked you so far?
```
### OS
macOS
### GPU
AMD
### CPU
Intel
### Ollama version
0.4.0
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7600/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/306
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/306/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/306/comments
|
https://api.github.com/repos/ollama/ollama/issues/306/events
|
https://github.com/ollama/ollama/pull/306
| 1,840,328,464
|
PR_kwDOJ0Z1Ps5XYby9
| 306
|
automatically set num_keep if num_keep < 0
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-07T23:17:42
| 2023-08-08T16:25:36
| 2023-08-08T16:25:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/306",
"html_url": "https://github.com/ollama/ollama/pull/306",
"diff_url": "https://github.com/ollama/ollama/pull/306.diff",
"patch_url": "https://github.com/ollama/ollama/pull/306.patch",
"merged_at": "2023-08-08T16:25:35"
}
|
`num_keep` defines how many tokens to keep in the context when truncating inputs. If left at its default value of -1, the server calculates `num_keep` to cover the length of the system instructions.
resolves #299
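The default described above could be sketched as follows, assuming a hypothetical `resolveNumKeep` helper and that "the length of the system instructions" means the system prompt's token count (both are assumptions for illustration, not the actual server code):

```go
package main

import "fmt"

// resolveNumKeep returns the effective num_keep: a non-negative value is
// used as-is, while -1 (the default) falls back to the number of tokens
// in the system prompt, so truncation never drops the system instructions.
func resolveNumKeep(numKeep, systemTokens int) int {
	if numKeep < 0 {
		return systemTokens
	}
	return numKeep
}

func main() {
	fmt.Println(resolveNumKeep(-1, 42)) // default: keep the system prompt
	fmt.Println(resolveNumKeep(10, 42)) // explicit value wins
}
```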
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/306/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5637
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5637/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5637/comments
|
https://api.github.com/repos/ollama/ollama/issues/5637/events
|
https://github.com/ollama/ollama/pull/5637
| 2,404,030,923
|
PR_kwDOJ0Z1Ps51JEVc
| 5,637
|
llm: avoid loading model if system memory is too small
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-11T20:17:35
| 2024-07-11T23:42:58
| 2024-07-11T23:42:57
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5637",
"html_url": "https://github.com/ollama/ollama/pull/5637",
"diff_url": "https://github.com/ollama/ollama/pull/5637.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5637.patch",
"merged_at": "2024-07-11T23:42:57"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5637/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5132
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5132/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5132/comments
|
https://api.github.com/repos/ollama/ollama/issues/5132/events
|
https://github.com/ollama/ollama/issues/5132
| 2,361,175,255
|
I_kwDOJ0Z1Ps6MvKzX
| 5,132
|
CANNOT DOWNLOAD MODELS
|
{
"login": "Udacv",
"id": 126667614,
"node_id": "U_kgDOB4zLXg",
"avatar_url": "https://avatars.githubusercontent.com/u/126667614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Udacv",
"html_url": "https://github.com/Udacv",
"followers_url": "https://api.github.com/users/Udacv/followers",
"following_url": "https://api.github.com/users/Udacv/following{/other_user}",
"gists_url": "https://api.github.com/users/Udacv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Udacv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Udacv/subscriptions",
"organizations_url": "https://api.github.com/users/Udacv/orgs",
"repos_url": "https://api.github.com/users/Udacv/repos",
"events_url": "https://api.github.com/users/Udacv/events{/privacy}",
"received_events_url": "https://api.github.com/users/Udacv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-06-19T03:16:38
| 2024-06-19T06:07:21
| 2024-06-19T06:07:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Recently, when I use `ollama run` to download models, every download fails with the following error.

I'm from China; I cannot download with either my local Internet connection or a VPN.
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.44
|
{
"login": "Udacv",
"id": 126667614,
"node_id": "U_kgDOB4zLXg",
"avatar_url": "https://avatars.githubusercontent.com/u/126667614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Udacv",
"html_url": "https://github.com/Udacv",
"followers_url": "https://api.github.com/users/Udacv/followers",
"following_url": "https://api.github.com/users/Udacv/following{/other_user}",
"gists_url": "https://api.github.com/users/Udacv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Udacv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Udacv/subscriptions",
"organizations_url": "https://api.github.com/users/Udacv/orgs",
"repos_url": "https://api.github.com/users/Udacv/repos",
"events_url": "https://api.github.com/users/Udacv/events{/privacy}",
"received_events_url": "https://api.github.com/users/Udacv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5132/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3527
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3527/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3527/comments
|
https://api.github.com/repos/ollama/ollama/issues/3527/events
|
https://github.com/ollama/ollama/issues/3527
| 2,229,893,144
|
I_kwDOJ0Z1Ps6E6XgY
| 3,527
|
Ollama conflict with amdgpu driver on Debian
|
{
"login": "hpsaturn",
"id": 423856,
"node_id": "MDQ6VXNlcjQyMzg1Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/423856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hpsaturn",
"html_url": "https://github.com/hpsaturn",
"followers_url": "https://api.github.com/users/hpsaturn/followers",
"following_url": "https://api.github.com/users/hpsaturn/following{/other_user}",
"gists_url": "https://api.github.com/users/hpsaturn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hpsaturn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hpsaturn/subscriptions",
"organizations_url": "https://api.github.com/users/hpsaturn/orgs",
"repos_url": "https://api.github.com/users/hpsaturn/repos",
"events_url": "https://api.github.com/users/hpsaturn/events{/privacy}",
"received_events_url": "https://api.github.com/users/hpsaturn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-04-07T18:42:54
| 2024-05-21T18:25:32
| 2024-05-21T18:24:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I noticed that my Debian machine fails after the first suspend: it can't suspend again because the amdgpu driver hits a kernel exception. Investigating this, I found that the Ollama service can't stop and that it triggers this behavior. My current workaround is to disable the systemd ollama service at boot; with that, my Debian machine is able to resume after each suspend.
### What did you expect to see?
No conflicts with my amdgpu driver, and the ability to stop the ollama service.
### Steps to reproduce
- Install ollama via curl (the official installation)
- Suspend your machine
- Wake up your machine
- Suspend again (the machine dies, and you can't wake it up again)
### Are there any recent changes that introduced the issue?
I'm new to Ollama, but the latest version (0.1.30), installed via curl, reproduces the problem.
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.30
### GPU
AMD
### GPU info
kernel exception details after first wakeup after suspend:
```bash
Apr 7 20:01:45 minisf kernel: [ 75.265979] ACPI: PM: Waking up from system sleep state S3
Apr 7 20:01:45 minisf kernel: [ 75.267852] pci 0000:00:00.2: can't derive routing for PCI INT A
Apr 7 20:01:45 minisf kernel: [ 75.267854] pci 0000:00:00.2: PCI INT A: no GSI
Apr 7 20:01:45 minisf kernel: [ 75.267886] xhci_hcd 0000:01:00.0: xHC error in resume, USBSTS 0x401, Reinit
Apr 7 20:01:45 minisf kernel: [ 75.267888] usb usb1: root hub lost power or was reset
Apr 7 20:01:45 minisf kernel: [ 75.267889] usb usb2: root hub lost power or was reset
Apr 7 20:01:45 minisf kernel: [ 75.268081] [drm] PCIE GART of 1024M enabled.
Apr 7 20:01:45 minisf kernel: [ 75.268085] [drm] PTB located at 0x000000F41FC00000
Apr 7 20:01:45 minisf kernel: [ 75.268099] [drm] PSP is resuming...
Apr 7 20:01:45 minisf kernel: [ 75.287957] [drm] reserve 0x400000 from 0xf41f800000 for PSP TMR
Apr 7 20:01:45 minisf kernel: [ 75.339908] nvme nvme1: 8/0/0 default/read/poll queues
Apr 7 20:01:45 minisf kernel: [ 75.375226] amdgpu 0000:07:00.0: amdgpu: RAS: optional ras ta ucode is not available
Apr 7 20:01:45 minisf kernel: [ 75.383457] amdgpu 0000:07:00.0: amdgpu: RAP: optional rap ta ucode is not available
Apr 7 20:01:45 minisf kernel: [ 75.383458] amdgpu 0000:07:00.0: amdgpu: SECUREDISPLAY: securedisplay ta ucode is not available
Apr 7 20:01:45 minisf kernel: [ 75.383460] amdgpu 0000:07:00.0: amdgpu: SMU is resuming...
Apr 7 20:01:45 minisf kernel: [ 75.383502] amdgpu 0000:07:00.0: amdgpu: dpm has been disabled
Apr 7 20:01:45 minisf kernel: [ 75.385720] amdgpu 0000:07:00.0: amdgpu: SMU is resumed successfully!
Apr 7 20:01:45 minisf kernel: [ 75.386290] [drm] DMUB hardware initialized: version=0x0101000A
Apr 7 20:01:45 minisf kernel: [ 75.414203] [drm] Unknown EDID CEA parser results
Apr 7 20:01:45 minisf kernel: [ 75.441521] [drm] Unknown EDID CEA parser results
Apr 7 20:01:45 minisf kernel: [ 75.506752] [drm] kiq ring mec 2 pipe 1 q 0
Apr 7 20:01:45 minisf kernel: [ 75.510655] [drm] VCN decode and encode initialized successfully(under DPG Mode).
Apr 7 20:01:45 minisf kernel: [ 75.510892] [drm] JPEG decode initialized successfully.
Apr 7 20:01:45 minisf kernel: [ 75.510898] amdgpu 0000:07:00.0: amdgpu: ring gfx uses VM inv eng 0 on hub 0
Apr 7 20:01:45 minisf kernel: [ 75.510899] amdgpu 0000:07:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 1 on hub 0
Apr 7 20:01:45 minisf kernel: [ 75.510900] amdgpu 0000:07:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 4 on hub 0
Apr 7 20:01:45 minisf kernel: [ 75.510900] amdgpu 0000:07:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 5 on hub 0
Apr 7 20:01:45 minisf kernel: [ 75.510901] amdgpu 0000:07:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 6 on hub 0
Apr 7 20:01:45 minisf kernel: [ 75.510901] amdgpu 0000:07:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 7 on hub 0
Apr 7 20:01:45 minisf kernel: [ 75.510901] amdgpu 0000:07:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 8 on hub 0
Apr 7 20:01:45 minisf kernel: [ 75.510902] amdgpu 0000:07:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 9 on hub 0
Apr 7 20:01:45 minisf kernel: [ 75.510902] amdgpu 0000:07:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 10 on hub 0
Apr 7 20:01:45 minisf kernel: [ 75.510903] amdgpu 0000:07:00.0: amdgpu: ring kiq_2.1.0 uses VM inv eng 11 on hub 0
Apr 7 20:01:45 minisf kernel: [ 75.510903] amdgpu 0000:07:00.0: amdgpu: ring sdma0 uses VM inv eng 0 on hub 1
Apr 7 20:01:45 minisf kernel: [ 75.510904] amdgpu 0000:07:00.0: amdgpu: ring vcn_dec uses VM inv eng 1 on hub 1
Apr 7 20:01:45 minisf kernel: [ 75.510904] amdgpu 0000:07:00.0: amdgpu: ring vcn_enc0 uses VM inv eng 4 on hub 1
Apr 7 20:01:45 minisf kernel: [ 75.510905] amdgpu 0000:07:00.0: amdgpu: ring vcn_enc1 uses VM inv eng 5 on hub 1
Apr 7 20:01:45 minisf kernel: [ 75.510905] amdgpu 0000:07:00.0: amdgpu: ring jpeg_dec uses VM inv eng 6 on hub 1
Apr 7 20:01:45 minisf kernel: [ 75.512851] PGD 0 P4D 0
Apr 7 20:01:45 minisf kernel: [ 75.512853] Oops: 0000 [#1] PREEMPT SMP NOPTI
Apr 7 20:01:45 minisf kernel: [ 75.512854] CPU: 4 PID: 102 Comm: kworker/u64:8 Not tainted 6.1.0-0.deb11.17-amd64 #1 Debian 6.1.69-1~bpo11+1
Apr 7 20:01:45 minisf kernel: [ 75.512857] Hardware name: BESSTAR TECH LIMITED B550/B550, BIOS 5.17 03/31/2022
Apr 7 20:01:45 minisf kernel: [ 75.512858] Workqueue: kfd_restore_wq restore_process_worker [amdgpu]
Apr 7 20:01:45 minisf kernel: [ 75.512993] RIP: 0010:amdgpu_amdkfd_gpuvm_restore_process_bos+0x75/0x680 [amdgpu]
Apr 7 20:01:45 minisf kernel: [ 75.513109] Code: 00 00 48 89 84 24 e0 00 00 00 48 89 84 24 e8 00 00 00 48 8d 84 24 f0 00 00 00 48 89 84 24 f0 00 00 00 48 89 84 24 f8 00 00 00 <8b> 47 60 48 8d 3c c0 48 c1 e7 03 e8 6b 85 02 e1 48 85 c0 0f 84 bd
Apr 7 20:01:45 minisf kernel: [ 75.513110] RSP: 0018:ffffae858054fcb0 EFLAGS: 00010246
Apr 7 20:01:45 minisf kernel: [ 75.513111] RAX: ffffae858054fda0 RBX: 0000000000000000 RCX: ffff8caa80059028
Apr 7 20:01:45 minisf kernel: [ 75.513112] RDX: 0000000000000001 RSI: 0000000000000dc0 RDI: 0000000000000000
Apr 7 20:01:45 minisf kernel: [ 75.513113] RBP: ffff8caa9396a800 R08: ffff8caa9396aa28 R09: ffff8caa80e2a074
Apr 7 20:01:45 minisf kernel: [ 75.513113] R10: 000000000000000f R11: 000000000000000f R12: ffff8caa9396aa20
Apr 7 20:01:45 minisf kernel: [ 75.513114] R13: ffff8caa865f1800 R14: 0000000000000000 R15: ffff8caa865f1805
Apr 7 20:01:45 minisf kernel: [ 75.513115] FS: 0000000000000000(0000) GS:ffff8cb17e300000(0000) knlGS:0000000000000000
Apr 7 20:01:45 minisf kernel: [ 75.513115] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Apr 7 20:01:45 minisf kernel: [ 75.513116] CR2: 0000000000000060 CR3: 0000000477810000 CR4: 0000000000750ee0
Apr 7 20:01:45 minisf kernel: [ 75.513117] PKRU: 55555554
Apr 7 20:01:45 minisf kernel: [ 75.513117] Call Trace:
Apr 7 20:01:45 minisf kernel: [ 75.513120] <TASK>
Apr 7 20:01:45 minisf kernel: [ 75.513122] ? __die_body.cold+0x1a/0x1f
Apr 7 20:01:45 minisf kernel: [ 75.513125] ? page_fault_oops+0xae/0x280
Apr 7 20:01:45 minisf kernel: [ 75.513127] ? exc_page_fault+0x71/0x170
Apr 7 20:01:45 minisf kernel: [ 75.513129] ? asm_exc_page_fault+0x22/0x30
Apr 7 20:01:45 minisf kernel: [ 75.513133] ? amdgpu_amdkfd_gpuvm_restore_process_bos+0x75/0x680 [amdgpu]
Apr 7 20:01:45 minisf kernel: [ 75.513235] ? load_balance+0xa95/0xd70
Apr 7 20:01:45 minisf kernel: [ 75.513238] ? psi_group_change+0x151/0x340
Apr 7 20:01:45 minisf kernel: [ 75.513240] ? psi_task_switch+0xd7/0x230
Apr 7 20:01:45 minisf kernel: [ 75.513242] ? __switch_to_asm+0x3a/0x60
Apr 7 20:01:45 minisf kernel: [ 75.513244] ? finish_task_switch.isra.0+0x8f/0x2d0
Apr 7 20:01:45 minisf kernel: [ 75.513246] restore_process_worker+0x30/0xf0 [amdgpu]
Apr 7 20:01:45 minisf kernel: [ 75.513347] process_one_work+0x1e5/0x3b0
Apr 7 20:01:45 minisf kernel: [ 75.513351] worker_thread+0x50/0x3a0
Apr 7 20:01:45 minisf kernel: [ 75.513353] ? rescuer_thread+0x390/0x390
Apr 7 20:01:45 minisf kernel: [ 75.513354] kthread+0xd8/0x100
Apr 7 20:01:45 minisf kernel: [ 75.513356] ? kthread_complete_and_exit+0x20/0x20
Apr 7 20:01:45 minisf kernel: [ 75.513358] ret_from_fork+0x22/0x30
Apr 7 20:01:45 minisf kernel: [ 75.513360] </TASK>
Apr 7 20:01:45 minisf kernel: [ 75.513361] Modules linked in: nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink xfrm_user xfrm_algo br_netfilter bridge stp llc binfmt_misc cmac algif_hash algif_skcipher af_alg bnep overlay ip6t_REJECT nf_reject_ipv6 xt_hl ip6_tables ip6t_rt ipt_REJECT nf_reject_ipv4 mt7921e btusb xt_LOG btrtl mt7921_common nf_log_syslog btbcm btintel mt76_connac_lib btmtk amdgpu xt_comment mt76 bluetooth squashfs snd_hda_codec_hdmi nft_limit snd_hda_intel mac80211 jitterentropy_rng snd_intel_dspcfg intel_rapl_msr snd_intel_sdw_acpi snd_usb_audio gpu_sched intel_rapl_common libarc4 snd_hda_codec drm_buddy snd_usbmidi_lib nls_ascii edac_mce_amd drbg uvcvideo drm_display_helper nls_cp437 snd_rawmidi snd_hda_core ansi_cprng videobuf2_vmalloc cfg80211 snd_pci_acp6x snd_seq_device snd_hwdep cec vfat videobuf2_memops rc_core kvm_amd ecdh_generic snd_pcm videobuf2_v4l2 fat snd_pci_acp5x drm_ttm_helper ecc videobuf2_common joydev cdc_acm loop rfkill snd_timer ttm snd_rn_pci_acp3x kvm drm_kms_helper
Apr 7 20:01:45 minisf kernel: [ 75.513392] snd_acp_config snd xt_limit sp5100_tco irqbypass snd_soc_acpi i2c_algo_bit ccp snd_pci_acp3x soundcore xt_addrtype k10temp rapl watchdog wmi_bmof efi_pstore pcspkr xt_tcpudp evdev acpi_cpufreq button xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tables videodev libcrc32c nfnetlink mc drm fuse configfs efivarfs ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 crc32c_generic hid_cmedia hid_logitech_hidpp hid_logitech_dj hid_generic crc32_pclmul crc32c_intel usbhid hid ghash_clmulni_intel sha512_ssse3 sha512_generic sha256_ssse3 sha1_ssse3 nvme ahci libahci xhci_pci nvme_core libata xhci_hcd t10_pi crc64_rocksoft_generic aesni_intel crypto_simd crc64_rocksoft cryptd usbcore scsi_mod igc crc_t10dif crct10dif_generic i2c_piix4 crct10dif_pclmul crc64 crct10dif_common usb_common scsi_common video wmi gpio_amdpt gpio_generic
Apr 7 20:01:45 minisf kernel: [ 75.513421] CR2: 0000000000000060
Apr 7 20:01:45 minisf kernel: [ 75.513422] ---[ end trace 0000000000000000 ]---
Apr 7 20:01:45 minisf kernel: [ 75.584387] ata4: SATA link down (SStatus 0 SControl 330)
Apr 7 20:01:45 minisf kernel: [ 75.584408] ata5: SATA link down (SStatus 0 SControl 330)
Apr 7 20:01:45 minisf kernel: [ 75.584430] ata1: SATA link down (SStatus 0 SControl 330)
Apr 7 20:01:45 minisf kernel: [ 75.584441] ata7: SATA link down (SStatus 0 SControl 300)
Apr 7 20:01:45 minisf kernel: [ 75.584443] ata8: SATA link down (SStatus 0 SControl 300)
Apr 7 20:01:45 minisf kernel: [ 75.584449] ata2: SATA link down (SStatus 0 SControl 330)
Apr 7 20:01:45 minisf kernel: [ 75.584466] ata3: SATA link down (SStatus 0 SControl 330)
Apr 7 20:01:45 minisf kernel: [ 75.584487] ata6: SATA link down (SStatus 0 SControl 330)
Apr 7 20:01:45 minisf kernel: [ 75.616395] usb 1-1: reset full-speed USB device number 2 using xhci_hcd
Apr 7 20:01:45 minisf kernel: [ 75.623721] RIP: 0010:amdgpu_amdkfd_gpuvm_restore_process_bos+0x75/0x680 [amdgpu]
Apr 7 20:01:45 minisf kernel: [ 75.623843] Code: 00 00 48 89 84 24 e0 00 00 00 48 89 84 24 e8 00 00 00 48 8d 84 24 f0 00 00 00 48 89 84 24 f0 00 00 00 48 89 84 24 f8 00 00 00 <8b> 47 60 48 8d 3c c0 48 c1 e7 03 e8 6b 85 02 e1 48 85 c0 0f 84 bd
Apr 7 20:01:45 minisf kernel: [ 75.623844] RSP: 0018:ffffae858054fcb0 EFLAGS: 00010246
Apr 7 20:01:45 minisf kernel: [ 75.623844] RAX: ffffae858054fda0 RBX: 0000000000000000 RCX: ffff8caa80059028
Apr 7 20:01:45 minisf kernel: [ 75.623845] RDX: 0000000000000001 RSI: 0000000000000dc0 RDI: 0000000000000000
Apr 7 20:01:45 minisf kernel: [ 75.623845] RBP: ffff8caa9396a800 R08: ffff8caa9396aa28 R09: ffff8caa80e2a074
Apr 7 20:01:45 minisf kernel: [ 75.623846] R10: 000000000000000f R11: 000000000000000f R12: ffff8caa9396aa20
Apr 7 20:01:45 minisf kernel: [ 75.623846] R13: ffff8caa865f1800 R14: 0000000000000000 R15: ffff8caa865f1805
Apr 7 20:01:45 minisf kernel: [ 75.623847] FS: 0000000000000000(0000) GS:ffff8cb17e300000(0000) knlGS:0000000000000000
Apr 7 20:01:45 minisf kernel: [ 75.623847] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Apr 7 20:01:45 minisf kernel: [ 75.623848] CR2: 0000000000000060 CR3: 0000000477810000 CR4: 0000000000750ee0
Apr 7 20:01:45 minisf kernel: [ 75.623849] PKRU: 55555554
Apr 7 20:01:45 minisf kernel: [ 75.623849] note: kworker/u64:8[102] exited with irqs disabled
Apr 7 20:01:45 minisf kernel: [ 75.628470] nvme nvme0: 8/0/0 default/read/poll queues
Apr 7 20:01:45 minisf kernel: [ 76.039922] usb 1-6: reset full-speed USB device number 6 using xhci_hcd
Apr 7 20:01:45 minisf kernel: [ 76.532304] usb 1-2: reset high-speed USB device number 3 using xhci_hcd
Apr 7 20:01:45 minisf kernel: [ 76.892313] usb 1-5: reset high-speed USB device number 4 using xhci_hcd
Apr 7 20:01:45 minisf kernel: [ 77.246326] usb 2-2: reset SuperSpeed USB device number 2 using xhci_hcd
Apr 7 20:01:45 minisf kernel: [ 77.360800] usb 1-2.4: reset high-speed USB device number 5 using xhci_hcd
Apr 7 20:01:45 minisf kernel: [ 77.512863] OOM killer enabled.
Apr 7 20:01:45 minisf kernel: [ 77.512865] Restarting tasks ... done.
```
### CPU
AMD
### Other software
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3527/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3710
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3710/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3710/comments
|
https://api.github.com/repos/ollama/ollama/issues/3710/events
|
https://github.com/ollama/ollama/pull/3710
| 2,249,201,612
|
PR_kwDOJ0Z1Ps5s-NFn
| 3,710
|
update jetson tutorial
|
{
"login": "remy415",
"id": 105550370,
"node_id": "U_kgDOBkqSIg",
"avatar_url": "https://avatars.githubusercontent.com/u/105550370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remy415",
"html_url": "https://github.com/remy415",
"followers_url": "https://api.github.com/users/remy415/followers",
"following_url": "https://api.github.com/users/remy415/following{/other_user}",
"gists_url": "https://api.github.com/users/remy415/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remy415/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remy415/subscriptions",
"organizations_url": "https://api.github.com/users/remy415/orgs",
"repos_url": "https://api.github.com/users/remy415/repos",
"events_url": "https://api.github.com/users/remy415/events{/privacy}",
"received_events_url": "https://api.github.com/users/remy415/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-17T20:18:07
| 2024-04-18T23:02:09
| 2024-04-18T23:02:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3710",
"html_url": "https://github.com/ollama/ollama/pull/3710",
"diff_url": "https://github.com/ollama/ollama/pull/3710.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3710.patch",
"merged_at": "2024-04-18T23:02:09"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3710/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4390
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4390/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4390/comments
|
https://api.github.com/repos/ollama/ollama/issues/4390/events
|
https://github.com/ollama/ollama/issues/4390
| 2,291,969,108
|
I_kwDOJ0Z1Ps6InKxU
| 4,390
|
Feature Request: Customizable JSON Encoder/Decoder Configuration for REST API Endpoints and other components that might need it
|
{
"login": "H0llyW00dzZ",
"id": 17626300,
"node_id": "MDQ6VXNlcjE3NjI2MzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/17626300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/H0llyW00dzZ",
"html_url": "https://github.com/H0llyW00dzZ",
"followers_url": "https://api.github.com/users/H0llyW00dzZ/followers",
"following_url": "https://api.github.com/users/H0llyW00dzZ/following{/other_user}",
"gists_url": "https://api.github.com/users/H0llyW00dzZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/H0llyW00dzZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/H0llyW00dzZ/subscriptions",
"organizations_url": "https://api.github.com/users/H0llyW00dzZ/orgs",
"repos_url": "https://api.github.com/users/H0llyW00dzZ/repos",
"events_url": "https://api.github.com/users/H0llyW00dzZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/H0llyW00dzZ/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 1
| 2024-05-13T06:49:23
| 2024-11-06T17:37:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
## Description
Since this repository is written in Go, it is possible to customize the `JSON encoding/decoding` configuration for `REST API endpoints`.
However, it may require refactoring from scratch since there are many `hard-coded` instances of `JSON` using the `standard library`.
## Proposed Feature
The proposed feature would work as follows:
- Users can customize the `JSON Encoder/Decoder` by using `another package`. This idea is inspired by [`Fiber`](https://docs.gofiber.io/guide/faster-fiber), which provides improved flexibility.
- By implementing this feature, users will have more control over the JSON serialization and deserialization process, allowing for optimizations and customizations specific to their needs.
## Benefits
- Improved flexibility in handling JSON encoding/decoding for REST API endpoints.
- Ability to optimize performance by using alternative JSON packages.
- Greater control over the JSON serialization and deserialization process.
## Considerations
- Refactoring the existing codebase may be required to remove hard-coded instances of JSON using the standard library.
- Thorough testing should be conducted to ensure compatibility and performance improvements.
## References
- [`Fiber` - Custom JSON Encoding/Decoding](https://docs.gofiber.io/guide/faster-fiber)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4390/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2752
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2752/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2752/comments
|
https://api.github.com/repos/ollama/ollama/issues/2752/events
|
https://github.com/ollama/ollama/issues/2752
| 2,152,976,820
|
I_kwDOJ0Z1Ps6AU9G0
| 2,752
|
CUDA error: out of memory
|
{
"login": "kennethwork101",
"id": 147571330,
"node_id": "U_kgDOCMvCgg",
"avatar_url": "https://avatars.githubusercontent.com/u/147571330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kennethwork101",
"html_url": "https://github.com/kennethwork101",
"followers_url": "https://api.github.com/users/kennethwork101/followers",
"following_url": "https://api.github.com/users/kennethwork101/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethwork101/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kennethwork101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethwork101/subscriptions",
"organizations_url": "https://api.github.com/users/kennethwork101/orgs",
"repos_url": "https://api.github.com/users/kennethwork101/repos",
"events_url": "https://api.github.com/users/kennethwork101/events{/privacy}",
"received_events_url": "https://api.github.com/users/kennethwork101/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-02-25T22:47:20
| 2024-02-25T23:06:51
| 2024-02-25T23:01:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
CUDA error: out of memory
ollama version is 0.1.27
windows 11 wsl2 ubuntu 22.04
RTX 4070 TI
Running a set of tests, with each test loading a different model using ollama.
After some time during testing, we ran into the CUDA error: out of memory 3 times.
Note that each of the models being loaded is less than 10 GB in size, and the RTX 4070 TI should have 12 GB VRAM.
Is this an issue with ollama, or should I reduce the number of tests?
..................................................................................................
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: yes
ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070 Ti, compute capability 8.9, VMM: yes
llama_kv_cache_init: CUDA0 KV buffer size = 2048.00 MiB
llama_new_context_with_model: KV self size = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 17.04 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 296.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 8.00 MiB
llama_new_context_with_model: graph splits (measure): 3
CUDA error: out of memory
current device: 0, in function ggml_cuda_pool_malloc_vmm at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:7976
cuMemAddressReserve(&g_cuda_pool_addr[device], CUDA_POOL_VMM_MAX_SIZE, 0, 0, 0)
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:243: !"CUDA error"
SIGABRT: abort
PC=0x7fbf6e5f79fc m=26 sigcode=18446744073709551610
signal arrived during cgo execution

goroutine 296 [syscall, 13 minutes]:
runtime.cgocall(0x9bcdd0, 0xc000520748)
/usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc000520720 sp=0xc0005206e8 pc=0x409b0b
github.com/jmorganca/ollama/llm._Cfunc_dyn_llama_server_init({0x7fbed8001270, 0x7fbecc44c250, 0x7fbecc43cca0, 0x7fbecc43ff20, 0x7fbecc44fc00, 0x7fbecc449840, 0x7fbecc43fba0, 0x7fbecc43cd20, 0x7fbecc450500, 0x7fbecc44f7a0, ...}, ...)
The second error is similar but some of the buffer sizes are different:
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070 Ti, compute capability 8.9, VMM: yes
llama_kv_cache_init: CUDA0 KV buffer size = 768.00 MiB
llama_new_context_with_model: KV self size = 768.00 MiB, K (f16): 384.00 MiB, V (f16): 384.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 17.04 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 296.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 8.00 MiB
llama_new_context_with_model: graph splits (measure): 3
CUDA error: out of memory
----------
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070 Ti, compute capability 8.9, VMM: yes
llama_kv_cache_init: CUDA0 KV buffer size = 768.00 MiB
llama_new_context_with_model: KV self size = 768.00 MiB, K (f16): 384.00 MiB, V (f16): 384.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 17.04 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 296.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 8.00 MiB
llama_new_context_with_model: graph splits (measure): 3
[GIN] 2024/02/25 - 14:24:49 | 200 | 421.392µs | 127.0.0.1 | GET "/api/version"
CUDA error: out of memory
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2752/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4769
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4769/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4769/comments
|
https://api.github.com/repos/ollama/ollama/issues/4769/events
|
https://github.com/ollama/ollama/issues/4769
| 2,329,291,838
|
I_kwDOJ0Z1Ps6K1iw-
| 4,769
|
Infinitely generating irrelevant responses when running phi3-mini in Linux Terminal
|
{
"login": "MomenAbdelwadoud",
"id": 66366532,
"node_id": "MDQ6VXNlcjY2MzY2NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/66366532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MomenAbdelwadoud",
"html_url": "https://github.com/MomenAbdelwadoud",
"followers_url": "https://api.github.com/users/MomenAbdelwadoud/followers",
"following_url": "https://api.github.com/users/MomenAbdelwadoud/following{/other_user}",
"gists_url": "https://api.github.com/users/MomenAbdelwadoud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MomenAbdelwadoud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MomenAbdelwadoud/subscriptions",
"organizations_url": "https://api.github.com/users/MomenAbdelwadoud/orgs",
"repos_url": "https://api.github.com/users/MomenAbdelwadoud/repos",
"events_url": "https://api.github.com/users/MomenAbdelwadoud/events{/privacy}",
"received_events_url": "https://api.github.com/users/MomenAbdelwadoud/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2024-06-01T19:00:46
| 2024-06-01T19:01:37
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I set up a Modelfile that loads Phi-3-mini-instruct, and whatever input I give it, it starts generating an endless response related to coding, as shown in the screenshot.
Here is the Content of the model file:
```
FROM ./Phi-3-mini-4k-instruct.Q4_0.gguf
PARAMETER temperature 0.1
PARAMETER num_predict 50
```
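One likely cause (an assumption on my part, not confirmed in this issue) is that a raw GGUF import has no chat template or stop tokens, so the model never knows where a turn ends. A Modelfile sketch adding Phi-3's template and stop token:

```
FROM ./Phi-3-mini-4k-instruct.Q4_0.gguf
PARAMETER temperature 0.1
PARAMETER num_predict 50
PARAMETER stop "<|end|>"
TEMPLATE """<|user|>
{{ .Prompt }}<|end|>
<|assistant|>
"""
```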
Loaded with `ollama create phi3 -f FileName`, without adding any extension to the file.
The issue also existed before adding the parameters.

### OS
Linux
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.39
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4769/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6279
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6279/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6279/comments
|
https://api.github.com/repos/ollama/ollama/issues/6279/events
|
https://github.com/ollama/ollama/pull/6279
| 2,457,284,038
|
PR_kwDOJ0Z1Ps536jVv
| 6,279
|
feat: Introduce K/V Context Quantisation (vRAM improvements)
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 181
| 2024-08-09T07:22:10
| 2024-12-07T05:14:34
| 2024-12-03T23:57:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6279",
"html_url": "https://github.com/ollama/ollama/pull/6279",
"diff_url": "https://github.com/ollama/ollama/pull/6279.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6279.patch",
"merged_at": "2024-12-03T23:57:19"
}
|
This PR introduces optional K/V (context) cache quantisation.
> TLDR; Set your k/v cache to Q8_0 and use 50% less vRAM for no noticeable quality impact.
Ollama is arguably the only remaining popular model server that does not support this.
This PR brings Ollama's K/V memory usage in line with the likes of ExLlamav2, MistralRS, MLX, vLLM and those using llama.cpp directly, which have supported this for some time.
- The scheduler has been updated to take quantised K/V estimates into account.
- Documentation added in the FAQ.
- Re-factored (_many_ times) over the past n months (since July) to fix various merge conflicts and the new runners/cgo implementation.
I've been running from this branch with q8_0 since I raised the original PR on **2024-07-24**. It's been stable and unlocks a lot of models I wouldn't otherwise be able to run with a decent context size.
For future reference, llama.cpp's perplexity benchmarks are scattered all over the place and things have improved since these were done but to give you an idea - https://github.com/ggerganov/llama.cpp/pull/7412
---
Context:
Without K/V context cache quantisation every user is likely wasting vast amounts of (v)RAM - or simply thinking they're not able to run larger models or context sizes due to their available (v)RAM.
LLM Servers that support K/V context quantisation:
- ✅ llama.cpp
- ✅ exllamav2 (along with TabbyAPI)
- ✅ MLX
- ✅ Mistral.RS
- ✅ vLLM
- ✅ Transformers
- ❌ Ollama
As things are currently with Ollama your only options are to:
- Use another LLM server.
- Not run the models/context sizes you want.
- Build and run Ollama from the branch in this PR.
None of those are ideal, and over the second half of this year I've spoken with a lot of folks that are building and running Ollama from this feature branch, which has pushed me to keep it updated with frequent rebasing and refactoring while awaiting review.
This is not ideal and I would like to close off this PR and not have people rely on my fork.
---
_PR recreated after Github broke https://github.com/ollama/ollama/pull/5894_
<img width="480" alt="image" src="https://github.com/user-attachments/assets/739ffc99-4fc5-49b3-b5e1-4116852ae2f3">
## Impact
- With defaults (f16) - none, behaviour is the same as the current defaults.
- With q8_0
- **The K/V context cache will consume 1/2 the vRAM** (!)
- A _very_ small loss in quality within the cache
- With q4_0
- **the K/V context cache will consume 1/4 the vRAM** (!!)
- A small/medium loss in quality within the cache
- For example, loading llama3.1 8b with a 32K context drops vRAM usage by cache from 4GB to 1.1GB
  - Quant types in between (q4_1 through q5_1) also exist; however, Ollama does not currently support the other llama.cpp quantisation types (`q5_1`, `q5_0`, `q4_1`, `iq4_nl`)
- Fixes https://github.com/ollama/ollama/issues/5091
- Related discussion in llama.cpp - https://github.com/ggerganov/llama.cpp/discussions/5932
- (Note that ExllamaV2 has a similar feature - https://github.com/turboderp/exllamav2/blob/master/doc/qcache_eval.md)
## Screenshots
Example of estimated (v)RAM savings
The numbers within each column (`F16 (Q8_0,Q4_0)`) are how much (v)RAM is required to run the model at the given K/V cache quant type.
For example: `30.8(22.8,18.8)` would mean:
- 30.8GB for F16 K/V
- 22.8GB for Q8_0 K/V
- 18.8GB for Q4_0 K/V
<img width="1788" alt="SCR-20241116-haow" src="https://github.com/user-attachments/assets/cd0ebacd-caa3-4215-9901-2aedc0607903">
(via [ingest](https://github.com/sammcj/ingest/) or [gollama](https://github.com/sammcj/gollama))
### f16

### q4_0

### q8_0

## Performance
llama.cpp perplexity measurements: https://github.com/ggerganov/llama.cpp/pull/7412#issuecomment-2120427347 - note that things have improved even further since these measurements which are now quite dated.
---
I broke down why this is important in a conversation with someone recently:
> Let's say you're running a model; its (v)RAM usage is determined by two things:
> - The size of the model (params, quant type)
> - The size of your context
>
> Let's assume:
> - Your model is 7b at q4_k_m or something and takes up 7GB of memory.
> - You're working with a small code repo, or a few Obsidian documents that are around 30-40K tokens in total.
>
> Your memory usage might look like this:
> - 7GB for the model
> - 5GB for the context
> - = 12GB of memory required
>
> With q8 quantised k/v, that becomes:
> - 7GB for the model
> - 2.5GB for the context
> - = 9.5GB of memory required
>
> If you go here: https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
>
> - Select exl2 (exllamav2 models)
> - Enter a model like Qwen/Qwen2.5-Coder-32B-Instruct
> - Enter a context size that would commonly be used for coding, e.g. 32k, or maybe 64k
>
> Note the calculated memory requirements at full f16, now try q8 or q4 even (exllama is very good at this and q4 has essentially no loss)
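The savings above follow from simple arithmetic on the K/V cache size. A minimal sketch (the model dimensions below are hypothetical, and the per-element byte costs for the quantised types are approximations that include the block scales):

```python
# Rough K/V cache size estimate. Bytes per element is exact for f16 (2.0)
# and approximate for quantised types (Q8_0 ~1.0625 B/elem, Q4_0 ~0.5625
# B/elem once block scales are included).
def kv_cache_gb(n_layers, n_kv_heads, head_dim, ctx, bytes_per_elem):
    # 2x accounts for the separate K and V tensors
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_elem / 1e9

# Hypothetical 7B-class model: 32 layers, 8 KV heads, head_dim 128, 32K context
for name, b in [("f16", 2.0), ("q8_0", 1.0625), ("q4_0", 0.5625)]:
    print(f"{name}: {kv_cache_gb(32, 8, 128, 32768, b):.2f} GB")
```

The q8_0 figure lands at roughly half of f16, matching the savings shown in the screenshots above.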
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6279/reactions",
"total_count": 118,
"+1": 30,
"-1": 0,
"laugh": 0,
"hooray": 27,
"confused": 0,
"heart": 40,
"rocket": 15,
"eyes": 6
}
|
https://api.github.com/repos/ollama/ollama/issues/6279/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2942
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2942/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2942/comments
|
https://api.github.com/repos/ollama/ollama/issues/2942/events
|
https://github.com/ollama/ollama/pull/2942
| 2,170,198,208
|
PR_kwDOJ0Z1Ps5oxQ-k
| 2,942
|
[FEAT] Add `init` command
|
{
"login": "m4tt72",
"id": 20604769,
"node_id": "MDQ6VXNlcjIwNjA0NzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/20604769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m4tt72",
"html_url": "https://github.com/m4tt72",
"followers_url": "https://api.github.com/users/m4tt72/followers",
"following_url": "https://api.github.com/users/m4tt72/following{/other_user}",
"gists_url": "https://api.github.com/users/m4tt72/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m4tt72/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m4tt72/subscriptions",
"organizations_url": "https://api.github.com/users/m4tt72/orgs",
"repos_url": "https://api.github.com/users/m4tt72/repos",
"events_url": "https://api.github.com/users/m4tt72/events{/privacy}",
"received_events_url": "https://api.github.com/users/m4tt72/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-05T21:47:35
| 2024-05-10T11:18:06
| 2024-05-07T16:59:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2942",
"html_url": "https://github.com/ollama/ollama/pull/2942",
"diff_url": "https://github.com/ollama/ollama/pull/2942.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2942.patch",
"merged_at": null
}
|
Inspired by `docker init`, this command will create a new `Modelfile` in the current directory.
If the file already exists, it will ask for confirmation before overwriting it.
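The described behaviour (create a starter file, ask before overwriting) can be sketched as follows. This is a behavioural sketch in Python, not the PR's Go implementation, and the template contents are illustrative:

```python
from pathlib import Path

# Illustrative starter template; the PR's actual template may differ.
MODELFILE_TEMPLATE = """FROM llama3
# PARAMETER temperature 0.7
# SYSTEM You are a helpful assistant.
"""

def init_modelfile(directory=".", force=False):
    """Create a starter Modelfile; refuse to overwrite unless forced."""
    path = Path(directory) / "Modelfile"
    if path.exists() and not force:
        return False  # caller should prompt the user for confirmation
    path.write_text(MODELFILE_TEMPLATE)
    return True
```

A CLI wrapper would call `init_modelfile()` and, on `False`, prompt the user before retrying with `force=True`.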
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2942/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4914
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4914/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4914/comments
|
https://api.github.com/repos/ollama/ollama/issues/4914/events
|
https://github.com/ollama/ollama/issues/4914
| 2,340,874,859
|
I_kwDOJ0Z1Ps6Lhupr
| 4,914
|
Request Ollama Web API to fetch all models data in the remote
|
{
"login": "edwinjhlee",
"id": 4426319,
"node_id": "MDQ6VXNlcjQ0MjYzMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4426319?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edwinjhlee",
"html_url": "https://github.com/edwinjhlee",
"followers_url": "https://api.github.com/users/edwinjhlee/followers",
"following_url": "https://api.github.com/users/edwinjhlee/following{/other_user}",
"gists_url": "https://api.github.com/users/edwinjhlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edwinjhlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edwinjhlee/subscriptions",
"organizations_url": "https://api.github.com/users/edwinjhlee/orgs",
"repos_url": "https://api.github.com/users/edwinjhlee/repos",
"events_url": "https://api.github.com/users/edwinjhlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/edwinjhlee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-07T17:20:26
| 2024-06-09T17:51:32
| 2024-06-09T17:25:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is it possible for the Ollama team to provide an API to fetch remote model data?
---
I am developing an Ollama CLI. I currently use data scraped from the Ollama website. If this API were available, I could request the data through an official API instead.
This is the demo:
https://www.x-cmd.com/mod/ollama
<img width="1494" alt="image" src="https://github.com/ollama/ollama/assets/4426319/04db018d-08f9-4e2d-bbb3-41077bbb0807">
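For contrast, Ollama's existing local API already lists installed models via `GET /api/tags`; the request here is for a comparable endpoint covering the remote registry. A minimal sketch of parsing an `/api/tags`-shaped response (the sample payload is illustrative):

```python
import json

def model_names(tags_json: str):
    """Extract model names from an /api/tags-style response body."""
    return [m["name"] for m in json.loads(tags_json)["models"]]

# Illustrative response body in the shape returned by GET /api/tags
sample = '{"models": [{"name": "llama3:8b"}, {"name": "mistral:7b"}]}'
```

A remote-registry equivalent would let tools like the CLI above drop the web-scraping step entirely.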
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4914/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6560
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6560/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6560/comments
|
https://api.github.com/repos/ollama/ollama/issues/6560/events
|
https://github.com/ollama/ollama/issues/6560
| 2,495,102,637
|
I_kwDOJ0Z1Ps6UuD6t
| 6,560
|
Logging final input after prompting specified in model file as a debug flag
|
{
"login": "adela185",
"id": 77362834,
"node_id": "MDQ6VXNlcjc3MzYyODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/77362834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adela185",
"html_url": "https://github.com/adela185",
"followers_url": "https://api.github.com/users/adela185/followers",
"following_url": "https://api.github.com/users/adela185/following{/other_user}",
"gists_url": "https://api.github.com/users/adela185/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adela185/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adela185/subscriptions",
"organizations_url": "https://api.github.com/users/adela185/orgs",
"repos_url": "https://api.github.com/users/adela185/repos",
"events_url": "https://api.github.com/users/adela185/events{/privacy}",
"received_events_url": "https://api.github.com/users/adela185/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-29T17:09:11
| 2024-08-29T18:12:27
| 2024-08-29T18:12:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Just as the title says, it would be useful to log the final input text given to the model after it undergoes the prompting specified in the model file. This is for the Windows preview. The existing logging only prints the parameters and the API request, but not the final input.
|
{
"login": "adela185",
"id": 77362834,
"node_id": "MDQ6VXNlcjc3MzYyODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/77362834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adela185",
"html_url": "https://github.com/adela185",
"followers_url": "https://api.github.com/users/adela185/followers",
"following_url": "https://api.github.com/users/adela185/following{/other_user}",
"gists_url": "https://api.github.com/users/adela185/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adela185/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adela185/subscriptions",
"organizations_url": "https://api.github.com/users/adela185/orgs",
"repos_url": "https://api.github.com/users/adela185/repos",
"events_url": "https://api.github.com/users/adela185/events{/privacy}",
"received_events_url": "https://api.github.com/users/adela185/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6560/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/400
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/400/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/400/comments
|
https://api.github.com/repos/ollama/ollama/issues/400/events
|
https://github.com/ollama/ollama/pull/400
| 1,862,304,627
|
PR_kwDOJ0Z1Ps5Yi0A6
| 400
|
wip: decode gguf
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-08-22T22:52:05
| 2023-09-14T20:33:49
| 2023-08-30T14:21:10
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/400",
"html_url": "https://github.com/ollama/ollama/pull/400",
"diff_url": "https://github.com/ollama/ollama/pull/400.diff",
"patch_url": "https://github.com/ollama/ollama/pull/400.patch",
"merged_at": null
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/400/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/615
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/615/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/615/comments
|
https://api.github.com/repos/ollama/ollama/issues/615/events
|
https://github.com/ollama/ollama/pull/615
| 1,914,581,798
|
PR_kwDOJ0Z1Ps5bSd8Q
| 615
|
add `ollama run` flags: template, context, stop
|
{
"login": "sqs",
"id": 1976,
"node_id": "MDQ6VXNlcjE5NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sqs",
"html_url": "https://github.com/sqs",
"followers_url": "https://api.github.com/users/sqs/followers",
"following_url": "https://api.github.com/users/sqs/following{/other_user}",
"gists_url": "https://api.github.com/users/sqs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sqs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sqs/subscriptions",
"organizations_url": "https://api.github.com/users/sqs/orgs",
"repos_url": "https://api.github.com/users/sqs/repos",
"events_url": "https://api.github.com/users/sqs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sqs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-09-27T02:35:30
| 2024-11-21T09:17:13
| 2024-11-21T09:17:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/615",
"html_url": "https://github.com/ollama/ollama/pull/615",
"diff_url": "https://github.com/ollama/ollama/pull/615.diff",
"patch_url": "https://github.com/ollama/ollama/pull/615.patch",
"merged_at": null
}
|
These new `ollama run` flags make `ollama run` useful for debugging more advanced invocations of the Ollama generate API.
For example, the following command generates completions with context tokens for `const primes=[1,2,3,5,7`, a stop sequence (`;`), and a custom template:
```
ollama run --verbose --context 3075,544,1355,353,518,29896,29892,29906,29892,29941,29892,29945,29892,29955,29892 --template '{{.Prompt}}' --stop ';' codellama:7b-code ''
```
You can accomplish something similar with `curl` and the Ollama API, but it is easier to use the `ollama run` CLI and then you get the nice verbose timings output as well in an easy-to-consume form.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/615/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/615/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4148
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4148/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4148/comments
|
https://api.github.com/repos/ollama/ollama/issues/4148/events
|
https://github.com/ollama/ollama/issues/4148
| 2,278,721,519
|
I_kwDOJ0Z1Ps6H0ofv
| 4,148
|
Importing a Mistral finetune into Ollama fails with `invalid file magic`
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-04T04:17:02
| 2024-05-04T05:27:41
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Importing a custom Mistral-7B finetune into Ollama from safetensors fails with `invalid file magic`. Converting the same safetensors to gguf with llama.cpp works on import.
Steps to reproduce:
- Finetune Mistral with MLX, fuse the lora to the model to get the resulting safetensors.
- Create the modelfile:
```
FROM ./model
TEMPLATE """[INST] {{ .System }} {{ .Prompt }} [/INST]"""
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
```
- Import the model
```
❯ ollama create my-finetune -f ~/models/my-finetune/Modelfile
transferring model data
unpacking model metadata
processing tensors
creating model layer
Error: invalid file magic
```
This Modelfile works, after converting the safetensors with llama.cpp:
```
FROM ./ggml-model-Q4_K_M.gguf
TEMPLATE """[INST] {{ .System }} {{ .Prompt }} [/INST]"""
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
```
### OS
macOS
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.1.33
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4148/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2151
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2151/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2151/comments
|
https://api.github.com/repos/ollama/ollama/issues/2151/events
|
https://github.com/ollama/ollama/issues/2151
| 2,095,079,810
|
I_kwDOJ0Z1Ps584GGC
| 2,151
|
Layer splitting on macOS
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-01-23T01:39:49
| 2024-05-10T01:07:59
| 2024-05-10T01:07:59
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Llama.cpp now supports splitting layers between Metal and the CPU; we should implement this once we fix #1952
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2151/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2151/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7458
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7458/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7458/comments
|
https://api.github.com/repos/ollama/ollama/issues/7458/events
|
https://github.com/ollama/ollama/issues/7458
| 2,627,875,374
|
I_kwDOJ0Z1Ps6cojIu
| 7,458
|
mistake:ollama run llama3_8b_chat_uncensored_q4_0
|
{
"login": "1015g",
"id": 185006875,
"node_id": "U_kgDOCwb7Gw",
"avatar_url": "https://avatars.githubusercontent.com/u/185006875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1015g",
"html_url": "https://github.com/1015g",
"followers_url": "https://api.github.com/users/1015g/followers",
"following_url": "https://api.github.com/users/1015g/following{/other_user}",
"gists_url": "https://api.github.com/users/1015g/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1015g/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1015g/subscriptions",
"organizations_url": "https://api.github.com/users/1015g/orgs",
"repos_url": "https://api.github.com/users/1015g/repos",
"events_url": "https://api.github.com/users/1015g/events{/privacy}",
"received_events_url": "https://api.github.com/users/1015g/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-10-31T21:35:52
| 2024-11-13T22:05:27
| 2024-11-13T22:05:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Why is this happening? I successfully converted it to Ollama format using the `ollama create llama3_8b_chat_uncensored_q4_0 -f Modelfile` command and successfully loaded and ran it with the `ollama run` command, but why is it answering with such unreasonable nonsense? Is this GGUF model incompatible with Ollama?
Microsoft Windows [version 10.0.19045.5011]
(c) Microsoft Corporation. All rights reserved.
C:\Users\Administrator>ollama -v
ollama version is 0.3.14
C:\Users\Administrator>ollama run llama3_8b_chat_uncensored_q4_0
>>> Hello
,I'm Wang Wei. Now, let's talk about collection classes in Java, namely List and Map. For example, there is such a piece of code: ```
public static void main(String[] args) {
List list = new ArrayList();
for (int i=0; i<10; ++i) {
list.add(i);
}
// Use the for-each statement
for (Integer integer : list) {
System.out.println(integer + " ");
}
}
``` Let's break down this code: - Start by creating an array with ArrayList, and then add the data;
- Then use the for-each statement to loop through this array; You may ask, why is there no 'int i=0; 'Such a statement? This is because in Java 8, the variables of the for-each loop are only valid in the internal scope. In other words, the variables in the for-each loop are only used in the loop body and cannot be used externally. If you want to use it externally, you can do so with a normal for loop. Now let's take a look at the 'Map', which looks like this: ```
public static void main(String[] args) {
Map map = new HashMap();
for (int i=0; i<10; ++i) {
map.put(i, "value" + i);
}
// Use the for-each statement
for (Entry entry : map.entrySet()) {
System.out.println(entry.getKey() + ":" + entry.getValue());
}
}
``` Let's analyze this code: - Create a map with HashMap and add the data.
- Then use a for-each loop to iterate over this array; In Java 8, Maps can also use the for-each statement. If you want to use key-value pairs externally for variables, you can do so using a normal for loop. So why do ArrayList and HashMap use the for-each statement? Because they're all iterative. Of course, there are other types of collections that can also be iterable. Like what
: ```
List list = Arrays.asList(1, 2, 3);
Map map = EnumMap.of(Color.class, Color.RED);
``` If you want to use the for-each statement in Java 8, make sure your class implements the Iterable <T>interface or Collection<T>or other iterable interfaces. If you want to create your own collection class and can use the for-each statement, just implement the Iterable <T>interface. Specifically, just override the iterator() method. ```
public class MyList implements Iterable {
...
private List list; public MyList() {
this.list = new ArrayList();
} public void add(Object o) {
this.list.add(o);
}
@Override
public Iterator iterator() {
return this.list.iterator();
}
}
``` If you want to use the for-each statement to iterate over an array, you can also use Stream. Specifically, just call 'Arrays.stream(arrayName)'. ```
public static void main(String[] args) {
int[] array = new int[10];
for (int i=0; i<10; ++i) {
array[i] = i;
}
Arrays.stream(array).forEach(x -> System.out.println(x));
}
```
>>> hello
######### ### H2
This is a test header #### H3
This is another test header
[Link text](http://www.example.com)
**Bold text**
*Italicized text*
~Strike through text~
>Blockquote
- [ ] Todo list item 1
- [x] Todo list item 2

```
python
def hello():
print("Hello, world!")
``` ```css
body {
background-color: red;
}
```
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.14
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7458/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2463
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2463/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2463/comments
|
https://api.github.com/repos/ollama/ollama/issues/2463/events
|
https://github.com/ollama/ollama/issues/2463
| 2,130,246,460
|
I_kwDOJ0Z1Ps5--Ps8
| 2,463
|
Resume does not seem to work
|
{
"login": "da-z",
"id": 3681019,
"node_id": "MDQ6VXNlcjM2ODEwMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3681019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/da-z",
"html_url": "https://github.com/da-z",
"followers_url": "https://api.github.com/users/da-z/followers",
"following_url": "https://api.github.com/users/da-z/following{/other_user}",
"gists_url": "https://api.github.com/users/da-z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/da-z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/da-z/subscriptions",
"organizations_url": "https://api.github.com/users/da-z/orgs",
"repos_url": "https://api.github.com/users/da-z/repos",
"events_url": "https://api.github.com/users/da-z/events{/privacy}",
"received_events_url": "https://api.github.com/users/da-z/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-12T14:23:08
| 2024-02-12T15:27:33
| 2024-02-12T15:27:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I had about 4.5GB out of 49GB already downloaded but on a retry it restarted from scratch (same layer - edb02981b596...).
`ollama pull nous-hermes2-mixtral:8x7b-dpo-q8_0`
|
{
"login": "da-z",
"id": 3681019,
"node_id": "MDQ6VXNlcjM2ODEwMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3681019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/da-z",
"html_url": "https://github.com/da-z",
"followers_url": "https://api.github.com/users/da-z/followers",
"following_url": "https://api.github.com/users/da-z/following{/other_user}",
"gists_url": "https://api.github.com/users/da-z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/da-z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/da-z/subscriptions",
"organizations_url": "https://api.github.com/users/da-z/orgs",
"repos_url": "https://api.github.com/users/da-z/repos",
"events_url": "https://api.github.com/users/da-z/events{/privacy}",
"received_events_url": "https://api.github.com/users/da-z/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2463/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/886
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/886/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/886/comments
|
https://api.github.com/repos/ollama/ollama/issues/886/events
|
https://github.com/ollama/ollama/pull/886
| 1,958,046,331
|
PR_kwDOJ0Z1Ps5dk8k-
| 886
|
during linux install add the ollama service user to the current resolved user's group
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-23T21:23:35
| 2023-10-24T17:52:05
| 2023-10-24T17:52:05
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/886",
"html_url": "https://github.com/ollama/ollama/pull/886",
"diff_url": "https://github.com/ollama/ollama/pull/886.diff",
"patch_url": "https://github.com/ollama/ollama/pull/886.patch",
"merged_at": null
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/886/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1632
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1632/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1632/comments
|
https://api.github.com/repos/ollama/ollama/issues/1632/events
|
https://github.com/ollama/ollama/issues/1632
| 2,050,719,207
|
I_kwDOJ0Z1Ps56O33n
| 1,632
|
Only utilizing one thread - Unraid
|
{
"login": "evanrodgers",
"id": 36175609,
"node_id": "MDQ6VXNlcjM2MTc1NjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/36175609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/evanrodgers",
"html_url": "https://github.com/evanrodgers",
"followers_url": "https://api.github.com/users/evanrodgers/followers",
"following_url": "https://api.github.com/users/evanrodgers/following{/other_user}",
"gists_url": "https://api.github.com/users/evanrodgers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/evanrodgers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evanrodgers/subscriptions",
"organizations_url": "https://api.github.com/users/evanrodgers/orgs",
"repos_url": "https://api.github.com/users/evanrodgers/repos",
"events_url": "https://api.github.com/users/evanrodgers/events{/privacy}",
"received_events_url": "https://api.github.com/users/evanrodgers/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-20T14:56:36
| 2024-03-11T17:44:40
| 2024-03-11T17:44:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi all,
Right now I am using Ollama 0.1.17 in an Unraid environment. The app works great, but it's only utilizing one thread. I did some light troubleshooting by adding the "num_thread" parameter to the Modelfile as shown below, but it still only utilizes one thread. I've verified this by watching the Unraid dashboard and htop.

Modelfile:
```
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM solar:10.7b-instruct-v1-q4_K_S
FROM /root/.ollama/models/blobs/sha256:034d7d28eda7c13de702e09978ffbc85d8fcc0ac173e5ecb4d0c626507fb25b6
TEMPLATE """### System:
{{ .System }}
### User:
{{ .Prompt }}
### Assistant:
"""
PARAMETER num_ctx 4096
PARAMETER num_thread 6
PARAMETER stop "</s>"
PARAMETER stop "### System:"
PARAMETER stop "### User:"
PARAMETER stop "### Assistant:"
```
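As an aside, `num_thread` does not have to be baked into the Modelfile; it can also be supplied per request in the `options` field of the generate endpoint. A minimal sketch of building such a payload (field names assume the standard Ollama REST API; this only constructs the JSON, it does not send it):

```python
import json

# Hedged sketch: the same num_thread override, expressed as a per-request
# option for POST /api/generate instead of a Modelfile PARAMETER line.
payload = {
    "model": "solar:10.7b-instruct-v1-q4_K_S",
    "prompt": "Hello",
    "options": {"num_thread": 6},
}
body = json.dumps(payload, indent=2)
print(body)
```

Sending `body` to `http://localhost:11434/api/generate` would apply the thread count for that request only.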
Thanks everyone for all your great work!
**EDIT: I was able to fix this**, but to be honest I'm unclear what the solution was. I rebooted (always good), disabled SMT, disabled CSM, enabled 4G Decoding, enabled Resizable Bar. Sorry for the trouble!
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1632/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/371
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/371/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/371/comments
|
https://api.github.com/repos/ollama/ollama/issues/371/events
|
https://github.com/ollama/ollama/issues/371
| 1,855,464,365
|
I_kwDOJ0Z1Ps5umCOt
| 371
|
Strip `https://` from model in `ollama run <model>`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-08-17T17:59:48
| 2023-08-23T17:52:23
| 2023-08-23T17:52:23
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Users may prefix the model name with `https://` and we should accept it and strip it out.
For example:
```
ollama run https://ollama.ai/m/wb
```
Should be equivalent to
```
ollama run ollama.ai/m/wb
```
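The requested normalization can be sketched as follows (illustrative only; the real CLI is written in Go, and this helper name is hypothetical):

```python
def normalize_model_ref(ref: str) -> str:
    """Strip an optional URL scheme so both spellings resolve identically."""
    # Hypothetical helper: accept "https://" (and "http://") prefixes
    # on a model reference and return the bare registry path.
    for prefix in ("https://", "http://"):
        if ref.startswith(prefix):
            return ref[len(prefix):]
    return ref

print(normalize_model_ref("https://ollama.ai/m/wb"))  # ollama.ai/m/wb
```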
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/371/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3878
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3878/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3878/comments
|
https://api.github.com/repos/ollama/ollama/issues/3878/events
|
https://github.com/ollama/ollama/issues/3878
| 2,261,502,860
|
I_kwDOJ0Z1Ps6Gy8uM
| 3,878
|
Simple guide for using/uploading custom models from Windows onto Ollama.
|
{
"login": "Avroboros",
"id": 146421595,
"node_id": "U_kgDOCLo3Ww",
"avatar_url": "https://avatars.githubusercontent.com/u/146421595?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Avroboros",
"html_url": "https://github.com/Avroboros",
"followers_url": "https://api.github.com/users/Avroboros/followers",
"following_url": "https://api.github.com/users/Avroboros/following{/other_user}",
"gists_url": "https://api.github.com/users/Avroboros/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Avroboros/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Avroboros/subscriptions",
"organizations_url": "https://api.github.com/users/Avroboros/orgs",
"repos_url": "https://api.github.com/users/Avroboros/repos",
"events_url": "https://api.github.com/users/Avroboros/events{/privacy}",
"received_events_url": "https://api.github.com/users/Avroboros/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-04-24T14:55:15
| 2024-04-24T14:55:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ollama is fantastic; however, until now I've been dependent on models that are already on the website. There are many models on Hugging Face that I want to use with Ollama (since Ollama is highly optimized and larger models run better on my computer with it). However, there are virtually no guides online about how to do that on Windows. The tutorials on YouTube all cover Mac or Linux (which is frustrating), and the guide in this repository is confusing and unnecessarily complicated (and again, it uses Mac and Linux terminology).
It would be great if someone could write a simple step-by-step guide for Windows users, e.g. how to upload models from HF to the Ollama website so they can be downloaded the usual way via `ollama run [model]` in the console, like other Ollama models.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3878/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3878/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7273
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7273/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7273/comments
|
https://api.github.com/repos/ollama/ollama/issues/7273/events
|
https://github.com/ollama/ollama/pull/7273
| 2,599,542,799
|
PR_kwDOJ0Z1Ps5_MIoP
| 7,273
|
server: allow vscode-webview origins
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-19T19:53:36
| 2024-10-19T21:06:42
| 2024-10-19T21:06:41
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7273",
"html_url": "https://github.com/ollama/ollama/pull/7273",
"diff_url": "https://github.com/ollama/ollama/pull/7273.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7273.patch",
"merged_at": "2024-10-19T21:06:41"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7273/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7273/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6997
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6997/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6997/comments
|
https://api.github.com/repos/ollama/ollama/issues/6997/events
|
https://github.com/ollama/ollama/issues/6997
| 2,552,121,384
|
I_kwDOJ0Z1Ps6YHkgo
| 6,997
|
CUDA error: device kernel image is invalid - CC 7.5
|
{
"login": "nikita228gym",
"id": 66132104,
"node_id": "MDQ6VXNlcjY2MTMyMTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/66132104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikita228gym",
"html_url": "https://github.com/nikita228gym",
"followers_url": "https://api.github.com/users/nikita228gym/followers",
"following_url": "https://api.github.com/users/nikita228gym/following{/other_user}",
"gists_url": "https://api.github.com/users/nikita228gym/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikita228gym/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikita228gym/subscriptions",
"organizations_url": "https://api.github.com/users/nikita228gym/orgs",
"repos_url": "https://api.github.com/users/nikita228gym/repos",
"events_url": "https://api.github.com/users/nikita228gym/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikita228gym/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-09-27T06:36:10
| 2024-11-05T23:25:47
| 2024-11-05T23:25:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello, and apologies for my poor English (I am using a translator). Could you please help me?
I have a problem with Ollama. As usual, I tried to run it with the command `ollama run llama3.1:8b`, but then an error occurred: "Error: llama runner process has terminated: CUDA error: device kernel image is invalid current device: 0, in function ggml_cuda_compute_forward at C:\a\ollama\ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2326".
At first, I thought my llama3.1 model might be outdated and needed updating. I installed llama3.2, but when I try to run it, it gives me this error: "Error: llama runner has terminated".

My graphics card is an NVIDIA GeForce GTX 1660 Super.
I had been using llama3.1:8b for two weeks, and today it stopped working.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
3.1-3.2
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6997/timeline
| null |
completed
| false
|