| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/1850
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1850/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1850/comments
|
https://api.github.com/repos/ollama/ollama/issues/1850/events
|
https://github.com/ollama/ollama/pull/1850
| 2,069,658,546
|
PR_kwDOJ0Z1Ps5jbt9h
| 1,850
|
Offload layers to GPU based on new model size estimates
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-08T04:52:49
| 2024-01-10T13:28:25
| 2024-01-08T21:42:00
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1850",
"html_url": "https://github.com/ollama/ollama/pull/1850",
"diff_url": "https://github.com/ollama/ollama/pull/1850.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1850.patch",
"merged_at": "2024-01-08T21:42:00"
}
|
This PR fixes a large number of crashes and "out of memory" errors related to VRAM allocation, by using a more accurate estimation of how much memory is required to run a model with a given context size.
Models such as `mixtral` will now run on lower-end hardware that they previously could not, even if falling back to the CPU is required. Also, more layers are loaded onto Nvidia GPUs, which should result in a speedup on Linux.
Details:
- VRAM estimation now accounts for the kv cache and tensor graph (which can grow to GiBs for large context sizes)
- On macOS, Ollama will now run in CPU mode, even on Apple Silicon (`arm64`) if the GPU doesn't have enough VRAM. Models such as `mixtral`, `llama2:70b`, etc will now work (perhaps slowly) instead of crashing
- On Linux, the number of layers to be offloaded to the GPU now accounts for the kv cache which is also partially offloaded
Todo in a follow up:
- Handle smaller batch sizes as mentioned in #1812
- Still seeing some errors with very large context sizes (64k, 128k)
- Limit `num_ctx` to what the model is trained on
Fixes #1838
Fixes #1812
Fixes #1516
Fixes #1674
Fixes #1374
Fixes #1534
Fixes #1303
Fixes #1413
Fixes #1636
Fixes #1837
Fixes #1627
Fixes #1566
Fixes #1576
Fixes #1703
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1850/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1850/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/660
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/660/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/660/comments
|
https://api.github.com/repos/ollama/ollama/issues/660/events
|
https://github.com/ollama/ollama/issues/660
| 1,920,410,789
|
I_kwDOJ0Z1Ps5ydySl
| 660
|
Request: Docker image build having name/tag
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-30T21:54:47
| 2023-09-30T21:58:44
| 2023-09-30T21:58:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Having just built the Docker image successfully 🥳 :
```bash
> sudo docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> acfffae34e3a About a minute ago 824MB
```
As shown above, `docker image ls` has no identifying info for the Ollama image. Any chance we can configure the `Dockerfile` build such that `REPOSITORY` and `TAG` are not `<none>`?
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/660/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1976
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1976/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1976/comments
|
https://api.github.com/repos/ollama/ollama/issues/1976/events
|
https://github.com/ollama/ollama/issues/1976
| 2,080,282,584
|
I_kwDOJ0Z1Ps57_pfY
| 1,976
|
Cloud storage support
|
{
"login": "beliboba",
"id": 73661136,
"node_id": "MDQ6VXNlcjczNjYxMTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/73661136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beliboba",
"html_url": "https://github.com/beliboba",
"followers_url": "https://api.github.com/users/beliboba/followers",
"following_url": "https://api.github.com/users/beliboba/following{/other_user}",
"gists_url": "https://api.github.com/users/beliboba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beliboba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beliboba/subscriptions",
"organizations_url": "https://api.github.com/users/beliboba/orgs",
"repos_url": "https://api.github.com/users/beliboba/repos",
"events_url": "https://api.github.com/users/beliboba/events{/privacy}",
"received_events_url": "https://api.github.com/users/beliboba/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-01-13T12:50:34
| 2024-01-18T17:40:47
| 2024-01-18T17:40:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is there any support for cloud storage for models? If not, will it ever be implemented?
|
{
"login": "beliboba",
"id": 73661136,
"node_id": "MDQ6VXNlcjczNjYxMTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/73661136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beliboba",
"html_url": "https://github.com/beliboba",
"followers_url": "https://api.github.com/users/beliboba/followers",
"following_url": "https://api.github.com/users/beliboba/following{/other_user}",
"gists_url": "https://api.github.com/users/beliboba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beliboba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beliboba/subscriptions",
"organizations_url": "https://api.github.com/users/beliboba/orgs",
"repos_url": "https://api.github.com/users/beliboba/repos",
"events_url": "https://api.github.com/users/beliboba/events{/privacy}",
"received_events_url": "https://api.github.com/users/beliboba/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1976/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5418
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5418/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5418/comments
|
https://api.github.com/repos/ollama/ollama/issues/5418/events
|
https://github.com/ollama/ollama/issues/5418
| 2,384,830,405
|
I_kwDOJ0Z1Ps6OJZ_F
| 5,418
|
DeepSeek-Coder-V2 (Lite) spouts GGGs
|
{
"login": "lorenzodimauro97",
"id": 50343905,
"node_id": "MDQ6VXNlcjUwMzQzOTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/50343905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorenzodimauro97",
"html_url": "https://github.com/lorenzodimauro97",
"followers_url": "https://api.github.com/users/lorenzodimauro97/followers",
"following_url": "https://api.github.com/users/lorenzodimauro97/following{/other_user}",
"gists_url": "https://api.github.com/users/lorenzodimauro97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorenzodimauro97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorenzodimauro97/subscriptions",
"organizations_url": "https://api.github.com/users/lorenzodimauro97/orgs",
"repos_url": "https://api.github.com/users/lorenzodimauro97/repos",
"events_url": "https://api.github.com/users/lorenzodimauro97/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorenzodimauro97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-07-01T22:23:39
| 2024-07-02T18:10:14
| 2024-07-01T23:04:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Eventually, when using deepseek-coder-v2:16b-lite-instruct-q8_0 with Open Web UI (but also via other means, for example continuedev), the model will stop working and output GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG regardless of the input:

The only fix is to force-stop and restart the model, which is bothersome enough to be worth filing a bug
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.48
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5418/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8629
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8629/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8629/comments
|
https://api.github.com/repos/ollama/ollama/issues/8629/events
|
https://github.com/ollama/ollama/issues/8629
| 2,815,526,057
|
I_kwDOJ0Z1Ps6n0YSp
| 8,629
|
Choose path to install on Windows
|
{
"login": "EvgeniGenchev",
"id": 59848681,
"node_id": "MDQ6VXNlcjU5ODQ4Njgx",
"avatar_url": "https://avatars.githubusercontent.com/u/59848681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EvgeniGenchev",
"html_url": "https://github.com/EvgeniGenchev",
"followers_url": "https://api.github.com/users/EvgeniGenchev/followers",
"following_url": "https://api.github.com/users/EvgeniGenchev/following{/other_user}",
"gists_url": "https://api.github.com/users/EvgeniGenchev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EvgeniGenchev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EvgeniGenchev/subscriptions",
"organizations_url": "https://api.github.com/users/EvgeniGenchev/orgs",
"repos_url": "https://api.github.com/users/EvgeniGenchev/repos",
"events_url": "https://api.github.com/users/EvgeniGenchev/events{/privacy}",
"received_events_url": "https://api.github.com/users/EvgeniGenchev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2025-01-28T12:31:56
| 2025-01-28T21:31:28
| 2025-01-28T21:31:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The title is pretty self-explanatory. It would be nice to choose the folder where Ollama is installed on Windows instead of defaulting to C:\Users\...
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8629/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4316
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4316/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4316/comments
|
https://api.github.com/repos/ollama/ollama/issues/4316/events
|
https://github.com/ollama/ollama/pull/4316
| 2,290,020,182
|
PR_kwDOJ0Z1Ps5vHUAi
| 4,316
|
Bump VRAM buffer back up
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-10T16:16:16
| 2024-05-10T17:02:38
| 2024-05-10T17:02:35
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4316",
"html_url": "https://github.com/ollama/ollama/pull/4316",
"diff_url": "https://github.com/ollama/ollama/pull/4316.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4316.patch",
"merged_at": "2024-05-10T17:02:35"
}
|
Under stress scenarios we're seeing OOMs, so this should help stabilize allocations under heavy concurrency.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4316/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/524
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/524/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/524/comments
|
https://api.github.com/repos/ollama/ollama/issues/524/events
|
https://github.com/ollama/ollama/pull/524
| 1,895,148,267
|
PR_kwDOJ0Z1Ps5aRLK_
| 524
|
subprocess improvements
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-13T19:42:08
| 2023-09-18T19:16:34
| 2023-09-18T19:16:33
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/524",
"html_url": "https://github.com/ollama/ollama/pull/524",
"diff_url": "https://github.com/ollama/ollama/pull/524.diff",
"patch_url": "https://github.com/ollama/ollama/pull/524.patch",
"merged_at": "2023-09-18T19:16:33"
}
|
- increase start-up timeout
- fail immediately when a runner fails to start, rather than timing out
- try runners in order rather than choosing one runner
- embed metal runner in metal dir rather than gpu
- refactor logging and error messages
resolves #485
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/524/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6013
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6013/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6013/comments
|
https://api.github.com/repos/ollama/ollama/issues/6013/events
|
https://github.com/ollama/ollama/issues/6013
| 2,433,393,234
|
I_kwDOJ0Z1Ps6RCqJS
| 6,013
|
Getting 404 page not found on chat completions endpoint with new version
|
{
"login": "ajasingh",
"id": 15189049,
"node_id": "MDQ6VXNlcjE1MTg5MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/15189049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajasingh",
"html_url": "https://github.com/ajasingh",
"followers_url": "https://api.github.com/users/ajasingh/followers",
"following_url": "https://api.github.com/users/ajasingh/following{/other_user}",
"gists_url": "https://api.github.com/users/ajasingh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ajasingh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajasingh/subscriptions",
"organizations_url": "https://api.github.com/users/ajasingh/orgs",
"repos_url": "https://api.github.com/users/ajasingh/repos",
"events_url": "https://api.github.com/users/ajasingh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ajasingh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-07-27T09:32:32
| 2024-07-27T09:45:40
| 2024-07-27T09:45:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am successfully running the llama3.1 model locally in the command prompt, but when I try to access it via the API it keeps returning 404 Not Found:
curl --location --request GET 'http://localhost:11434/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
"model": "llama3.1",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant, you are required to analyse the chat history betweeen Equinix Support agent and the customer.Kindly analyse the chat history and provide what problem the user is facing. Kinldy only return the problem user is facing nothing else"
},
{
"role": "user",
"content": "Thanks for your patience"
}
]
}'
### OS
macOS
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.3.0
|
{
"login": "ajasingh",
"id": 15189049,
"node_id": "MDQ6VXNlcjE1MTg5MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/15189049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajasingh",
"html_url": "https://github.com/ajasingh",
"followers_url": "https://api.github.com/users/ajasingh/followers",
"following_url": "https://api.github.com/users/ajasingh/following{/other_user}",
"gists_url": "https://api.github.com/users/ajasingh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ajasingh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajasingh/subscriptions",
"organizations_url": "https://api.github.com/users/ajasingh/orgs",
"repos_url": "https://api.github.com/users/ajasingh/repos",
"events_url": "https://api.github.com/users/ajasingh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ajasingh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6013/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7476
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7476/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7476/comments
|
https://api.github.com/repos/ollama/ollama/issues/7476/events
|
https://github.com/ollama/ollama/issues/7476
| 2,630,922,055
|
I_kwDOJ0Z1Ps6c0K9H
| 7,476
|
llama3.2 11b setup error
|
{
"login": "Teramime",
"id": 185576450,
"node_id": "U_kgDOCw-sAg",
"avatar_url": "https://avatars.githubusercontent.com/u/185576450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Teramime",
"html_url": "https://github.com/Teramime",
"followers_url": "https://api.github.com/users/Teramime/followers",
"following_url": "https://api.github.com/users/Teramime/following{/other_user}",
"gists_url": "https://api.github.com/users/Teramime/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Teramime/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Teramime/subscriptions",
"organizations_url": "https://api.github.com/users/Teramime/orgs",
"repos_url": "https://api.github.com/users/Teramime/repos",
"events_url": "https://api.github.com/users/Teramime/events{/privacy}",
"received_events_url": "https://api.github.com/users/Teramime/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-03T02:44:32
| 2024-11-03T21:31:05
| 2024-11-03T21:31:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm trying to develop an agent that analyzes images using llama3.2 11b.
Development environment:
os: window11 pro
cpu: intel i9-14900K
ram: 32G
vga: rtx 4080 super
When I install Ollama release v0.3.14 and run `ollama run x/llama3.2-vision`, the installation goes well, but it terminates with the following message when running:
Error: llama runner process has terminated: exit status 0xc0000409
Is it because the GPU performance is too low?
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
Releases v0.3.14
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7476/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5048
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5048/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5048/comments
|
https://api.github.com/repos/ollama/ollama/issues/5048/events
|
https://github.com/ollama/ollama/issues/5048
| 2,353,904,198
|
I_kwDOJ0Z1Ps6MTbpG
| 5,048
|
Add 'free' command, to free the currently running model out of memory.
|
{
"login": "Dalibor-P",
"id": 131712814,
"node_id": "U_kgDOB9nHLg",
"avatar_url": "https://avatars.githubusercontent.com/u/131712814?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dalibor-P",
"html_url": "https://github.com/Dalibor-P",
"followers_url": "https://api.github.com/users/Dalibor-P/followers",
"following_url": "https://api.github.com/users/Dalibor-P/following{/other_user}",
"gists_url": "https://api.github.com/users/Dalibor-P/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dalibor-P/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dalibor-P/subscriptions",
"organizations_url": "https://api.github.com/users/Dalibor-P/orgs",
"repos_url": "https://api.github.com/users/Dalibor-P/repos",
"events_url": "https://api.github.com/users/Dalibor-P/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dalibor-P/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-14T18:44:52
| 2024-09-29T09:19:12
| 2024-09-29T09:19:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Add a new command, possibly `ollama free`, to immediately free the currently running model from memory instead of waiting the default five minutes, as an alternative to the `keep_alive` parameter. Additionally, add the option to the context menu of the Ollama taskbar icon, next to the `view logs` and `quit ollama` buttons.
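For reference, the existing API already allows unloading on demand: per the Ollama API docs, a request whose `keep_alive` is `0` unloads the model as soon as the response finishes. A minimal `/api/generate` request body (the model name is illustrative):

```json
{
  "model": "llama3",
  "keep_alive": 0
}
```

This covers the API side of the request; the CLI command and taskbar entry would still be new.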
|
{
"login": "Dalibor-P",
"id": 131712814,
"node_id": "U_kgDOB9nHLg",
"avatar_url": "https://avatars.githubusercontent.com/u/131712814?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dalibor-P",
"html_url": "https://github.com/Dalibor-P",
"followers_url": "https://api.github.com/users/Dalibor-P/followers",
"following_url": "https://api.github.com/users/Dalibor-P/following{/other_user}",
"gists_url": "https://api.github.com/users/Dalibor-P/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dalibor-P/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dalibor-P/subscriptions",
"organizations_url": "https://api.github.com/users/Dalibor-P/orgs",
"repos_url": "https://api.github.com/users/Dalibor-P/repos",
"events_url": "https://api.github.com/users/Dalibor-P/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dalibor-P/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5048/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5048/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5661
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5661/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5661/comments
|
https://api.github.com/repos/ollama/ollama/issues/5661/events
|
https://github.com/ollama/ollama/issues/5661
| 2,406,643,762
|
I_kwDOJ0Z1Ps6Pcngy
| 5,661
|
num_ctx parameter does not work on Linux
|
{
"login": "ronchengang",
"id": 3615985,
"node_id": "MDQ6VXNlcjM2MTU5ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3615985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ronchengang",
"html_url": "https://github.com/ronchengang",
"followers_url": "https://api.github.com/users/ronchengang/followers",
"following_url": "https://api.github.com/users/ronchengang/following{/other_user}",
"gists_url": "https://api.github.com/users/ronchengang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ronchengang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ronchengang/subscriptions",
"organizations_url": "https://api.github.com/users/ronchengang/orgs",
"repos_url": "https://api.github.com/users/ronchengang/repos",
"events_url": "https://api.github.com/users/ronchengang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ronchengang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-07-13T02:37:41
| 2024-10-16T06:10:02
| 2024-10-16T05:55:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Issue: Although the log shows n_ctx = 102400, the input prompt is still truncated to 2048.
Ollama version: 0.2.1
OS: AWS Linux, instance type: g5.xlarge,
GPU: Nvidia A10 24G GPU, version 12.x
Model: Qwen2-7B-Instruct, GGUF V3
Ollama server log:
```
Device 0: NVIDIA A10G, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: CUDA_Host buffer size = 292.36 MiB
llm_load_tensors: CUDA0 buffer size = 3928.07 MiB
**llama_new_context_with_model: n_ctx = 102400**
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 1
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
```
The log above says n_ctx=102400, but I still get the input truncation warning below, where n_ctx is 2048.
LLM request log:
```
[GIN] 2024/07/13 - 02:27:54 | 200 | 17.512894039s | 127.0.0.1 | POST "/api/chat"
INFO [update_slots] input truncated | **n_ctx=2048** n_erase=1440 n_keep=4 n_left=2044 n_shift=1022 tid="140646121488384" timestamp=1720837794
INFO [update_slots] input truncated | **n_ctx=2048** n_erase=1432 n_keep=4 n_left=2044 n_shift=1022 tid="140646121488384" timestamp=1720837794
INFO [update_slots] input truncated | **n_ctx=2048** n_erase=1441 n_keep=4 n_left=2044 n_shift=1022 tid="140646121488384" timestamp=1720837795
```
The same model and the same Ollama version run well on my Mac, but the error occurs when I move it to AWS Linux.
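For context, `num_ctx` only takes effect on the serving slot when the request actually carries it; per the Ollama API docs, a chat request that omits it falls back to the 2048 default even if the model was loaded with a larger context. A minimal request body that sets it explicitly (model name and prompt are illustrative):

```json
{
  "model": "qwen2",
  "prompt": "Hello",
  "options": {
    "num_ctx": 102400
  }
}
```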
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.1
|
{
"login": "ronchengang",
"id": 3615985,
"node_id": "MDQ6VXNlcjM2MTU5ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3615985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ronchengang",
"html_url": "https://github.com/ronchengang",
"followers_url": "https://api.github.com/users/ronchengang/followers",
"following_url": "https://api.github.com/users/ronchengang/following{/other_user}",
"gists_url": "https://api.github.com/users/ronchengang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ronchengang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ronchengang/subscriptions",
"organizations_url": "https://api.github.com/users/ronchengang/orgs",
"repos_url": "https://api.github.com/users/ronchengang/repos",
"events_url": "https://api.github.com/users/ronchengang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ronchengang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5661/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5661/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4501
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4501/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4501/comments
|
https://api.github.com/repos/ollama/ollama/issues/4501/events
|
https://github.com/ollama/ollama/issues/4501
| 2,303,035,854
|
I_kwDOJ0Z1Ps6JRYnO
| 4,501
|
Does Ollama currently plan to support multiple acceleration frameworks
|
{
"login": "glide-the",
"id": 16206043,
"node_id": "MDQ6VXNlcjE2MjA2MDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/16206043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/glide-the",
"html_url": "https://github.com/glide-the",
"followers_url": "https://api.github.com/users/glide-the/followers",
"following_url": "https://api.github.com/users/glide-the/following{/other_user}",
"gists_url": "https://api.github.com/users/glide-the/gists{/gist_id}",
"starred_url": "https://api.github.com/users/glide-the/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/glide-the/subscriptions",
"organizations_url": "https://api.github.com/users/glide-the/orgs",
"repos_url": "https://api.github.com/users/glide-the/repos",
"events_url": "https://api.github.com/users/glide-the/events{/privacy}",
"received_events_url": "https://api.github.com/users/glide-the/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-17T15:31:51
| 2024-07-09T05:17:34
| 2024-07-09T05:17:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
#### Requirements
Does Ollama currently plan to support multiple acceleration frameworks?
We understand that Ollama currently leverages llama.cpp for inference acceleration, which supports only the Llama architecture, while GLM makes some modifications to the model architecture.
We are very keen to see the GLM ecosystem implemented with C++ capabilities. To this end, we have developed the following design proposal and would like to ask whether Ollama has plans to advance this work.
##### Ollama Project Integration with ChatGLM and CogVM
> The Ollama project is currently built on the Llama.cpp acceleration framework, creating a one-click run framework. It leverages the inference and conversational capabilities of Llama.cpp. At a higher level, it has designed a service distribution and execution test. Users can pull quantized models from a remote image server and run them using a local client. The project currently supports Linux, Mac, and Windows terminal systems. Llama.cpp inference code can accelerate inference on mainstream hardware.
###### Objective
To use Ollama's service distribution method to distribute models like ChatGLM and CogVM on the server side, supporting multi-end systems (Linux/Mac/Windows) for execution.
###### Ollama Project Design Description
The Ollama framework relies on the language features of CGO, designing a local client runtime system. The system compiles the terminal executable files of Llama.cpp via CGO to publish the HTTP server service provided by Llama.cpp. By connecting Go and C through .h files, it supports model quantization. The upper layer has designed command modules to receive user command instructions, thereby invoking HTTP service through Go to complete model instance maintenance. Additionally, the Go module includes code for model management and retrieval.
- Call Relationship Diagram
```mermaid
graph TD;
A[cgo Layer] --> B[llama.cpp server];
B --> C[httpserver Service];
C --> D[.h Files];
D --> E[Quantization Support];
A --> F[Command Module];
F --> C;
F --> G[Go Execution];
G --> H[Model Management];
G --> I[Model Retrieval];
    G --> J[Task push];
E --> C;
subgraph Ollama Framework
A;
B;
C;
D;
E;
F;
G;
H;
I;
end
K[Compilation Adaptation];
K --> L[llama.cpp server];
L --> M[Task Scheduling];
```
#### Design Proposal
Based on the design content in the diagram above, we investigated the ChatGLM.cpp repository, which provides quantization support for the GGML inference solution. On this basis, we can write a GLM server executor, adapt the compilation layer, check the compatibility of the llama.cpp and chatglm.cpp .h header files, and schedule the corresponding tasks.
```mermaid
graph TD;
A[cgo Layer];
A --> F[Command Module];
F --> C[llama.cpp & chatglm.cpp header]
C --> D[.h Files];
D --> E[Quantization Support];
F --> G[Go Execution];
G --> H[Model Management];
G --> I[Model Retrieval];
    G --> J[Task push];
E --> C;
subgraph Ollama Framework
A;
C;
D;
E;
F;
G;
H;
I;
end
K[Compilation Adaptation];
K --> L[llama.cpp & chatglm.cpp server];
L --> M[Task Scheduling];
```
##### link
- https://github.com/li-plus/chatglm.cpp
- https://github.com/ollama/ollama/issues/3160
|
{
"login": "glide-the",
"id": 16206043,
"node_id": "MDQ6VXNlcjE2MjA2MDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/16206043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/glide-the",
"html_url": "https://github.com/glide-the",
"followers_url": "https://api.github.com/users/glide-the/followers",
"following_url": "https://api.github.com/users/glide-the/following{/other_user}",
"gists_url": "https://api.github.com/users/glide-the/gists{/gist_id}",
"starred_url": "https://api.github.com/users/glide-the/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/glide-the/subscriptions",
"organizations_url": "https://api.github.com/users/glide-the/orgs",
"repos_url": "https://api.github.com/users/glide-the/repos",
"events_url": "https://api.github.com/users/glide-the/events{/privacy}",
"received_events_url": "https://api.github.com/users/glide-the/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4501/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4501/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7231
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7231/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7231/comments
|
https://api.github.com/repos/ollama/ollama/issues/7231/events
|
https://github.com/ollama/ollama/pull/7231
| 2,593,186,089
|
PR_kwDOJ0Z1Ps5-5QU1
| 7,231
|
fix: consider any status code as redirect
|
{
"login": "XciD",
"id": 6586344,
"node_id": "MDQ6VXNlcjY1ODYzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6586344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XciD",
"html_url": "https://github.com/XciD",
"followers_url": "https://api.github.com/users/XciD/followers",
"following_url": "https://api.github.com/users/XciD/following{/other_user}",
"gists_url": "https://api.github.com/users/XciD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XciD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XciD/subscriptions",
"organizations_url": "https://api.github.com/users/XciD/orgs",
"repos_url": "https://api.github.com/users/XciD/repos",
"events_url": "https://api.github.com/users/XciD/events{/privacy}",
"received_events_url": "https://api.github.com/users/XciD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-10-16T22:34:11
| 2024-12-02T20:40:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7231",
"html_url": "https://github.com/ollama/ollama/pull/7231",
"diff_url": "https://github.com/ollama/ollama/pull/7231.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7231.patch",
"merged_at": null
}
|
When retrieving the URL for downloading a model, Ollama always assumes the model is hosted on a CDN.
This PR resolves that:
- If a 200 is returned by the same host, just return the current URL
- Treat any 3xx status code as a redirect
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7231/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7231/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7850
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7850/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7850/comments
|
https://api.github.com/repos/ollama/ollama/issues/7850/events
|
https://github.com/ollama/ollama/pull/7850
| 2,696,476,961
|
PR_kwDOJ0Z1Ps6DQXCy
| 7,850
|
openai: remove unused error code
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-26T23:38:39
| 2024-11-27T00:08:11
| 2024-11-27T00:08:10
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7850",
"html_url": "https://github.com/ollama/ollama/pull/7850",
"diff_url": "https://github.com/ollama/ollama/pull/7850.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7850.patch",
"merged_at": "2024-11-27T00:08:09"
}
|
The `writeError` function takes a `code` argument that is no longer used. Remove it for clarity.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7850/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3987
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3987/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3987/comments
|
https://api.github.com/repos/ollama/ollama/issues/3987/events
|
https://github.com/ollama/ollama/issues/3987
| 2,267,308,728
|
I_kwDOJ0Z1Ps6HJGK4
| 3,987
|
Increase the number of CPU usage for ollama_llama_se in linux
|
{
"login": "wwjCMP",
"id": 32979859,
"node_id": "MDQ6VXNlcjMyOTc5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwjCMP",
"html_url": "https://github.com/wwjCMP",
"followers_url": "https://api.github.com/users/wwjCMP/followers",
"following_url": "https://api.github.com/users/wwjCMP/following{/other_user}",
"gists_url": "https://api.github.com/users/wwjCMP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwjCMP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwjCMP/subscriptions",
"organizations_url": "https://api.github.com/users/wwjCMP/orgs",
"repos_url": "https://api.github.com/users/wwjCMP/repos",
"events_url": "https://api.github.com/users/wwjCMP/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwjCMP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-04-28T02:53:48
| 2024-04-28T04:05:52
| 2024-04-28T04:05:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can the number of CPU cores used by the `ollama_llama_se` process be increased through settings on Linux? As shown in the figure, the CPU is running at full load, but `ollama_llama_se` can only use about thirty cores while competing with other continuously running commands (these long-running jobs are submitted through Slurm).
As a result, `ollama_llama_se` runs very slowly.
When the CPU is idle, `ollama_llama_se` can use about 48 cores, and then it runs very fast.
This machine is CPU-only, with 96 cores in total.
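For reference, the thread count can be raised per request via the `num_thread` option (documented as a Modelfile/request parameter in the Ollama docs; the model name and prompt are illustrative):

```json
{
  "model": "llama2",
  "prompt": "Hello",
  "options": {
    "num_thread": 96
  }
}
```

Whether the extra threads help will still depend on what the competing Slurm jobs leave available.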
|
{
"login": "wwjCMP",
"id": 32979859,
"node_id": "MDQ6VXNlcjMyOTc5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwjCMP",
"html_url": "https://github.com/wwjCMP",
"followers_url": "https://api.github.com/users/wwjCMP/followers",
"following_url": "https://api.github.com/users/wwjCMP/following{/other_user}",
"gists_url": "https://api.github.com/users/wwjCMP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwjCMP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwjCMP/subscriptions",
"organizations_url": "https://api.github.com/users/wwjCMP/orgs",
"repos_url": "https://api.github.com/users/wwjCMP/repos",
"events_url": "https://api.github.com/users/wwjCMP/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwjCMP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3987/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3683
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3683/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3683/comments
|
https://api.github.com/repos/ollama/ollama/issues/3683/events
|
https://github.com/ollama/ollama/issues/3683
| 2,246,915,257
|
I_kwDOJ0Z1Ps6F7TS5
| 3,683
|
mixtral:22b OLLAMA 0.1.32 llama runner process no longer running: -1 cudaMalloc failed: out of memory
|
{
"login": "subhashdasyam",
"id": 19161628,
"node_id": "MDQ6VXNlcjE5MTYxNjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/19161628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subhashdasyam",
"html_url": "https://github.com/subhashdasyam",
"followers_url": "https://api.github.com/users/subhashdasyam/followers",
"following_url": "https://api.github.com/users/subhashdasyam/following{/other_user}",
"gists_url": "https://api.github.com/users/subhashdasyam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/subhashdasyam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhashdasyam/subscriptions",
"organizations_url": "https://api.github.com/users/subhashdasyam/orgs",
"repos_url": "https://api.github.com/users/subhashdasyam/repos",
"events_url": "https://api.github.com/users/subhashdasyam/events{/privacy}",
"received_events_url": "https://api.github.com/users/subhashdasyam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 9
| 2024-04-16T21:31:15
| 2024-05-07T14:51:16
| 2024-04-17T00:41:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.106+04:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.106+04:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.107+04:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3415574743/runners/cuda_v11/libcudart.so.11.0]"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.108+04:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.108+04:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.201+04:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.240+04:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.240+04:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.241+04:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3415574743/runners/cuda_v11/libcudart.so.11.0]"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.241+04:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.241+04:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.318+04:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.358+04:00 level=INFO source=server.go:120 msg="offload to gpu" reallayers=34 layers=34 required="76868.7 MiB" used="46864.5 MiB" available="47268.4 MiB" kv="448.0 MiB" fulloffload="244.0 MiB">
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.358+04:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.358+04:00 level=INFO source=server.go:257 msg="starting llama server" cmd="/tmp/ollama3415574743/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-373>
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.359+04:00 level=INFO source=server.go:382 msg="waiting for llama runner to start responding"
Apr 17 01:25:09 ai-pc ollama[58558]: {"function":"server_params_parse","level":"INFO","line":2599,"msg":"logging to file is disabled.","tid":"136407237222400","timestamp":1713302709}
Apr 17 01:25:09 ai-pc ollama[58558]: {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2795,"msg":"build info","tid":"136407237222400","timestamp":1713302709}
Apr 17 01:25:09 ai-pc ollama[58558]: {"function":"main","level":"INFO","line":2798,"msg":"system info","n_threads":14,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON>
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: loaded meta data with 25 key-value pairs and 563 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-373c4038c2d0dad733d6d29d5f635b7fda61ffa972ab3c4d89e516a7c0bdd80c (version GGUF V3 (lates>
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 0: general.architecture str = llama
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 1: general.name str = v2ray
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 2: llama.vocab_size u32 = 32000
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 3: llama.context_length u32 = 65536
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 4: llama.embedding_length u32 = 6144
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 5: llama.block_count u32 = 56
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 6: llama.feed_forward_length u32 = 16384
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 7: llama.rope.dimension_count u32 = 128
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 8: llama.attention.head_count u32 = 48
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 9: llama.attention.head_count_kv u32 = 8
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 10: llama.expert_count u32 = 8
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 11: llama.expert_used_count u32 = 2
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 13: llama.rope.freq_base f32 = 1000000.000000
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 14: general.file_type u32 = 2
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 15: tokenizer.ggml.model str = llama
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 1
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 2
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv 24: general.quantization_version u32 = 2
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - type f32: 113 tensors
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - type f16: 56 tensors
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - type q4_0: 281 tensors
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - type q8_0: 112 tensors
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - type q6_K: 1 tensors
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_vocab: special tokens definition check successful ( 259/32000 ).
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: format = GGUF V3 (latest)
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: arch = llama
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: vocab type = SPM
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_vocab = 32000
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_merges = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_ctx_train = 65536
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_embd = 6144
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_head = 48
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_head_kv = 8
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_layer = 56
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_rot = 128
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_embd_head_k = 128
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_embd_head_v = 128
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_gqa = 6
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_embd_k_gqa = 1024
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_embd_v_gqa = 1024
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: f_norm_eps = 0.0e+00
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: f_logit_scale = 0.0e+00
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_ff = 16384
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_expert = 8
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_expert_used = 2
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: causal attn = 1
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: pooling type = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: rope type = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: rope scaling = linear
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: freq_base_train = 1000000.0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: freq_scale_train = 1
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_yarn_orig_ctx = 65536
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: rope_finetuned = unknown
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: ssm_d_conv = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: ssm_d_inner = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: ssm_d_state = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: ssm_dt_rank = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: model type = 8x22B
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: model ftype = Q4_0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: model params = 140.62 B
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: model size = 74.05 GiB (4.52 BPW)
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: general.name = v2ray
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: BOS token = 1 '<s>'
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: EOS token = 2 '</s>'
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: UNK token = 0 '<unk>'
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: LF token = 13 '<0x0A>'
Apr 17 01:25:09 ai-pc ollama[56169]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
Apr 17 01:25:09 ai-pc ollama[56169]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Apr 17 01:25:09 ai-pc ollama[56169]: ggml_cuda_init: found 2 CUDA devices:
Apr 17 01:25:09 ai-pc ollama[56169]: Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
Apr 17 01:25:09 ai-pc ollama[56169]: Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_tensors: ggml ctx size = 1.16 MiB
Apr 17 01:25:10 ai-pc ollama[56169]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 24289.03 MiB on device 0: cudaMalloc failed: out of memory
Apr 17 01:25:11 ai-pc ollama[56169]: llama_model_load: error loading model: unable to allocate backend buffer
Apr 17 01:25:11 ai-pc ollama[56169]: llama_load_model_from_file: exception loading model
Apr 17 01:25:11 ai-pc ollama[56169]: terminate called after throwing an instance of 'std::runtime_error'
Apr 17 01:25:11 ai-pc ollama[56169]: what(): unable to allocate backend buffer
Apr 17 01:25:11 ai-pc ollama[56169]: time=2024-04-17T01:25:11.819+04:00 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: -1 cudaMalloc failed: out of memory"
```
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.32
### GPU
Nvidia
### GPU info
Wed Apr 17 01:30:45 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.67 Driver Version: 550.67 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:01:00.0 Off | Off |
| 0% 49C P8 27W / 450W | 15MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 3090 Off | 00000000:0D:00.0 On | N/A |
| 0% 53C P0 109W / 370W | 527MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 2581 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 2581 G /usr/lib/xorg/Xorg 290MiB |
| 1 N/A N/A 2698 G /usr/bin/gnome-shell 51MiB |
| 1 N/A N/A 4417 G firefox 168MiB |
+-----------------------------------------------------------------------------------------+
### CPU
Intel
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3683/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3683/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5591
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5591/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5591/comments
|
https://api.github.com/repos/ollama/ollama/issues/5591/events
|
https://github.com/ollama/ollama/issues/5591
| 2,400,005,016
|
I_kwDOJ0Z1Ps6PDSuY
| 5,591
|
Upgrading removes all models
|
{
"login": "loranger",
"id": 6014,
"node_id": "MDQ6VXNlcjYwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loranger",
"html_url": "https://github.com/loranger",
"followers_url": "https://api.github.com/users/loranger/followers",
"following_url": "https://api.github.com/users/loranger/following{/other_user}",
"gists_url": "https://api.github.com/users/loranger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loranger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loranger/subscriptions",
"organizations_url": "https://api.github.com/users/loranger/orgs",
"repos_url": "https://api.github.com/users/loranger/repos",
"events_url": "https://api.github.com/users/loranger/events{/privacy}",
"received_events_url": "https://api.github.com/users/loranger/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-07-10T07:43:19
| 2024-12-02T07:32:15
| 2024-11-17T18:51:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I just upgraded my Ollama setup manually by running the install script again, as [specified](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-upgrade-ollama), but I also lost all my models: the ones I pulled, the ones I built, all are gone.
I suppose that's not the desired behaviour?
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.2.1
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5591/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5271
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5271/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5271/comments
|
https://api.github.com/repos/ollama/ollama/issues/5271/events
|
https://github.com/ollama/ollama/issues/5271
| 2,371,861,512
|
I_kwDOJ0Z1Ps6NX7wI
| 5,271
|
Low VRAM Utilization on RTX 3090 When Models are Split Across Multiple CUDA Devices (separate ollama serve)
|
{
"login": "chrisoutwright",
"id": 27736055,
"node_id": "MDQ6VXNlcjI3NzM2MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/27736055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisoutwright",
"html_url": "https://github.com/chrisoutwright",
"followers_url": "https://api.github.com/users/chrisoutwright/followers",
"following_url": "https://api.github.com/users/chrisoutwright/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisoutwright/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisoutwright/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisoutwright/subscriptions",
"organizations_url": "https://api.github.com/users/chrisoutwright/orgs",
"repos_url": "https://api.github.com/users/chrisoutwright/repos",
"events_url": "https://api.github.com/users/chrisoutwright/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisoutwright/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-06-25T07:04:56
| 2024-08-01T22:38:04
| 2024-08-01T22:38:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
## Environment
- **Ollama Version**: 0.1.45
- **Operating System**: Win10
- **GPU Type**: NVIDIA RTX 3090, GTX 1080Ti
## Issue Description
I am experiencing an issue with VRAM utilization in Ollama 0.1.45. When using the Codestral example to split models across different CUDA devices on an RTX 3090 and a GTX 1080 Ti (one GPU per model, that is), it appears that only 10 GB of VRAM is being used for Codestral now. This is similar to what one might expect with a GTX 1080 Ti, suggesting that there might be a misconfiguration or a bug in how VRAM is allocated or recognized for the RTX 3090.
I am using:
`$env:CUDA_VISIBLE_DEVICES=0` for the 3090
and
`$env:CUDA_VISIBLE_DEVICES=1` for the 1080 Ti,
which correspond to the device identifiers,
and I am running `ollama serve` in each case.

### OS
Windows
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.45
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5271/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3011
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3011/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3011/comments
|
https://api.github.com/repos/ollama/ollama/issues/3011/events
|
https://github.com/ollama/ollama/issues/3011
| 2,176,669,801
|
I_kwDOJ0Z1Ps6BvVhp
| 3,011
|
Starcoder2 crashes latest ollama container
|
{
"login": "madelponte",
"id": 3129897,
"node_id": "MDQ6VXNlcjMxMjk4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3129897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madelponte",
"html_url": "https://github.com/madelponte",
"followers_url": "https://api.github.com/users/madelponte/followers",
"following_url": "https://api.github.com/users/madelponte/following{/other_user}",
"gists_url": "https://api.github.com/users/madelponte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madelponte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madelponte/subscriptions",
"organizations_url": "https://api.github.com/users/madelponte/orgs",
"repos_url": "https://api.github.com/users/madelponte/repos",
"events_url": "https://api.github.com/users/madelponte/events{/privacy}",
"received_events_url": "https://api.github.com/users/madelponte/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-08T19:45:23
| 2024-03-08T21:35:41
| 2024-03-08T21:35:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Image ID: `76b4fbf17cef`
Command run: `ollama run starcoder2`
Tried with both Docker and Podman; the same thing happens with either.
Error:
```go
time=2024-03-08T19:33:43.460Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-08T19:33:43.460Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-08T19:33:43.460Z level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-03-08T19:33:43.462Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2695468419/cpu_avx2/libext_server.so"
time=2024-03-08T19:33:43.462Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
SIGSEGV: segmentation violation
PC=0x7f999fc6a7fd m=5 sigcode=1
signal arrived during cgo execution
goroutine 336 [syscall]:
runtime.cgocall(0x9bd7f0, 0xc0008746c8)
/usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc0008746a0 sp=0xc000874668 pc=0x409b0b
github.com/jmorganca/ollama/llm._Cfunc_dyn_llama_server_init({0x7f9938001f50, 0x7f99546f83c0, 0x7f99546e8030, 0x7f99546eaf60, 0x7f9954705760, 0x7f99546f18a0, 0x7f99546eabe0, 0x7f99546e80b0, 0x7f9954706060, 0x7f9954705300, ...}, ...)
_cgo_gotypes.go:282 +0x45 fp=0xc0008746c8 sp=0xc0008746a0 pc=0x7c5c05
github.com/jmorganca/ollama/llm.newDynExtServer.func7(0xaf2e55?, 0xc?)
/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:153 +0xef fp=0xc0008747b8 sp=0xc0008746c8 pc=0x7c714f
github.com/jmorganca/ollama/llm.newDynExtServer({0xc000138090, 0x2f}, {0xc0001a0150, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:153 +0xa65 fp=0xc000874a58 sp=0xc0008747b8 pc=0x7c6de5
github.com/jmorganca/ollama/llm.newLlmServer({{_, _, _}, {_, _}, {_, _}}, {_, _}, {0xc0001a0150, ...}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:158 +0x425 fp=0xc000874c18 sp=0xc000874a58 pc=0x7c3545
github.com/jmorganca/ollama/llm.New({0xc0002f89d8, 0x15}, {0xc0001a0150, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
/go/src/github.com/jmorganca/ollama/llm/llm.go:123 +0x713 fp=0xc000874e98 sp=0xc000874c18 pc=0x7c2eb3
github.com/jmorganca/ollama/server.load(0xc000002a80?, 0xc000002a80, {{0x0, 0x800, 0x200, 0x1, 0xffffffffffffffff, 0x0, 0x0, 0x1, ...}, ...}, ...)
/go/src/github.com/jmorganca/ollama/server/routes.go:85 +0x3a5 fp=0xc000875018 sp=0xc000874e98 pc=0x9971e5
github.com/jmorganca/ollama/server.ChatHandler(0xc000280300)
/go/src/github.com/jmorganca/ollama/server/routes.go:1175 +0xa37 fp=0xc000875748 sp=0xc000875018 pc=0x9a2977
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func1(0xc000280300)
/go/src/github.com/jmorganca/ollama/server/routes.go:945 +0x68 fp=0xc000875780 sp=0xc000875748 pc=0x9a11a8
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc000280300)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 +0x7a fp=0xc0008757d0 sp=0xc000875780 pc=0x9787ba
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc000280300)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 +0xde fp=0xc000875980 sp=0xc0008757d0 pc=0x97795e
github.com/gin-gonic/gin.(*Context).Next(...)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc0000d1a00, 0xc000280300)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 +0x65b fp=0xc000875b08 sp=0xc000875980 pc=0x976a1b
github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc0000d1a00, {0x1179fa40?, 0xc0002ca1c0}, 0xc000280200)
/root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 +0x1dd fp=0xc000875b48 sp=0xc000875b08 pc=0x9761dd
net/http.serverHandler.ServeHTTP({0x1179dd60?}, {0x1179fa40?, 0xc0002ca1c0?}, 0x6?)
/usr/local/go/src/net/http/server.go:2938 +0x8e fp=0xc000875b78 sp=0xc000875b48 pc=0x6ced4e
net/http.(*conn).serve(0xc0009141b0, {0x117a10a8, 0xc0001b0ba0})
/usr/local/go/src/net/http/server.go:2009 +0x5f4 fp=0xc000875fb8 sp=0xc000875b78 pc=0x6cac34
net/http.(*Server).Serve.func3()
/usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc000875fe0 sp=0xc000875fb8 pc=0x6cf568
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000875fe8 sp=0xc000875fe0 pc=0x46e2c1
created by net/http.(*Server).Serve in goroutine 1
/usr/local/go/src/net/http/server.go:3086 +0x5cb
goroutine 1 [IO wait]:
runtime.gopark(0x480f10?, 0xc0000c9850?, 0xa0?, 0x98?, 0x4f711d?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0000c9830 sp=0xc0000c9810 pc=0x43e7ee
runtime.netpollblock(0x46c332?, 0x4092a6?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0000c9868 sp=0xc0000c9830 pc=0x437277
internal/poll.runtime_pollWait(0x7f9958e84e80, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0000c9888 sp=0xc0000c9868 pc=0x468a05
internal/poll.(*pollDesc).wait(0xc000466100?, 0x4?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0000c98b0 sp=0xc0000c9888 pc=0x4efd67
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000466100)
/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac fp=0xc0000c9958 sp=0xc0000c98b0 pc=0x4f524c
net.(*netFD).accept(0xc000466100)
/usr/local/go/src/net/fd_unix.go:172 +0x29 fp=0xc0000c9a10 sp=0xc0000c9958 pc=0x56be29
net.(*TCPListener).accept(0xc00043d5a0)
/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e fp=0xc0000c9a38 sp=0xc0000c9a10 pc=0x580c3e
net.(*TCPListener).Accept(0xc00043d5a0)
/usr/local/go/src/net/tcpsock.go:315 +0x30 fp=0xc0000c9a68 sp=0xc0000c9a38 pc=0x57fdf0
net/http.(*onceCloseListener).Accept(0xc0009141b0?)
<autogenerated>:1 +0x24 fp=0xc0000c9a80 sp=0xc0000c9a68 pc=0x6f1ae4
net/http.(*Server).Serve(0xc000378ff0, {0x1179f830, 0xc00043d5a0})
/usr/local/go/src/net/http/server.go:3056 +0x364 fp=0xc0000c9bb0 sp=0xc0000c9a80 pc=0x6cf1a4
github.com/jmorganca/ollama/server.Serve({0x1179f830, 0xc00043d5a0})
/go/src/github.com/jmorganca/ollama/server/routes.go:1048 +0x454 fp=0xc0000c9c98 sp=0xc0000c9bb0 pc=0x9a1654
github.com/jmorganca/ollama/cmd.RunServer(0xc000468300?, {0x11be88c0?, 0x4?, 0xadab0a?})
/go/src/github.com/jmorganca/ollama/cmd/cmd.go:706 +0x1b9 fp=0xc0000c9d30 sp=0xc0000c9c98 pc=0x9b4799
github.com/spf13/cobra.(*Command).execute(0xc000421800, {0x11be88c0, 0x0, 0x0})
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x87c fp=0xc0000c9e68 sp=0xc0000c9d30 pc=0x764d9c
github.com/spf13/cobra.(*Command).ExecuteC(0xc000420c00)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc0000c9f20 sp=0xc0000c9e68 pc=0x7655c5
github.com/spf13/cobra.(*Command).Execute(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
/go/src/github.com/jmorganca/ollama/main.go:11 +0x4d fp=0xc0000c9f40 sp=0xc0000c9f20 pc=0x9bc90d
runtime.main()
/usr/local/go/src/runtime/proc.go:267 +0x2bb fp=0xc0000c9fe0 sp=0xc0000c9f40 pc=0x43e39b
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000c9fe8 sp=0xc0000c9fe0 pc=0x46e2c1
goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000056fa8 sp=0xc000056f88 pc=0x43e7ee
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.forcegchelper()
/usr/local/go/src/runtime/proc.go:322 +0xb3 fp=0xc000056fe0 sp=0xc000056fa8 pc=0x43e673
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000056fe8 sp=0xc000056fe0 pc=0x46e2c1
created by runtime.init.6 in goroutine 1
/usr/local/go/src/runtime/proc.go:310 +0x1a
goroutine 3 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000057778 sp=0xc000057758 pc=0x43e7ee
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.bgsweep(0x0?)
/usr/local/go/src/runtime/mgcsweep.go:321 +0xdf fp=0xc0000577c8 sp=0xc000057778 pc=0x42a73f
runtime.gcenable.func1()
/usr/local/go/src/runtime/mgc.go:200 +0x25 fp=0xc0000577e0 sp=0xc0000577c8 pc=0x41f865
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000577e8 sp=0xc0000577e0 pc=0x46e2c1
created by runtime.gcenable in goroutine 1
/usr/local/go/src/runtime/mgc.go:200 +0x66
goroutine 4 [GC scavenge wait]:
runtime.gopark(0xbd6bd2?, 0xb0ef8e?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000057f70 sp=0xc000057f50 pc=0x43e7ee
runtime.goparkunlock(...)
/usr/local/go/src/runtime/proc.go:404
runtime.(*scavengerState).park(0x11bb8c40)
/usr/local/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc000057fa0 sp=0xc000057f70 pc=0x427f69
runtime.bgscavenge(0x0?)
/usr/local/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc000057fc8 sp=0xc000057fa0 pc=0x428519
runtime.gcenable.func2()
/usr/local/go/src/runtime/mgc.go:201 +0x25 fp=0xc000057fe0 sp=0xc000057fc8 pc=0x41f805
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000057fe8 sp=0xc000057fe0 pc=0x46e2c1
created by runtime.gcenable in goroutine 1
/usr/local/go/src/runtime/mgc.go:201 +0xa5
goroutine 5 [finalizer wait]:
runtime.gopark(0x0?, 0xc00062c0f0?, 0x60?, 0x40?, 0x1000000010?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000056628 sp=0xc000056608 pc=0x43e7ee
runtime.runfinq()
/usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc0000567e0 sp=0xc000056628 pc=0x41e8e7
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000567e8 sp=0xc0000567e0 pc=0x46e2c1
created by runtime.createfing in goroutine 1
/usr/local/go/src/runtime/mfinal.go:163 +0x3d
goroutine 6 [select, locked to thread]:
runtime.gopark(0xc0000587a8?, 0x2?, 0x89?, 0xea?, 0xc0000587a4?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000058638 sp=0xc000058618 pc=0x43e7ee
runtime.selectgo(0xc0000587a8, 0xc0000587a0, 0x0?, 0x0, 0x0?, 0x1)
/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc000058758 sp=0xc000058638 pc=0x44e325
runtime.ensureSigM.func1()
/usr/local/go/src/runtime/signal_unix.go:1014 +0x19f fp=0xc0000587e0 sp=0xc000058758 pc=0x46535f
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000587e8 sp=0xc0000587e0 pc=0x46e2c1
created by runtime.ensureSigM in goroutine 1
/usr/local/go/src/runtime/signal_unix.go:997 +0xc8
goroutine 18 [syscall]:
runtime.notetsleepg(0x0?, 0x0?)
/usr/local/go/src/runtime/lock_futex.go:236 +0x29 fp=0xc0000527a0 sp=0xc000052768 pc=0x411349
os/signal.signal_recv()
/usr/local/go/src/runtime/sigqueue.go:152 +0x29 fp=0xc0000527c0 sp=0xc0000527a0 pc=0x46ac89
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x13 fp=0xc0000527e0 sp=0xc0000527c0 pc=0x6f4513
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000527e8 sp=0xc0000527e0 pc=0x46e2c1
created by os/signal.Notify.func1.1 in goroutine 1
/usr/local/go/src/os/signal/signal.go:151 +0x1f
goroutine 34 [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000514718 sp=0xc0005146f8 pc=0x43e7ee
runtime.chanrecv(0xc00018daa0, 0x0, 0x1)
/usr/local/go/src/runtime/chan.go:583 +0x3cd fp=0xc000514790 sp=0xc000514718 pc=0x40beed
runtime.chanrecv1(0x0?, 0x0?)
/usr/local/go/src/runtime/chan.go:442 +0x12 fp=0xc0005147b8 sp=0xc000514790 pc=0x40baf2
github.com/jmorganca/ollama/server.Serve.func2()
/go/src/github.com/jmorganca/ollama/server/routes.go:1030 +0x25 fp=0xc0005147e0 sp=0xc0005147b8 pc=0x9a16e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005147e8 sp=0xc0005147e0 pc=0x46e2c1
created by github.com/jmorganca/ollama/server.Serve in goroutine 1
/go/src/github.com/jmorganca/ollama/server/routes.go:1029 +0x3c7
goroutine 35 [GC worker (idle)]:
runtime.gopark(0x122dd1a80669?, 0x1?, 0xd?, 0x57?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000514f50 sp=0xc000514f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000514fe0 sp=0xc000514f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000514fe8 sp=0xc000514fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 19 [GC worker (idle)]:
runtime.gopark(0x122dd1a61207?, 0x1?, 0x85?, 0x17?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000052f50 sp=0xc000052f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000052fe0 sp=0xc000052f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000052fe8 sp=0xc000052fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 20 [GC worker (idle)]:
runtime.gopark(0x122c9e608baf?, 0x3?, 0xb8?, 0x51?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000053750 sp=0xc000053730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0000537e0 sp=0xc000053750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000537e8 sp=0xc0000537e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 50 [GC worker (idle)]:
runtime.gopark(0x122dd1a61872?, 0x3?, 0x97?, 0x2b?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000510750 sp=0xc000510730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005107e0 sp=0xc000510750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005107e8 sp=0xc0005107e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 51 [GC worker (idle)]:
runtime.gopark(0x11bea5e0?, 0x3?, 0x17?, 0xde?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000510f50 sp=0xc000510f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000510fe0 sp=0xc000510f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000510fe8 sp=0xc000510fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 52 [GC worker (idle)]:
runtime.gopark(0x122dd1a67852?, 0x1?, 0xe2?, 0xf8?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000511750 sp=0xc000511730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005117e0 sp=0xc000511750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005117e8 sp=0xc0005117e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 7 [GC worker (idle)]:
runtime.gopark(0x122dd1a616d1?, 0x3?, 0xa3?, 0x6c?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000058f50 sp=0xc000058f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000058fe0 sp=0xc000058f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000058fe8 sp=0xc000058fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 8 [GC worker (idle)]:
runtime.gopark(0x122dd1a617c2?, 0x3?, 0x37?, 0xd4?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000059750 sp=0xc000059730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0000597e0 sp=0xc000059750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000597e8 sp=0xc0000597e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 9 [GC worker (idle)]:
runtime.gopark(0x122dd1a61946?, 0x1?, 0x96?, 0xfb?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000059f50 sp=0xc000059f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000059fe0 sp=0xc000059f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000059fe8 sp=0xc000059fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 21 [GC worker (idle)]:
runtime.gopark(0x122dd1a618e5?, 0x1?, 0x87?, 0x63?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000053f50 sp=0xc000053f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000053fe0 sp=0xc000053f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000053fe8 sp=0xc000053fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 10 [GC worker (idle)]:
runtime.gopark(0x122dd1a60c1f?, 0x3?, 0x1f?, 0xda?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000494750 sp=0xc000494730 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0004947e0 sp=0xc000494750 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004947e8 sp=0xc0004947e0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 11 [GC worker (idle)]:
runtime.gopark(0x122dd1a6184c?, 0x1?, 0xe1?, 0x3f?, 0x0?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000494f50 sp=0xc000494f30 pc=0x43e7ee
runtime.gcBgMarkWorker()
/usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000494fe0 sp=0xc000494f50 pc=0x4213e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000494fe8 sp=0xc000494fe0 pc=0x46e2c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
/usr/local/go/src/runtime/mgc.go:1217 +0x1c
goroutine 71 [IO wait]:
runtime.gopark(0x478531a248414a0d?, 0xb?, 0x0?, 0x0?, 0x8?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0003035c8 sp=0xc0003035a8 pc=0x43e7ee
runtime.netpollblock(0x47f078?, 0x4092a6?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc000303600 sp=0xc0003035c8 pc=0x437277
internal/poll.runtime_pollWait(0x7f9958e84b98, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc000303620 sp=0xc000303600 pc=0x468a05
internal/poll.(*pollDesc).wait(0xc000518180?, 0xc000457500?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000303648 sp=0xc000303620 pc=0x4efd67
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000518180, {0xc000457500, 0x1500, 0x1500})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc0003036e0 sp=0xc000303648 pc=0x4f105a
net.(*netFD).Read(0xc000518180, {0xc000457500?, 0xc000457505?, 0x0?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc000303728 sp=0xc0003036e0 pc=0x569e05
net.(*conn).Read(0xc00005a3c0, {0xc000457500?, 0x427365?, 0xc0000c4538?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc000303770 sp=0xc000303728 pc=0x5780a5
net.(*TCPConn).Read(0xc000303808?, {0xc000457500?, 0xc000013db8?, 0x18?})
<autogenerated>:1 +0x25 fp=0xc0003037a0 sp=0xc000303770 pc=0x589fa5
crypto/tls.(*atLeastReader).Read(0xc000013db8, {0xc000457500?, 0xc000013db8?, 0x0?})
/usr/local/go/src/crypto/tls/conn.go:805 +0x3b fp=0xc0003037e8 sp=0xc0003037a0 pc=0x617cfb
bytes.(*Buffer).ReadFrom(0xc0000c4628, {0x1179cdc0, 0xc000013db8})
/usr/local/go/src/bytes/buffer.go:211 +0x98 fp=0xc000303840 sp=0xc0003037e8 pc=0x4a2f18
crypto/tls.(*Conn).readFromUntil(0xc0000c4380, {0x1179bfe0?, 0xc00005a3c0}, 0x0?)
/usr/local/go/src/crypto/tls/conn.go:827 +0xde fp=0xc000303880 sp=0xc000303840 pc=0x617ede
crypto/tls.(*Conn).readRecordOrCCS(0xc0000c4380, 0x0)
/usr/local/go/src/crypto/tls/conn.go:625 +0x250 fp=0xc000303c20 sp=0xc000303880 pc=0x6154b0
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:587
crypto/tls.(*Conn).Read(0xc0000c4380, {0xc0006a5000, 0x1000, 0x4775b3?})
/usr/local/go/src/crypto/tls/conn.go:1369 +0x158 fp=0xc000303c90 sp=0xc000303c20 pc=0x61b778
bufio.(*Reader).Read(0xc0006a2c60, {0xc0000fe200, 0x9, 0xc000303d38?})
/usr/local/go/src/bufio/bufio.go:244 +0x197 fp=0xc000303cc8 sp=0xc000303c90 pc=0x655057
io.ReadAtLeast({0x1179c320, 0xc0006a2c60}, {0xc0000fe200, 0x9, 0x9}, 0x9)
/usr/local/go/src/io/io.go:335 +0x90 fp=0xc000303d10 sp=0xc000303cc8 pc=0x49ac50
io.ReadFull(...)
/usr/local/go/src/io/io.go:354
net/http.http2readFrameHeader({0xc0000fe200, 0x9, 0x0?}, {0x1179c320?, 0xc0006a2c60?})
/usr/local/go/src/net/http/h2_bundle.go:1635 +0x65 fp=0xc000303d60 sp=0xc000303d10 pc=0x68e825
net/http.(*http2Framer).ReadFrame(0xc0000fe1c0)
/usr/local/go/src/net/http/h2_bundle.go:1899 +0x85 fp=0xc000303e08 sp=0xc000303d60 pc=0x68ef65
net/http.(*http2clientConnReadLoop).run(0xc000303f98)
/usr/local/go/src/net/http/h2_bundle.go:9338 +0x11f fp=0xc000303f60 sp=0xc000303e08 pc=0x6b1e1f
net/http.(*http2ClientConn).readLoop(0xc000002000)
/usr/local/go/src/net/http/h2_bundle.go:9233 +0x65 fp=0xc000303fc8 sp=0xc000303f60 pc=0x6b13a5
net/http.(*http2Transport).newClientConn.func3()
/usr/local/go/src/net/http/h2_bundle.go:7905 +0x25 fp=0xc000303fe0 sp=0xc000303fc8 pc=0x6aa285
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000303fe8 sp=0xc000303fe0 pc=0x46e2c1
created by net/http.(*http2Transport).newClientConn in goroutine 70
/usr/local/go/src/net/http/h2_bundle.go:7905 +0xcbe
goroutine 27 [IO wait]:
runtime.gopark(0x341dcfbc0c799a5f?, 0xb?, 0x0?, 0x0?, 0x9?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0001215c8 sp=0xc0001215a8 pc=0x43e7ee
runtime.netpollblock(0x47f078?, 0x4092a6?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc000121600 sp=0xc0001215c8 pc=0x437277
internal/poll.runtime_pollWait(0x7f9958e84c90, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc000121620 sp=0xc000121600 pc=0x468a05
internal/poll.(*pollDesc).wait(0xc000026100?, 0xc000458a00?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000121648 sp=0xc000121620 pc=0x4efd67
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000026100, {0xc000458a00, 0x1500, 0x1500})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc0001216e0 sp=0xc000121648 pc=0x4f105a
net.(*netFD).Read(0xc000026100, {0xc000458a00?, 0xc0001217b0?, 0x416b08?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc000121728 sp=0xc0001216e0 pc=0x569e05
net.(*conn).Read(0xc00013c018, {0xc000458a00?, 0x0?, 0xc0001217a0?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc000121770 sp=0xc000121728 pc=0x5780a5
net.(*TCPConn).Read(0xc000121808?, {0xc000458a00?, 0xc000522000?, 0x18?})
<autogenerated>:1 +0x25 fp=0xc0001217a0 sp=0xc000121770 pc=0x589fa5
crypto/tls.(*atLeastReader).Read(0xc000522000, {0xc000458a00?, 0xc000522000?, 0x0?})
/usr/local/go/src/crypto/tls/conn.go:805 +0x3b fp=0xc0001217e8 sp=0xc0001217a0 pc=0x617cfb
bytes.(*Buffer).ReadFrom(0xc0000c4d28, {0x1179cdc0, 0xc000522000})
/usr/local/go/src/bytes/buffer.go:211 +0x98 fp=0xc000121840 sp=0xc0001217e8 pc=0x4a2f18
crypto/tls.(*Conn).readFromUntil(0xc0000c4a80, {0x1179bfe0?, 0xc00013c018}, 0xc000121948?)
/usr/local/go/src/crypto/tls/conn.go:827 +0xde fp=0xc000121880 sp=0xc000121840 pc=0x617ede
crypto/tls.(*Conn).readRecordOrCCS(0xc0000c4a80, 0x0)
/usr/local/go/src/crypto/tls/conn.go:625 +0x250 fp=0xc000121c20 sp=0xc000121880 pc=0x6154b0
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:587
crypto/tls.(*Conn).Read(0xc0000c4a80, {0xc0004c1000, 0x1000, 0x117a4058?})
/usr/local/go/src/crypto/tls/conn.go:1369 +0x158 fp=0xc000121c90 sp=0xc000121c20 pc=0x61b778
bufio.(*Reader).Read(0xc00018cf60, {0xc000414120, 0x9, 0x6f056e?})
/usr/local/go/src/bufio/bufio.go:244 +0x197 fp=0xc000121cc8 sp=0xc000121c90 pc=0x655057
io.ReadAtLeast({0x1179c320, 0xc00018cf60}, {0xc000414120, 0x9, 0x9}, 0x9)
/usr/local/go/src/io/io.go:335 +0x90 fp=0xc000121d10 sp=0xc000121cc8 pc=0x49ac50
io.ReadFull(...)
/usr/local/go/src/io/io.go:354
net/http.http2readFrameHeader({0xc000414120, 0x9, 0x6b6392?}, {0x1179c320?, 0xc00018cf60?})
/usr/local/go/src/net/http/h2_bundle.go:1635 +0x65 fp=0xc000121d60 sp=0xc000121d10 pc=0x68e825
net/http.(*http2Framer).ReadFrame(0xc0004140e0)
/usr/local/go/src/net/http/h2_bundle.go:1899 +0x85 fp=0xc000121e08 sp=0xc000121d60 pc=0x68ef65
net/http.(*http2clientConnReadLoop).run(0xc000121f98)
/usr/local/go/src/net/http/h2_bundle.go:9338 +0x11f fp=0xc000121f60 sp=0xc000121e08 pc=0x6b1e1f
net/http.(*http2ClientConn).readLoop(0xc0004be000)
/usr/local/go/src/net/http/h2_bundle.go:9233 +0x65 fp=0xc000121fc8 sp=0xc000121f60 pc=0x6b13a5
net/http.(*http2Transport).newClientConn.func3()
/usr/local/go/src/net/http/h2_bundle.go:7905 +0x25 fp=0xc000121fe0 sp=0xc000121fc8 pc=0x6aa285
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000121fe8 sp=0xc000121fe0 pc=0x46e2c1
created by net/http.(*http2Transport).newClientConn in goroutine 26
/usr/local/go/src/net/http/h2_bundle.go:7905 +0xcbe
goroutine 206 [IO wait]:
runtime.gopark(0x1?, 0xb?, 0x0?, 0x0?, 0x7?)
/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000557da0 sp=0xc000557d80 pc=0x43e7ee
runtime.netpollblock(0x47f078?, 0x4092a6?, 0x0?)
/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc000557dd8 sp=0xc000557da0 pc=0x437277
internal/poll.runtime_pollWait(0x7f9958e84aa0, 0x72)
/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc000557df8 sp=0xc000557dd8 pc=0x468a05
internal/poll.(*pollDesc).wait(0xc000518380?, 0xc00080e2e1?, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000557e20 sp=0xc000557df8 pc=0x4efd67
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000518380, {0xc00080e2e1, 0x1, 0x1})
/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc000557eb8 sp=0xc000557e20 pc=0x4f105a
net.(*netFD).Read(0xc000518380, {0xc00080e2e1?, 0x91b0000081b?, 0x92b000008cb?})
/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc000557f00 sp=0xc000557eb8 pc=0x569e05
net.(*conn).Read(0xc000936008, {0xc00080e2e1?, 0x92b000008cb?, 0x91b0000081b?})
/usr/local/go/src/net/net.go:179 +0x45 fp=0xc000557f48 sp=0xc000557f00 pc=0x5780a5
net.(*TCPConn).Read(0x92b000008cb?, {0xc00080e2e1?, 0x100099b0000095b?, 0xc00045eb00?})
<autogenerated>:1 +0x25 fp=0xc000557f78 sp=0xc000557f48 pc=0x589fa5
net/http.(*connReader).backgroundRead(0xc00080e2d0)
/usr/local/go/src/net/http/server.go:683 +0x37 fp=0xc000557fc8 sp=0xc000557f78 pc=0x6c4ab7
net/http.(*connReader).startBackgroundRead.func2()
/usr/local/go/src/net/http/server.go:679 +0x25 fp=0xc000557fe0 sp=0xc000557fc8 pc=0x6c49e5
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000557fe8 sp=0xc000557fe0 pc=0x46e2c1
created by net/http.(*connReader).startBackgroundRead in goroutine 336
/usr/local/go/src/net/http/server.go:679 +0xba
rax 0x0
rbx 0x7f99579ff410
rcx 0x67
rdx 0x0
rdi 0x0
rsi 0x0
rbp 0x7f99579ff3d0
rsp 0x7f99579ff238
r8 0x7f9938015c90
r9 0x7f9938015cb8
r10 0x7f999fadcb40
r11 0x7f999fc66a80
r12 0x0
r13 0x7f993801aa38
r14 0x7f9938015a60
r15 0x0
rip 0x7f999fc6a7fd
rflags 0x10283
cs 0x33
fs 0x0
gs 0x0
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3011/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3011/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7185
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7185/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7185/comments
|
https://api.github.com/repos/ollama/ollama/issues/7185/events
|
https://github.com/ollama/ollama/issues/7185
| 2,583,207,847
|
I_kwDOJ0Z1Ps6Z-J-n
| 7,185
|
[Feature Request] Command to browse the model library / search for a specific model from the ollama CLI.
|
{
"login": "AFellowSpeedrunner",
"id": 73440604,
"node_id": "MDQ6VXNlcjczNDQwNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73440604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AFellowSpeedrunner",
"html_url": "https://github.com/AFellowSpeedrunner",
"followers_url": "https://api.github.com/users/AFellowSpeedrunner/followers",
"following_url": "https://api.github.com/users/AFellowSpeedrunner/following{/other_user}",
"gists_url": "https://api.github.com/users/AFellowSpeedrunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AFellowSpeedrunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AFellowSpeedrunner/subscriptions",
"organizations_url": "https://api.github.com/users/AFellowSpeedrunner/orgs",
"repos_url": "https://api.github.com/users/AFellowSpeedrunner/repos",
"events_url": "https://api.github.com/users/AFellowSpeedrunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/AFellowSpeedrunner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-10-12T15:50:00
| 2024-10-13T12:22:57
| 2024-10-13T04:56:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I recently had an idea. What if there was a command to search and browse the model library from the ollama CLI?
I'm imagining something like `ollama search llama3` and `ollama browse`.
For example, the search would return something like this (rough example — GitHub formatting broke my original layout):
```
MODEL                LATEST SIZE   PARAMETERS
llama3               4.7GB         8B, 70B
llama3.1             4.7GB         8B, 70B, 405B
llama3.2             2.0GB         1B, 3B
[username]/[model]   [size]        [parameters]
```
`ollama browse` would show the same table, but sorted newest-first (maybe with options to sort by popularity and other criteria, like the library website?).
I hope the devs think this is a good idea because I'd love to see this implemented!
Thank you.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7185/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4931
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4931/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4931/comments
|
https://api.github.com/repos/ollama/ollama/issues/4931/events
|
https://github.com/ollama/ollama/issues/4931
| 2,341,634,421
|
I_kwDOJ0Z1Ps6LkoF1
| 4,931
|
Release Note Issue
|
{
"login": "karaketir16",
"id": 27349806,
"node_id": "MDQ6VXNlcjI3MzQ5ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/27349806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karaketir16",
"html_url": "https://github.com/karaketir16",
"followers_url": "https://api.github.com/users/karaketir16/followers",
"following_url": "https://api.github.com/users/karaketir16/following{/other_user}",
"gists_url": "https://api.github.com/users/karaketir16/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karaketir16/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karaketir16/subscriptions",
"organizations_url": "https://api.github.com/users/karaketir16/orgs",
"repos_url": "https://api.github.com/users/karaketir16/repos",
"events_url": "https://api.github.com/users/karaketir16/events{/privacy}",
"received_events_url": "https://api.github.com/users/karaketir16/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-08T12:25:51
| 2024-06-08T20:27:53
| 2024-06-08T20:27:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
In the release notes for [v0.1.34](https://github.com/ollama/ollama/releases/tag/v0.1.34), under the "What's Changed" section, the environment variable is incorrectly listed as `OLLAMA_MAX_LOADED`. The correct version is `OLLAMA_MAX_LOADED_MODELS`.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4931/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2295
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2295/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2295/comments
|
https://api.github.com/repos/ollama/ollama/issues/2295/events
|
https://github.com/ollama/ollama/issues/2295
| 2,111,185,400
|
I_kwDOJ0Z1Ps591iH4
| 2,295
|
multimodal processing doesn't work for one-shot CLI
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-01-31T23:50:32
| 2024-02-02T05:33:07
| 2024-02-02T05:33:07
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This doesn't work:
```
% ollama run llava "whats in this image ./image.jpg"
I'm sorry, but as a text-based AI language model, I am not able to directly view or interpret images. However, if the image is related to
the topic of data science or machine learning, it could potentially be something like a dataset, a visualization of data, a chart, or any
other form of data representation. Please provide more context about the image you are referring to so that I can attempt to answer your
question.
```
But this does:
```
% ollama run llava
>>> what's in this image ./image.jpg
Added image './image.jpg'
The image shows a hot dog in a bun, garnished with mustard and ketchup.
>>> Send a message (/? for help)
```
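As a workaround while the one-shot CLI path is broken, the REST API's `/api/generate` route accepts base64-encoded images in an `images` list alongside the prompt, so a single request can include the picture directly. This sketch only builds the JSON payload; actually sending it assumes a running ollama server, which is omitted here.

```python
import base64
import json

def build_generate_payload(model, prompt, image_bytes):
    # /api/generate takes base64-encoded images in an "images" array,
    # so a one-shot request can carry the image without interactive mode.
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    })

# Example payload for the llava case above (image bytes are placeholder data).
payload = build_generate_payload("llava", "what's in this image?", b"\x89PNG\r\n")
```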
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2295/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2295/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7685
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7685/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7685/comments
|
https://api.github.com/repos/ollama/ollama/issues/7685/events
|
https://github.com/ollama/ollama/issues/7685
| 2,661,647,183
|
I_kwDOJ0Z1Ps6epYNP
| 7,685
|
Streaming chat/completions behind a gateway with timeout
|
{
"login": "Upabjojr",
"id": 4128856,
"node_id": "MDQ6VXNlcjQxMjg4NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4128856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Upabjojr",
"html_url": "https://github.com/Upabjojr",
"followers_url": "https://api.github.com/users/Upabjojr/followers",
"following_url": "https://api.github.com/users/Upabjojr/following{/other_user}",
"gists_url": "https://api.github.com/users/Upabjojr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Upabjojr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Upabjojr/subscriptions",
"organizations_url": "https://api.github.com/users/Upabjojr/orgs",
"repos_url": "https://api.github.com/users/Upabjojr/repos",
"events_url": "https://api.github.com/users/Upabjojr/events{/privacy}",
"received_events_url": "https://api.github.com/users/Upabjojr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-11-15T11:04:48
| 2024-12-23T07:53:52
| 2024-12-23T07:53:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am using Ollama on a server behind a gateway that has a 30 second timeout on every forwarded HTTP request. If Ollama takes more than 30 seconds to respond to the HTTP request, the connection will be reset.
So far, enabling streaming on chat/completions has been an effective workaround, as streaming chunks of generated text takes much less than 30 seconds.
There are, however, some cases that still cause this issue, in particular:
1. Posting a very long context may take more than 30 seconds to process before the streaming of chunks starts.
2. If the Ollama server is busy responding to many parallel requests, the streaming may take longer than 30 seconds to start.
In order to avoid hitting the timeout threshold that resets the connection to Ollama on my gateway, I was wondering if it is possible to add support to chat/completions for streaming empty strings immediately, even before the LLM text generation has started?
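One workaround that would not require server changes is a thin client- or proxy-side layer that emits empty keep-alive chunks until the first real token arrives, so the gateway's idle timer keeps getting reset. A minimal sketch (the `heartbeat_stream` helper, `slow_tokens`, and the timings are illustrative, not part of Ollama):

```python
import queue
import threading
import time

def heartbeat_stream(chunks, heartbeat=0.05):
    """Yield "" every `heartbeat` seconds while the upstream iterator is
    silent, then pass its real chunks through unchanged. The empty chunks
    reset a gateway's idle timeout without altering the final text."""
    q = queue.Queue()
    done = object()  # sentinel marking end of upstream

    def pump():
        for c in chunks:
            q.put(c)
        q.put(done)

    threading.Thread(target=pump, daemon=True).start()
    while True:
        try:
            item = q.get(timeout=heartbeat)
        except queue.Empty:
            yield ""  # keep-alive chunk
            continue
        if item is done:
            return
        yield item

def slow_tokens():
    # Simulates long prompt processing before the first token.
    time.sleep(0.2)
    yield "Hello"
    yield " world"
```

With a 30-second gateway timeout, a heartbeat of a few seconds would be more realistic; the short values here only keep the example fast.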
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7685/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5839
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5839/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5839/comments
|
https://api.github.com/repos/ollama/ollama/issues/5839/events
|
https://github.com/ollama/ollama/issues/5839
| 2,421,807,379
|
I_kwDOJ0Z1Ps6QWdkT
| 5,839
|
CUDA error: CUBLAS_STATUS_NOT_INITIALIZED
|
{
"login": "CaptainDP",
"id": 19919798,
"node_id": "MDQ6VXNlcjE5OTE5Nzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/19919798?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaptainDP",
"html_url": "https://github.com/CaptainDP",
"followers_url": "https://api.github.com/users/CaptainDP/followers",
"following_url": "https://api.github.com/users/CaptainDP/following{/other_user}",
"gists_url": "https://api.github.com/users/CaptainDP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaptainDP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaptainDP/subscriptions",
"organizations_url": "https://api.github.com/users/CaptainDP/orgs",
"repos_url": "https://api.github.com/users/CaptainDP/repos",
"events_url": "https://api.github.com/users/CaptainDP/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaptainDP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-22T03:11:37
| 2024-07-22T07:01:56
| 2024-07-22T07:01:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
error msg:
CUDA error: CUBLAS_STATUS_NOT_INITIALIZED
current device: 0, in function cublas_handle at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda/common.cuh:826
cublasCreate_v2(&cublas_handles[device])
model: qwen2-sft, converted to GGUF with llama.cpp/convert_hf_to_gguf.py
env1: Ubuntu 20 + A800: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED
env2: macOS: works fine
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.2.7
|
{
"login": "CaptainDP",
"id": 19919798,
"node_id": "MDQ6VXNlcjE5OTE5Nzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/19919798?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaptainDP",
"html_url": "https://github.com/CaptainDP",
"followers_url": "https://api.github.com/users/CaptainDP/followers",
"following_url": "https://api.github.com/users/CaptainDP/following{/other_user}",
"gists_url": "https://api.github.com/users/CaptainDP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaptainDP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaptainDP/subscriptions",
"organizations_url": "https://api.github.com/users/CaptainDP/orgs",
"repos_url": "https://api.github.com/users/CaptainDP/repos",
"events_url": "https://api.github.com/users/CaptainDP/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaptainDP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5839/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1758
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1758/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1758/comments
|
https://api.github.com/repos/ollama/ollama/issues/1758/events
|
https://github.com/ollama/ollama/issues/1758
| 2,061,960,258
|
I_kwDOJ0Z1Ps565wRC
| 1,758
|
💡 "ollama --verify" to validate a model
|
{
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/followers",
"following_url": "https://api.github.com/users/adriens/following{/other_user}",
"gists_url": "https://api.github.com/users/adriens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adriens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adriens/subscriptions",
"organizations_url": "https://api.github.com/users/adriens/orgs",
"repos_url": "https://api.github.com/users/adriens/repos",
"events_url": "https://api.github.com/users/adriens/events{/privacy}",
"received_events_url": "https://api.github.com/users/adriens/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-01-02T05:33:31
| 2024-03-11T21:26:48
| 2024-03-11T20:33:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# ❔ About
Sometimes, we may only need to validate that we can compile a model with `ollama`... without having to download the whole base model.
**👉 In a few words, this would make it fast and easy to tell whether an `ollama` Modelfile could be used. 👈**
# 💡 Feature request
Implement `ollama --verify` which exits with success if:
- ✔️ the file is well formatted
- ✔️ the base image exists
# 💰 Benefits
- **Make it possible to implement code validation with CI**... and then protect source code
- **Save resources (storage, CPU)** while validating an `ollama` Modelfile (especially on GH Actions)
# 🔖 Related stuff
- https://github.com/jmorganca/ollama/issues/1473
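# 🧪 Illustration
For illustration, the "well formatted" part of such a check could be as small as a directive scan. This is only a sketch of the idea (`verify_modelfile` is hypothetical and far simpler than Ollama's real Modelfile grammar):

```python
# Directives that actually appear in Ollama Modelfiles; the real parser
# also validates arguments, quoting, and multi-line blocks.
KNOWN = {"FROM", "PARAMETER", "TEMPLATE", "SYSTEM", "ADAPTER", "LICENSE", "MESSAGE"}

def verify_modelfile(text: str) -> list[str]:
    """Return a list of error strings; an empty list means the file passed."""
    errors = []
    directives = []
    for n, line in enumerate(text.splitlines(), 1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments are fine
        keyword = line.split(maxsplit=1)[0].upper()
        if keyword not in KNOWN:
            errors.append(f"line {n}: unknown directive {keyword!r}")
        directives.append(keyword)
    if "FROM" not in directives:
        errors.append("missing required FROM directive")
    return errors
```

A CI job could then fail the build whenever `verify_modelfile` returns a non-empty list, without pulling any model weights.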
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1758/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1758/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2483
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2483/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2483/comments
|
https://api.github.com/repos/ollama/ollama/issues/2483/events
|
https://github.com/ollama/ollama/pull/2483
| 2,133,357,614
|
PR_kwDOJ0Z1Ps5mzpiS
| 2,483
|
update default registry domain
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2024-02-14T00:40:03
| 2024-12-10T21:50:54
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2483",
"html_url": "https://github.com/ollama/ollama/pull/2483",
"diff_url": "https://github.com/ollama/ollama/pull/2483.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2483.patch",
"merged_at": null
}
|
update default registry domain from registry.ollama.ai to ollama.com
migrate models by moving them to their new location; this is one-directional
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2483/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6859
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6859/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6859/comments
|
https://api.github.com/repos/ollama/ollama/issues/6859/events
|
https://github.com/ollama/ollama/issues/6859
| 2,534,379,222
|
I_kwDOJ0Z1Ps6XD47W
| 6,859
|
Something got changed in the build process and I seem unable to force CUDA/CUBLAS use.
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-09-18T18:18:21
| 2024-09-18T18:39:35
| 2024-09-18T18:39:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
How do I ensure that ollama is built with CUDA/CUBLAS support?
I don't see anything in the README.md to that end.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
just built from source just now.
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6859/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6234
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6234/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6234/comments
|
https://api.github.com/repos/ollama/ollama/issues/6234/events
|
https://github.com/ollama/ollama/issues/6234
| 2,453,866,619
|
I_kwDOJ0Z1Ps6SQwh7
| 6,234
|
File Name with a Space Is Not Recognized
|
{
"login": "Mo-enen",
"id": 13920065,
"node_id": "MDQ6VXNlcjEzOTIwMDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/13920065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mo-enen",
"html_url": "https://github.com/Mo-enen",
"followers_url": "https://api.github.com/users/Mo-enen/followers",
"following_url": "https://api.github.com/users/Mo-enen/following{/other_user}",
"gists_url": "https://api.github.com/users/Mo-enen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mo-enen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mo-enen/subscriptions",
"organizations_url": "https://api.github.com/users/Mo-enen/orgs",
"repos_url": "https://api.github.com/users/Mo-enen/repos",
"events_url": "https://api.github.com/users/Mo-enen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mo-enen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-08-07T16:19:09
| 2024-08-08T07:40:05
| 2024-08-08T07:40:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
As the image below shows:
The Chinese content is the correct response from the LLM, not an error message.

### OS
Windows
### Ollama version
0.3.4
|
{
"login": "Mo-enen",
"id": 13920065,
"node_id": "MDQ6VXNlcjEzOTIwMDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/13920065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mo-enen",
"html_url": "https://github.com/Mo-enen",
"followers_url": "https://api.github.com/users/Mo-enen/followers",
"following_url": "https://api.github.com/users/Mo-enen/following{/other_user}",
"gists_url": "https://api.github.com/users/Mo-enen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mo-enen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mo-enen/subscriptions",
"organizations_url": "https://api.github.com/users/Mo-enen/orgs",
"repos_url": "https://api.github.com/users/Mo-enen/repos",
"events_url": "https://api.github.com/users/Mo-enen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mo-enen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6234/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6234/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/498
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/498/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/498/comments
|
https://api.github.com/repos/ollama/ollama/issues/498/events
|
https://github.com/ollama/ollama/issues/498
| 1,888,084,370
|
I_kwDOJ0Z1Ps5wieGS
| 498
|
SSL certificate error.
|
{
"login": "ggozad",
"id": 183103,
"node_id": "MDQ6VXNlcjE4MzEwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/183103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggozad",
"html_url": "https://github.com/ggozad",
"followers_url": "https://api.github.com/users/ggozad/followers",
"following_url": "https://api.github.com/users/ggozad/following{/other_user}",
"gists_url": "https://api.github.com/users/ggozad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggozad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggozad/subscriptions",
"organizations_url": "https://api.github.com/users/ggozad/orgs",
"repos_url": "https://api.github.com/users/ggozad/repos",
"events_url": "https://api.github.com/users/ggozad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggozad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-08T17:42:09
| 2023-09-08T20:25:35
| 2023-09-08T20:25:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey!
Just installed Ollama on my brand new MacBook. When trying to pull a model it seems there is a certificate error on the model registry:
```
ollama pull llama2
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": tls: failed to verify certificate: SecPolicyCreateSSL error: 0z
```
Let me know if I can provide any more info.
|
{
"login": "ggozad",
"id": 183103,
"node_id": "MDQ6VXNlcjE4MzEwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/183103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggozad",
"html_url": "https://github.com/ggozad",
"followers_url": "https://api.github.com/users/ggozad/followers",
"following_url": "https://api.github.com/users/ggozad/following{/other_user}",
"gists_url": "https://api.github.com/users/ggozad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggozad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggozad/subscriptions",
"organizations_url": "https://api.github.com/users/ggozad/orgs",
"repos_url": "https://api.github.com/users/ggozad/repos",
"events_url": "https://api.github.com/users/ggozad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggozad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/498/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5385
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5385/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5385/comments
|
https://api.github.com/repos/ollama/ollama/issues/5385/events
|
https://github.com/ollama/ollama/issues/5385
| 2,381,908,177
|
I_kwDOJ0Z1Ps6N-QjR
| 5,385
|
Provide a single command for "serve + pull model", to be used in CI/CD
|
{
"login": "steren",
"id": 360895,
"node_id": "MDQ6VXNlcjM2MDg5NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/360895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steren",
"html_url": "https://github.com/steren",
"followers_url": "https://api.github.com/users/steren/followers",
"following_url": "https://api.github.com/users/steren/following{/other_user}",
"gists_url": "https://api.github.com/users/steren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steren/subscriptions",
"organizations_url": "https://api.github.com/users/steren/orgs",
"repos_url": "https://api.github.com/users/steren/repos",
"events_url": "https://api.github.com/users/steren/events{/privacy}",
"received_events_url": "https://api.github.com/users/steren/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 5
| 2024-06-29T18:35:39
| 2024-07-29T18:17:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am building a container image on top of the official `ollama/ollama` image and I want to store in this image the model I intend to serve, so that I do not have to pull it after startup. The use case is to run Ollama in an autoscaled container environment.
The issue is that today, Ollama requires `ollama serve` to be running before the `ollama pull` command can be used.
# Expected
I'd expect to be able to use a very simple Dockerfile like this:
```
FROM ollama/ollama
RUN ollama pull gemma:2b
CMD ["serve"]
```
# Observed
I cannot use a simple Dockerfile; I need a bash script that starts the server, waits for it to be ready, and only then pulls the model:
```
wait_for_ollama() {
while ! nc -z localhost 8080; do
sleep 1 # Wait 1 second before checking again
done
}
# Start ollama serve in the background
ollama serve &
# Wait for ollama serve to start listening
wait_for_ollama
echo "ollama serve is now listening on port 8080"
# Run ollama pull
ollama pull gemma:2b
# Indicate successful completion
echo "ollama pull gemma:2b completed"
```
That I then reference in my Dockerfile:
```
FROM ollama/ollama
ADD pull.sh /
RUN ./pull.sh
CMD ["serve"]
```
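Until a combined command exists, the workaround can at least be condensed into the Dockerfile itself with a BuildKit heredoc. This is an untested sketch, assuming heredoc support (`# syntax=docker/dockerfile:1`) and that `ollama list` only succeeds once the server answers:

```dockerfile
# syntax=docker/dockerfile:1
FROM ollama/ollama
RUN <<EOF
ollama serve &
pid=$!
# wait for the API to come up before pulling
until ollama list >/dev/null 2>&1; do sleep 1; done
ollama pull gemma:2b
kill $pid
EOF
CMD ["serve"]
```

This keeps the model weights baked into the image layer, so autoscaled instances start without a network pull.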
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5385/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5385/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5051
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5051/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5051/comments
|
https://api.github.com/repos/ollama/ollama/issues/5051/events
|
https://github.com/ollama/ollama/pull/5051
| 2,354,141,159
|
PR_kwDOJ0Z1Ps5yhhWO
| 5,051
|
add model capabilities
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-06-14T21:29:14
| 2024-07-02T21:26:09
| 2024-07-02T21:26:07
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5051",
"html_url": "https://github.com/ollama/ollama/pull/5051",
"diff_url": "https://github.com/ollama/ollama/pull/5051.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5051.patch",
"merged_at": "2024-07-02T21:26:07"
}
|
detect completion capability by looking at model KVs. with this change, ollama correctly detects a model like [jina/jina-embeddings-v2-small-en](https://ollama.com/jina/jina-embeddings-v2-small-en) is an embedding model (as opposed to a text completion model)
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5051/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7226
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7226/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7226/comments
|
https://api.github.com/repos/ollama/ollama/issues/7226/events
|
https://github.com/ollama/ollama/issues/7226
| 2,592,038,509
|
I_kwDOJ0Z1Ps6af15t
| 7,226
|
Library tags not present in model information - RFC
|
{
"login": "elsatch",
"id": 653433,
"node_id": "MDQ6VXNlcjY1MzQzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/653433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsatch",
"html_url": "https://github.com/elsatch",
"followers_url": "https://api.github.com/users/elsatch/followers",
"following_url": "https://api.github.com/users/elsatch/following{/other_user}",
"gists_url": "https://api.github.com/users/elsatch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elsatch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elsatch/subscriptions",
"organizations_url": "https://api.github.com/users/elsatch/orgs",
"repos_url": "https://api.github.com/users/elsatch/repos",
"events_url": "https://api.github.com/users/elsatch/events{/privacy}",
"received_events_url": "https://api.github.com/users/elsatch/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-10-16T14:15:34
| 2024-10-16T14:15:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Today, I was using third-party software (Msty) alongside Ollama. That program supports text and vision models. When I tried MiniCPM-V, a vision model, it was not detected by the program. I had assumed that the Vision tag shown in the library would also be present in the model information, but that does not seem to be the case.
Would it make sense to add that information to the model description?
Sample vision model information on the Ollama library:

Sample output using ollama show modelname:

As you can see, the Vision tag from the library is not present in the model information. (I assume it could be inferred from the projector section.)
This issue could be related to #5682.
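Until such a tag is exposed, a client could apply the heuristic suggested above: treat a model as vision-capable when its show/info payload contains a projector section. The field names below are assumptions for illustration, not a documented Ollama contract:

```python
import json

def infer_vision_capability(show_response: dict) -> bool:
    """Heuristic: a multimodal GGUF bundle typically ships projector
    metadata alongside the base model, so the presence of a projector
    section is a reasonable proxy for a "vision" tag.
    (Key names here are hypothetical.)"""
    return any(key.lower().startswith("projector") for key in show_response)

# Example payload shaped like a show-endpoint response (hypothetical keys).
sample = json.loads(
    '{"model_info": {}, "projector_info": {"clip.has_vision_encoder": true}}'
)
print(infer_vision_capability(sample))  # True
```

A text-only model, whose payload lacks any projector section, would return `False` under the same check.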
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7226/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4398
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4398/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4398/comments
|
https://api.github.com/repos/ollama/ollama/issues/4398/events
|
https://github.com/ollama/ollama/issues/4398
| 2,292,426,426
|
I_kwDOJ0Z1Ps6Io6a6
| 4,398
|
KeyError: 'name' when using completions with tool use in mistral
|
{
"login": "r4881t",
"id": 81687400,
"node_id": "MDQ6VXNlcjgxNjg3NDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/81687400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/r4881t",
"html_url": "https://github.com/r4881t",
"followers_url": "https://api.github.com/users/r4881t/followers",
"following_url": "https://api.github.com/users/r4881t/following{/other_user}",
"gists_url": "https://api.github.com/users/r4881t/gists{/gist_id}",
"starred_url": "https://api.github.com/users/r4881t/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/r4881t/subscriptions",
"organizations_url": "https://api.github.com/users/r4881t/orgs",
"repos_url": "https://api.github.com/users/r4881t/repos",
"events_url": "https://api.github.com/users/r4881t/events{/privacy}",
"received_events_url": "https://api.github.com/users/r4881t/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-05-13T10:28:57
| 2024-09-28T03:27:42
| 2024-05-15T15:20:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am running Ollama + Litellm with Autogen.
When I try it, I keep getting the error below
```
Traceback (most recent call last):
File "/Users/pranavprakash/workspace/litellm/venv/lib/python3.11/site-packages/litellm/llms/ollama_chat.py", line 422, in ollama_acompletion
"function": {"name": function_call["name"], "arguments": json.dumps(function_call["arguments"])},
~~~~~~~~~~~~~^^^^^^^^
KeyError: 'name'
Traceback (most recent call last):
File "/Users/pranavprakash/workspace/litellm/venv/lib/python3.11/site-packages/litellm/main.py", line 324, in acompletion
response = await init_response
^^^^^^^^^^^^^^^^^^^
File "/Users/pranavprakash/workspace/litellm/venv/lib/python3.11/site-packages/litellm/llms/ollama_chat.py", line 448, in ollama_acompletion
raise e
File "/Users/pranavprakash/workspace/litellm/venv/lib/python3.11/site-packages/litellm/llms/ollama_chat.py", line 422, in ollama_acompletion
"function": {"name": function_call["name"], "arguments": json.dumps(function_call["arguments"])},
~~~~~~~~~~~~~^^^^^^^^
KeyError: 'name'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/pranavprakash/workspace/litellm/venv/lib/python3.11/site-packages/litellm/proxy/proxy_server.py", line 3653, in chat_completion
responses = await asyncio.gather(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/pranavprakash/workspace/litellm/venv/lib/python3.11/site-packages/litellm/utils.py", line 3708, in wrapper_async
raise e
File "/Users/pranavprakash/workspace/litellm/venv/lib/python3.11/site-packages/litellm/utils.py", line 3536, in wrapper_async
result = await original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/pranavprakash/workspace/litellm/venv/lib/python3.11/site-packages/litellm/main.py", line 345, in acompletion
raise exception_type(
^^^^^^^^^^^^^^^
File "/Users/pranavprakash/workspace/litellm/venv/lib/python3.11/site-packages/litellm/utils.py", line 9220, in exception_type
raise e
File "/Users/pranavprakash/workspace/litellm/venv/lib/python3.11/site-packages/litellm/utils.py", line 9195, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: 'name'
INFO: 127.0.0.1:50731 - "POST /chat/completions HTTP/1.1" 500 Internal Server Error
```
The post call being made is as below
```
DEBUG - openai._base_client - Request options: {'method': 'post', 'url': '/chat/completions', 'files': None, 'json_data': {'messages': [{'content': "Your name is AgentX. You can help with web3 q&a and research. You should decompose the task and then use the appropriate tools to solve it.RULES:1. When searching for categories, if there are multiple possible categories, use all of them2. When searching for trending coins, order by volume in last 24 hours.3. When preparing final response include ALL the relevant data asked in the question4. As much as possible show the absolute numbers and the percentage change along with the date5. You may use the 'previous messages' to understand the context of current QueryReturn 'TERMINATE' when the task is done.", 'role': 'system'}, {'content': "Previous Messages: \nQuery: What are your capabilities?. Answer: As your web3 assistant, I have several capabilities to help with various crypto-related tasks:\n\n1. **Cryptocurrency Market Information**: I can provide real-time and historical data on cryptocurrency prices, market caps, volume, and changes for individual coins or across categories. This includes data on top gainers, top losers, and trending coins.\n\n2. **Categories Exploration**: I can search and provide information on various cryptocurrency categories, including DeFi, NFTs, AI, SocialFi, and more.\n\n3. **NFT Insights**: I can fetch trending NFTs based on different criteria and time frames, and provide details on specific NFTs owned by users.\n\n4. **User and Wallet Services**: I can retrieve user details, check wallet balances for specific user IDs, and perform token transfers between wallets on supported chains.\n\n5. **Transaction History**: I can provide a detailed transaction history for specific users, helping track their activities and balances over time.\n\n6. 
**Global Market Insights**: I can give an overview of the global cryptocurrency market capitalization and volume trends.\n\nThese tools and services are designed to help you navigate the web3 space, whether you're investing, researching, or managing crypto assets and NFTs. \nQuery: What about AI category?. Answer: Here are the top 3 tokens in the Artificial Intelligence (AI) category based on trading volume over the last month:\n\n1. **Render (RNDR)**\n - **Current Price**: USD 10.90\n - **Market Cap**: USD 4,226,950,113\n - **Price Change over 30 Days**: +28.24%\n\n2. **Fetch.ai (FET)**\n - **Current Price**: USD 2.21\n - **Market Cap**: USD 5,560,924,883\n - **Price Change over 30 Days**: -5.77%\n\n3. **The Graph (GRT)**\n - **Current Price**: USD 0.283\n - **Market Cap**: USD 2,683,949,970\n - **Price Change over 30 Days**: -2.78%\n\nThese tokens are currently the most traded within the AI category for the past month. Please consider these details for your investment decisions. \nQuery: okay, so I want to invest some money in SocialFi category. I have been hearing a lot about it. Can you please tell me about the top 3 tokens alongwith info about them? Limit your research to last one month only.. Answer: The top 3 tokens in the SocialFi category based on the last month's data, ordered by trading volume, are:\n\n1. **Theta Network (THETA)**\n - **Current Price**: USD 2.01\n - **Market Cap**: USD 2,004,443,416\n - **Price Change over 30 Days**: -27.17%\n - **Volume High (24h)**: USD 2.04\n - **Volume Low (24h)**: USD 1.98\n\n2. **CyberConnect (CYBER)**\n - **Current Price**: USD 8.02\n - **Market Cap**: USD 171,831,014\n - **Price Change over 30 Days**: -31.18%\n - **Fully Diluted Market Valuation**: USD 800,072,376\n - **Volume High (24h)**: USD 8.01\n - **Volume Low (24h)**: USD 7.67\n\n3. 
**Steem Dollars (SBD)**\n - **Current Price**: USD 3.66\n - **Market Cap**: USD 48,714,447\n - **Price Change over 30 Days**: -19.95%\n - **Volume High (24h)**: USD 3.92\n - **Volume Low (24h)**: USD 3.53\n\nThese tokens are currently the most traded within the SocialFi category for the past month, but note their decline in price during this period. Please consider this data for your investment decisions.. Current: \nUser_id: '2d468bcf-ed13-4767-8c67-ffd0880a7a92'. Query: What is the current price of bitcoin?", 'role': 'user'}], 'model': 'NotRequired', 'stream': False, 'tools': [{'type': 'function', 'function': {'description': 'Available Cryptocurrency Categories', 'name': 'available_cryptocurrency_categories', 'parameters': {'type': 'object', 'properties': {}, 'required': []}}}, {'type': 'function', 'function': {'description': 'Current Price of a single or multiple Cryptocurrency Tokens', 'name': 'current_price_tool', 'parameters': {'type': 'object', 'properties': {'tokens': {'anyOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}], 'description': 'A single token or multiple tokens'}, 'vs_currency': {'type': 'string', 'default': 'usd', 'description': 'Base currency'}}, 'required': ['tokens']}}}, {'type': 'function', 'function': {'description': 'Historical Price of single Cryptocurrency Token', 'name': 'historical_price_tool', 'parameters': {'type': 'object', 'properties': {'token': {'type': 'string', 'default': 'usd', 'description': 'vs_currency'}, 'vs_currency': {'type': 'string', 'default': 'usd', 'description': 'vs_currency'}, 'days': {'type': 'number', 'default': 7, 'description': 'days'}}, 'required': ['token']}}}, {'type': 'function', 'function': {'description': 'Top Gainers of Cryptocurrency Tokens', 'name': 'top_gainers_tool', 'parameters': {'type': 'object', 'properties': {'vs_currency': {'type': 'string', 'default': '24h', 'description': 'duration'}, 'duration': {'type': 'string', 'default': '24h', 'description': 'duration'}, 
'top_coins': {'type': 'integer', 'default': 10, 'description': 'result_count'}, 'result_count': {'type': 'integer', 'default': 10, 'description': 'result_count'}}, 'required': []}}}, {'type': 'function', 'function': {'description': 'Top Losers of Cryptocurrency Tokens', 'name': 'top_losers_tool', 'parameters': {'type': 'object', 'properties': {'vs_currency': {'type': 'string', 'default': '24h', 'description': 'duration'}, 'duration': {'type': 'string', 'default': '24h', 'description': 'duration'}, 'top_coins': {'type': 'integer', 'default': 10, 'description': 'result_count'}, 'result_count': {'type': 'integer', 'default': 10, 'description': 'result_count'}}, 'required': []}}}, {'type': 'function', 'function': {'description': 'Search cryptocurrency and get its market cap, liquidity, volume, etc. Not meant for historical data analysis. Use the tokens from this response along with historical_price_tool for historical data analysis.', 'name': 'search_coins', 'parameters': {'type': 'object', 'properties': {'category': {'type': 'string', 'default': None, 'description': 'The category-id in which coins are search for'}, 'market_cap_min': {'type': 'integer', 'default': None, 'description': 'Minimum Market Cap filter'}, 'market_cap_max': {'type': 'integer', 'default': None, 'description': 'Maximum Market Cap filter'}, 'fdv_min': {'type': 'integer', 'default': None, 'description': 'Fully Diluted Value minimum filter'}, 'fdv_max': {'type': 'integer', 'default': None, 'description': 'Fully Diluted Value maximum filter'}, 'circulating_supply_percentage_min': {'type': 'integer', 'default': None, 'description': 'Min Circulating supply as a percentage of total supply filter'}, 'circulating_supply_percentage_max': {'type': 'integer', 'default': None, 'description': 'Max circulating supply as a percentage of total supply filter'}, 'total_volume': {'type': 'integer', 'default': None, 'description': 'Total Volume Traded'}, 'order': {'enum': ['market_cap_asc', 'market_cap_desc', 
'volume_asc', 'volume_desc'], 'type': 'string', 'default': 'market_cap_desc', 'description': 'Ordering of results'}, 'count': {'type': 'integer', 'default': 10, 'description': 'count'}, 'price_change_percentage': {'enum': ['1h', '24h', '7d', '14d', '30d', '200d', '1y'], 'type': 'string', 'default': '24h', 'description': 'price change percentage Duration'}, 'sparkline': {'type': 'boolean', 'default': False, 'description': 'Show Spark Line'}, 'vs_currency': {'type': 'string', 'default': 'usd', 'description': 'The base currency'}}, 'required': []}}}, {'type': 'function', 'function': {'description': 'Trending Cryptocurrency Tokens Across all categories', 'name': 'trending_coins', 'parameters': {'type': 'object', 'properties': {}, 'required': []}}}, {'type': 'function', 'function': {'description': 'Global Market Cap and Volume', 'name': 'global_market_cap_and_volume', 'parameters': {'type': 'object', 'properties': {'days': {'type': 'integer', 'default': 7, 'description': 'Get data for these many days'}, 'vs_currency': {'type': 'string', 'default': 'usd', 'description': 'The base currency'}}, 'required': []}}}, {'type': 'function', 'function': {'description': 'Trending NFTs', 'name': 'trending_nfts', 'parameters': {'type': 'object', 'properties': {'time_frame': {'enum': ['one_hour', 'two_hours', 'eight_hours', 'one_day', 'two_days', 'seven_days'], 'type': 'string', 'description': 'Trending Mint Time Frame'}, 'criteria': {'enum': ['unique_wallets', 'total_mints'], 'type': 'string', 'description': 'Trending Mint Criteria'}, 'limit': {'type': 'integer', 'default': 10, 'description': 'limit'}}, 'required': ['time_frame', 'criteria']}}}, {'type': 'function', 'function': {'description': 'Get NFTs for user_id', 'name': 'get_nfts', 'parameters': {'type': 'object', 'properties': {'user_id': {'type': 'string', 'description': 'The user_id of the user '}}, 'required': ['user_id']}}}, {'type': 'function', 'function': {'description': 'Get Transaction History for user_id', 'name': 
'get_transaction_history', 'parameters': {'type': 'object', 'properties': {'user_id': {'type': 'string', 'description': 'The user_id of the user '}}, 'required': ['user_id']}}}, {'type': 'function', 'function': {'description': "Transfer Tokens from one wallet to another.Before calling this function, make sure to (a) call get_wallet_balance to check the balance AND to get the currency address AND (b) call the supported_chains to get the 'slug', which will be used as a parameter in this function.", 'name': 'transfer_tokens', 'parameters': {'type': 'object', 'properties': {'uniqueUserId': {'type': 'string', 'description': 'The user_id of the user '}, 'to': {'type': 'string', 'description': 'The user_id of the recipient '}, 'amount': {'type': 'string', 'description': 'The amount of tokens to transfer '}, 'currencyAddress': {'type': 'string', 'description': 'The address of the currency '}, 'chain': {'type': 'string', 'default': 'ethereum', 'description': "The 'slug' of blockchain to transfer on "}}, 'required': ['uniqueUserId', 'to', 'amount', 'currencyAddress']}}}, {'type': 'function', 'function': {'description': 'Supported Chains for EVM Wallet', 'name': 'supported_chains', 'parameters': {'type': 'object', 'properties': {}, 'required': []}}}, {'type': 'function', 'function': {'description': 'Get User Details For user_id including wallet address, points and more.', 'name': 'get_user_details', 'parameters': {'type': 'object', 'properties': {'user_id': {'type': 'string', 'description': 'User Id for whom details to be fetched'}}, 'required': ['user_id']}}}, {'type': 'function', 'function': {'description': 'Get Wallet Balance for user_id', 'name': 'get_wallet_balance', 'parameters': {'type': 'object', 'properties': {'user_id': {'type': 'string', 'description': 'The user_id of the user '}}, 'required': ['user_id']}}}]}}
```
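The traceback above comes from indexing `function_call["name"]` directly, which raises `KeyError` when the model emits a partial or oddly shaped tool call. As a hedged sketch (not litellm's actual fix), a defensive parse might fall back to `None` instead of crashing:

```python
import json

def extract_tool_call(function_call: dict):
    """Defensively pull name/arguments out of a model-emitted tool call.

    Models sometimes omit "name" or nest it under "function"; .get()
    avoids the KeyError seen in the traceback above.
    """
    name = function_call.get("name") or function_call.get("function", {}).get("name")
    if name is None:
        return None  # malformed tool call: let the caller retry or surface an error
    arguments = function_call.get("arguments", {})
    return {"name": name, "arguments": json.dumps(arguments)}

print(extract_tool_call({"name": "current_price_tool", "arguments": {"tokens": "bitcoin"}}))
print(extract_tool_call({"arguments": {}}))  # None: missing name handled gracefully
```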
My installed models are:
```
% ollama list
NAME ID SIZE MODIFIED
llama3:latest a6990ed6be41 4.7 GB 6 days ago
mistral:latest 61e88e884507 4.1 GB About an hour ago
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.34
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4398/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6589
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6589/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6589/comments
|
https://api.github.com/repos/ollama/ollama/issues/6589/events
|
https://github.com/ollama/ollama/issues/6589
| 2,500,132,754
|
I_kwDOJ0Z1Ps6VBP-S
| 6,589
|
Can this be used with "LM Studio" to share models? If so, how can it be modified?
|
{
"login": "Willy-Shenn",
"id": 79782696,
"node_id": "MDQ6VXNlcjc5NzgyNjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/79782696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Willy-Shenn",
"html_url": "https://github.com/Willy-Shenn",
"followers_url": "https://api.github.com/users/Willy-Shenn/followers",
"following_url": "https://api.github.com/users/Willy-Shenn/following{/other_user}",
"gists_url": "https://api.github.com/users/Willy-Shenn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Willy-Shenn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Willy-Shenn/subscriptions",
"organizations_url": "https://api.github.com/users/Willy-Shenn/orgs",
"repos_url": "https://api.github.com/users/Willy-Shenn/repos",
"events_url": "https://api.github.com/users/Willy-Shenn/events{/privacy}",
"received_events_url": "https://api.github.com/users/Willy-Shenn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-09-02T06:42:02
| 2024-09-02T21:58:24
| 2024-09-02T21:58:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am currently using two UI systems, but they cannot share models (possibly due to differences in how the models are identified and created). Even after modifying the environment variables, neither UI can use models from the same path. Can anyone guide me on how to modify the two UIs so they can use models from the same path? I would be very grateful.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6589/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4448
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4448/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4448/comments
|
https://api.github.com/repos/ollama/ollama/issues/4448/events
|
https://github.com/ollama/ollama/issues/4448
| 2,297,242,533
|
I_kwDOJ0Z1Ps6I7SOl
| 4,448
|
Streaming Chat Completion via OpenAI API should support stream option to include Usage
|
{
"login": "odrobnik",
"id": 333270,
"node_id": "MDQ6VXNlcjMzMzI3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/333270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odrobnik",
"html_url": "https://github.com/odrobnik",
"followers_url": "https://api.github.com/users/odrobnik/followers",
"following_url": "https://api.github.com/users/odrobnik/following{/other_user}",
"gists_url": "https://api.github.com/users/odrobnik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odrobnik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odrobnik/subscriptions",
"organizations_url": "https://api.github.com/users/odrobnik/orgs",
"repos_url": "https://api.github.com/users/odrobnik/repos",
"events_url": "https://api.github.com/users/odrobnik/events{/privacy}",
"received_events_url": "https://api.github.com/users/odrobnik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-05-15T08:43:18
| 2024-09-03T15:39:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
In streaming mode, the OpenAI chat completion API has a new option to include usage information after the chunks: add `"stream_options": { "include_usage": true }` to the request.
Then the final chunks will look like this:
```
...
data: {"id":"chatcmpl-9P4UJf7DEdyXVro2VOMRMT9qKR0bC","object":"chat.completion.chunk","created":1715762479,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":1,"delta":{},"logprobs":null,"finish_reason":"stop"}],"usage":null}
data: {"id":"chatcmpl-9P4UJf7DEdyXVro2VOMRMT9qKR0bC","object":"chat.completion.chunk","created":1715762479,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":2,"delta":{},"logprobs":null,"finish_reason":"stop"}],"usage":null}
data: {"id":"chatcmpl-9P4UJf7DEdyXVro2VOMRMT9qKR0bC","object":"chat.completion.chunk","created":1715762479,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[],"usage":{"prompt_tokens":24,"completion_tokens":58,"total_tokens":82}}
data: [DONE]
```
The final chunk contains no choices, but a `usage`:
```
"usage":{"prompt_tokens":24,"completion_tokens":58,"total_tokens":82}
```
This usage covers all the generations from the stream.
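A client consuming such a stream only needs to watch for the one chunk whose `usage` is non-null. The sketch below parses a captured SSE transcript (sample data abridged from the chunks above, not produced by a live call):

```python
import json

# Pull token usage from the final chunk of a captured SSE stream.
raw_stream = """\
data: {"id":"chatcmpl-1","object":"chat.completion.chunk","choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"usage":null}
data: {"id":"chatcmpl-1","object":"chat.completion.chunk","choices":[],"usage":{"prompt_tokens":24,"completion_tokens":58,"total_tokens":82}}
data: [DONE]
"""

usage = None
for line in raw_stream.splitlines():
    payload = line.removeprefix("data: ").strip()
    if not payload or payload == "[DONE]":
        continue
    chunk = json.loads(payload)
    if chunk.get("usage"):  # only the final chunk carries usage
        usage = chunk["usage"]

print(usage)  # {'prompt_tokens': 24, 'completion_tokens': 58, 'total_tokens': 82}
```

The same loop works unchanged whether or not the server emits usage: `usage` simply stays `None` when the option is off.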
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4448/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4448/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4706
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4706/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4706/comments
|
https://api.github.com/repos/ollama/ollama/issues/4706/events
|
https://github.com/ollama/ollama/issues/4706
| 2,323,719,550
|
I_kwDOJ0Z1Ps6KgSV-
| 4,706
|
22B Codestral model
|
{
"login": "DuckyBlender",
"id": 42645784,
"node_id": "MDQ6VXNlcjQyNjQ1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/42645784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DuckyBlender",
"html_url": "https://github.com/DuckyBlender",
"followers_url": "https://api.github.com/users/DuckyBlender/followers",
"following_url": "https://api.github.com/users/DuckyBlender/following{/other_user}",
"gists_url": "https://api.github.com/users/DuckyBlender/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DuckyBlender/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DuckyBlender/subscriptions",
"organizations_url": "https://api.github.com/users/DuckyBlender/orgs",
"repos_url": "https://api.github.com/users/DuckyBlender/repos",
"events_url": "https://api.github.com/users/DuckyBlender/events{/privacy}",
"received_events_url": "https://api.github.com/users/DuckyBlender/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-05-29T16:19:24
| 2024-05-29T20:02:22
| 2024-05-29T20:02:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/mistralai/Codestral-22B-v0.1
https://mistral.ai/news/codestral/
|
{
"login": "DuckyBlender",
"id": 42645784,
"node_id": "MDQ6VXNlcjQyNjQ1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/42645784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DuckyBlender",
"html_url": "https://github.com/DuckyBlender",
"followers_url": "https://api.github.com/users/DuckyBlender/followers",
"following_url": "https://api.github.com/users/DuckyBlender/following{/other_user}",
"gists_url": "https://api.github.com/users/DuckyBlender/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DuckyBlender/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DuckyBlender/subscriptions",
"organizations_url": "https://api.github.com/users/DuckyBlender/orgs",
"repos_url": "https://api.github.com/users/DuckyBlender/repos",
"events_url": "https://api.github.com/users/DuckyBlender/events{/privacy}",
"received_events_url": "https://api.github.com/users/DuckyBlender/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4706/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4706/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1919
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1919/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1919/comments
|
https://api.github.com/repos/ollama/ollama/issues/1919/events
|
https://github.com/ollama/ollama/issues/1919
| 2,076,033,030
|
I_kwDOJ0Z1Ps57vcAG
| 1,919
|
Model created from GGUF with `ollama create` performs poorly
|
{
"login": "quanpinjie",
"id": 2564119,
"node_id": "MDQ6VXNlcjI1NjQxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2564119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quanpinjie",
"html_url": "https://github.com/quanpinjie",
"followers_url": "https://api.github.com/users/quanpinjie/followers",
"following_url": "https://api.github.com/users/quanpinjie/following{/other_user}",
"gists_url": "https://api.github.com/users/quanpinjie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quanpinjie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quanpinjie/subscriptions",
"organizations_url": "https://api.github.com/users/quanpinjie/orgs",
"repos_url": "https://api.github.com/users/quanpinjie/repos",
"events_url": "https://api.github.com/users/quanpinjie/events{/privacy}",
"received_events_url": "https://api.github.com/users/quanpinjie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-11T08:25:25
| 2024-03-12T22:35:26
| 2024-03-12T22:35:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I converted Baichuan2 to GGUF and created a model.
The resulting performance is poor; do I need to configure anything else?
modelfile:
FROM ./baichuan2-ggml-model-f16.gguf

|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1919/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4779
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4779/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4779/comments
|
https://api.github.com/repos/ollama/ollama/issues/4779/events
|
https://github.com/ollama/ollama/pull/4779
| 2,329,458,752
|
PR_kwDOJ0Z1Ps5xNjcq
| 4,779
|
update welcome prompt in windows to `llama3`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-02T04:00:45
| 2024-06-02T04:05:52
| 2024-06-02T04:05:51
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4779",
"html_url": "https://github.com/ollama/ollama/pull/4779",
"diff_url": "https://github.com/ollama/ollama/pull/4779.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4779.patch",
"merged_at": "2024-06-02T04:05:51"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4779/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1073
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1073/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1073/comments
|
https://api.github.com/repos/ollama/ollama/issues/1073/events
|
https://github.com/ollama/ollama/issues/1073
| 1,987,359,804
|
I_kwDOJ0Z1Ps52dLQ8
| 1,073
|
More fine-grained download speed
|
{
"login": "Dialga",
"id": 5157928,
"node_id": "MDQ6VXNlcjUxNTc5Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5157928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dialga",
"html_url": "https://github.com/Dialga",
"followers_url": "https://api.github.com/users/Dialga/followers",
"following_url": "https://api.github.com/users/Dialga/following{/other_user}",
"gists_url": "https://api.github.com/users/Dialga/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dialga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dialga/subscriptions",
"organizations_url": "https://api.github.com/users/Dialga/orgs",
"repos_url": "https://api.github.com/users/Dialga/repos",
"events_url": "https://api.github.com/users/Dialga/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dialga/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-11-10T10:33:32
| 2024-01-17T23:52:24
| 2024-01-17T23:52:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, when downloading large models, the progress shows `16/19 GB`. It would be more helpful to show a float, e.g. `16.22/19.3 GB`.
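For illustration, a minimal Go sketch of the suggested formatting (the `formatProgress` helper is hypothetical, not Ollama's actual progress code):

```go
package main

import "fmt"

// formatProgress renders downloaded/total bytes with two decimal places,
// e.g. "16.22/19.30 GB" instead of the truncated "16/19 GB".
// Illustrative only; not taken from Ollama's progress bar implementation.
func formatProgress(completed, total int64) string {
	const gb = 1 << 30
	return fmt.Sprintf("%.2f/%.2f GB", float64(completed)/gb, float64(total)/gb)
}

func main() {
	fmt.Println(formatProgress(17415000000, 20722000000)) // prints "16.22/19.30 GB"
}
```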
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1073/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8410
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8410/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8410/comments
|
https://api.github.com/repos/ollama/ollama/issues/8410/events
|
https://github.com/ollama/ollama/pull/8410
| 2,785,995,103
|
PR_kwDOJ0Z1Ps6HpLv_
| 8,410
|
sample: add sampling package for new engine
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2025-01-14T02:04:26
| 2025-01-29T23:09:07
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8410",
"html_url": "https://github.com/ollama/ollama/pull/8410",
"diff_url": "https://github.com/ollama/ollama/pull/8410.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8410.patch",
"merged_at": null
}
|
This package introduces a first pass at the sampler for the new engine.
It's super simple to write your own sampler; people would also be able to build and run their own from source.
Would like your thoughts on the following:
- Go-isms! Please help with writing idiomatic Go.
- Should we keep min-p? OpenAI doesn't support anything other than temperature sampling...
- Optimization considerations below, and any others that you can think of
Known optimizations to be considered (as follow-up):
- [ ] Possibly caching the softmax call -> if softmax is computed multiple times for a token, the result should just be cached
- [ ] TopK does a simple sort - it could use a min heap with 5 nodes instead
- [ ] TopK should also be a performance-improving sampling technique - we should trim the amount of vocab the sampler has to go through. This could be done by tracking the valid indices for a token at each step as a set or an ordered map
References:
- Common sampling methods: https://huyenchip.com/2024/01/16/sampling.html#top_k
- Min-p sampling: https://arxiv.org/pdf/2407.01082
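The min-heap TopK follow-up above can be sketched roughly as follows (a standalone illustration of the technique, not code from this PR; the `logit` struct and the sample scores are made up):

```go
package main

import (
	"container/heap"
	"fmt"
)

// logit pairs a token id with its raw score; illustrative only.
type logit struct {
	id    int
	score float32
}

// minHeap keeps the smallest score at the root, so the heap always
// holds the k largest logits seen so far.
type minHeap []logit

func (h minHeap) Len() int           { return len(h) }
func (h minHeap) Less(i, j int) bool { return h[i].score < h[j].score }
func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x any)        { *h = append(*h, x.(logit)) }
func (h *minHeap) Pop() any {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// topK returns the k highest-scoring logits in O(n log k),
// avoiding a full O(n log n) sort of the vocabulary.
func topK(logits []logit, k int) []logit {
	h := &minHeap{}
	heap.Init(h)
	for _, l := range logits {
		if h.Len() < k {
			heap.Push(h, l)
		} else if l.score > (*h)[0].score {
			// Replace the current minimum and restore the heap property.
			(*h)[0] = l
			heap.Fix(h, 0)
		}
	}
	return *h
}

func main() {
	ls := []logit{{0, 0.1}, {1, 2.5}, {2, 1.7}, {3, 0.9}, {4, 3.2}, {5, 0.4}}
	for _, l := range topK(ls, 3) {
		fmt.Println(l.id, l.score)
	}
}
```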
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8410/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6070
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6070/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6070/comments
|
https://api.github.com/repos/ollama/ollama/issues/6070/events
|
https://github.com/ollama/ollama/issues/6070
| 2,437,096,604
|
I_kwDOJ0Z1Ps6RQySc
| 6,070
|
Run Ollama on multiple GPU using ollama run
|
{
"login": "atharvnagrikar",
"id": 111486339,
"node_id": "U_kgDOBqUlgw",
"avatar_url": "https://avatars.githubusercontent.com/u/111486339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atharvnagrikar",
"html_url": "https://github.com/atharvnagrikar",
"followers_url": "https://api.github.com/users/atharvnagrikar/followers",
"following_url": "https://api.github.com/users/atharvnagrikar/following{/other_user}",
"gists_url": "https://api.github.com/users/atharvnagrikar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atharvnagrikar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atharvnagrikar/subscriptions",
"organizations_url": "https://api.github.com/users/atharvnagrikar/orgs",
"repos_url": "https://api.github.com/users/atharvnagrikar/repos",
"events_url": "https://api.github.com/users/atharvnagrikar/events{/privacy}",
"received_events_url": "https://api.github.com/users/atharvnagrikar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-07-30T07:20:07
| 2024-07-30T17:00:55
| 2024-07-30T17:00:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I have 2 GPUs, each with 40 GB of memory, and I want to run llama3.1 70b across both. Are there any features to run Ollama in a distributed way?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6070/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8680
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8680/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8680/comments
|
https://api.github.com/repos/ollama/ollama/issues/8680/events
|
https://github.com/ollama/ollama/issues/8680
| 2,819,658,888
|
I_kwDOJ0Z1Ps6oEJSI
| 8,680
|
api/chat not working in parallel with docker-compose
|
{
"login": "acclayer7",
"id": 178514264,
"node_id": "U_kgDOCqPpWA",
"avatar_url": "https://avatars.githubusercontent.com/u/178514264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/acclayer7",
"html_url": "https://github.com/acclayer7",
"followers_url": "https://api.github.com/users/acclayer7/followers",
"following_url": "https://api.github.com/users/acclayer7/following{/other_user}",
"gists_url": "https://api.github.com/users/acclayer7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/acclayer7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/acclayer7/subscriptions",
"organizations_url": "https://api.github.com/users/acclayer7/orgs",
"repos_url": "https://api.github.com/users/acclayer7/repos",
"events_url": "https://api.github.com/users/acclayer7/events{/privacy}",
"received_events_url": "https://api.github.com/users/acclayer7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-30T00:54:32
| 2025-01-30T01:05:37
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello, my Ollama host has enough memory (16 GB VRAM). I set OLLAMA_NUM_PARALLEL=2 and OLLAMA_MAX_LOADED_MODELS=2, but I don't see any memory increase.
I use docker-compose to run it; however, when using the API, VRAM usage does not increase. It stays the same, and I still have 10 GB of VRAM left over. It should take up more if requests run in parallel, right?
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8680/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8195
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8195/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8195/comments
|
https://api.github.com/repos/ollama/ollama/issues/8195/events
|
https://github.com/ollama/ollama/issues/8195
| 2,753,790,476
|
I_kwDOJ0Z1Ps6kI4IM
| 8,195
|
ERROR : max retries exceeded
|
{
"login": "Jinish2170",
"id": 121560356,
"node_id": "U_kgDOBz7dJA",
"avatar_url": "https://avatars.githubusercontent.com/u/121560356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jinish2170",
"html_url": "https://github.com/Jinish2170",
"followers_url": "https://api.github.com/users/Jinish2170/followers",
"following_url": "https://api.github.com/users/Jinish2170/following{/other_user}",
"gists_url": "https://api.github.com/users/Jinish2170/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jinish2170/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jinish2170/subscriptions",
"organizations_url": "https://api.github.com/users/Jinish2170/orgs",
"repos_url": "https://api.github.com/users/Jinish2170/repos",
"events_url": "https://api.github.com/users/Jinish2170/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jinish2170/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-12-21T06:10:47
| 2024-12-25T07:34:12
| 2024-12-24T19:27:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have not been able to install new models like llama3.2 or llama3.3.
The error message shown is:
"Error: max retries exceeded: Get"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/dd/dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20241221%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20241221T060447Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=0ab49575e0ffae6239b1c6eaada0b4daae343e219e9ca48c11be71e2703e27a6": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host"
### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8195/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/75
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/75/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/75/comments
|
https://api.github.com/repos/ollama/ollama/issues/75/events
|
https://github.com/ollama/ollama/issues/75
| 1,801,998,729
|
I_kwDOJ0Z1Ps5raFGJ
| 75
|
error on `ollama run`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2023-07-13T01:45:38
| 2023-07-13T02:21:14
| 2023-07-13T02:21:14
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
`ollama run` sometimes shows a `malformed HTTP response` error:
```
ollama run orca
Error: Post "http://127.0.0.1:11434/api/pull": net/http: HTTP/1.x transport connection broken: malformed HTTP response "{\"total\":2142590208,\"completed\":2142590208,\"percent\":100}"
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/75/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/75/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7223
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7223/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7223/comments
|
https://api.github.com/repos/ollama/ollama/issues/7223/events
|
https://github.com/ollama/ollama/issues/7223
| 2,591,323,421
|
I_kwDOJ0Z1Ps6adHUd
| 7,223
|
How to add support for RWKV?
|
{
"login": "MollySophia",
"id": 20746884,
"node_id": "MDQ6VXNlcjIwNzQ2ODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/20746884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MollySophia",
"html_url": "https://github.com/MollySophia",
"followers_url": "https://api.github.com/users/MollySophia/followers",
"following_url": "https://api.github.com/users/MollySophia/following{/other_user}",
"gists_url": "https://api.github.com/users/MollySophia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MollySophia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MollySophia/subscriptions",
"organizations_url": "https://api.github.com/users/MollySophia/orgs",
"repos_url": "https://api.github.com/users/MollySophia/repos",
"events_url": "https://api.github.com/users/MollySophia/events{/privacy}",
"received_events_url": "https://api.github.com/users/MollySophia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 2
| 2024-10-16T09:54:37
| 2024-10-16T11:41:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi! I would like to try to get RWKV v6 models working with ollama.
llama.cpp already supports them.
- Currently ollama fails to load the model due to a bug in llama.cpp. Here's the fix PR: https://github.com/ggerganov/llama.cpp/pull/9907
- Another issue is the chat template. I wonder how should a chat template be added for a new model? Specifically, how does ollama decide which template to use when loading a modelfile?
Thanks!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7223/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7223/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6895
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6895/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6895/comments
|
https://api.github.com/repos/ollama/ollama/issues/6895/events
|
https://github.com/ollama/ollama/pull/6895
| 2,539,685,256
|
PR_kwDOJ0Z1Ps58NyVk
| 6,895
|
CI: adjust step ordering for win arm to match x64
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-20T21:09:30
| 2024-09-20T21:21:27
| 2024-09-20T21:20:57
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6895",
"html_url": "https://github.com/ollama/ollama/pull/6895",
"diff_url": "https://github.com/ollama/ollama/pull/6895.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6895.patch",
"merged_at": "2024-09-20T21:20:57"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6895/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4606
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4606/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4606/comments
|
https://api.github.com/repos/ollama/ollama/issues/4606/events
|
https://github.com/ollama/ollama/issues/4606
| 2,314,442,587
|
I_kwDOJ0Z1Ps6J85db
| 4,606
|
MiniCPM-Llama3-V 2.5
|
{
"login": "ycyy",
"id": 10897377,
"node_id": "MDQ6VXNlcjEwODk3Mzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/10897377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ycyy",
"html_url": "https://github.com/ycyy",
"followers_url": "https://api.github.com/users/ycyy/followers",
"following_url": "https://api.github.com/users/ycyy/following{/other_user}",
"gists_url": "https://api.github.com/users/ycyy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ycyy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ycyy/subscriptions",
"organizations_url": "https://api.github.com/users/ycyy/orgs",
"repos_url": "https://api.github.com/users/ycyy/repos",
"events_url": "https://api.github.com/users/ycyy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ycyy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-05-24T05:46:39
| 2024-06-09T17:11:22
| 2024-06-09T17:11:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
[MiniCPM-V](https://github.com/OpenBMB/MiniCPM-V)
> [2024.05.24] We release the [MiniCPM-Llama3-V 2.5 gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf), which supports [llama.cpp](https://github.com/OpenBMB/MiniCPM-V#inference-with-llamacpp) inference and provides a 6~8 token/s smooth decoding on mobile phones. Try it now!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4606/reactions",
"total_count": 9,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/4606/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/349
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/349/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/349/comments
|
https://api.github.com/repos/ollama/ollama/issues/349/events
|
https://github.com/ollama/ollama/pull/349
| 1,850,683,721
|
PR_kwDOJ0Z1Ps5X7j1Q
| 349
|
close open files
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-14T23:09:15
| 2023-08-14T23:15:59
| 2023-08-14T23:15:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/349",
"html_url": "https://github.com/ollama/ollama/pull/349",
"diff_url": "https://github.com/ollama/ollama/pull/349.diff",
"patch_url": "https://github.com/ollama/ollama/pull/349.patch",
"merged_at": "2023-08-14T23:15:58"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/349/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8150
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8150/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8150/comments
|
https://api.github.com/repos/ollama/ollama/issues/8150/events
|
https://github.com/ollama/ollama/issues/8150
| 2,746,656,707
|
I_kwDOJ0Z1Ps6jtqfD
| 8,150
|
model run failed
|
{
"login": "kingluxun",
"id": 189943745,
"node_id": "U_kgDOC1JPwQ",
"avatar_url": "https://avatars.githubusercontent.com/u/189943745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingluxun",
"html_url": "https://github.com/kingluxun",
"followers_url": "https://api.github.com/users/kingluxun/followers",
"following_url": "https://api.github.com/users/kingluxun/following{/other_user}",
"gists_url": "https://api.github.com/users/kingluxun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingluxun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingluxun/subscriptions",
"organizations_url": "https://api.github.com/users/kingluxun/orgs",
"repos_url": "https://api.github.com/users/kingluxun/repos",
"events_url": "https://api.github.com/users/kingluxun/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingluxun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-12-18T03:27:10
| 2024-12-18T03:37:53
| 2024-12-18T03:37:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
With 0.5.4:
```
Error: llama runner process has terminated: error:/usr/lib/ollama/runners/cuda_v12_avx/ollama_llama_server: undefined symbol: ggml_backend_cuda_reg
```
The same setup runs normally with 0.5.1.
```
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Oct_29_23:50:19_PDT_2024
Cuda compilation tools, release 12.6, V12.6.85
Build cuda_12.6.r12.6/compiler.35059454_0
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "kingluxun",
"id": 189943745,
"node_id": "U_kgDOC1JPwQ",
"avatar_url": "https://avatars.githubusercontent.com/u/189943745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingluxun",
"html_url": "https://github.com/kingluxun",
"followers_url": "https://api.github.com/users/kingluxun/followers",
"following_url": "https://api.github.com/users/kingluxun/following{/other_user}",
"gists_url": "https://api.github.com/users/kingluxun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingluxun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingluxun/subscriptions",
"organizations_url": "https://api.github.com/users/kingluxun/orgs",
"repos_url": "https://api.github.com/users/kingluxun/repos",
"events_url": "https://api.github.com/users/kingluxun/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingluxun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8150/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6763
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6763/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6763/comments
|
https://api.github.com/repos/ollama/ollama/issues/6763/events
|
https://github.com/ollama/ollama/issues/6763
| 2,520,631,609
|
I_kwDOJ0Z1Ps6WPck5
| 6,763
|
`ollama show` displays context length in scientific notation
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-09-11T19:47:47
| 2024-09-11T21:58:42
| 2024-09-11T21:58:41
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?

### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6763/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3815
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3815/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3815/comments
|
https://api.github.com/repos/ollama/ollama/issues/3815/events
|
https://github.com/ollama/ollama/issues/3815
| 2,255,853,207
|
I_kwDOJ0Z1Ps6GdZaX
| 3,815
|
OpenSSL SSL_read: error:0A000126
|
{
"login": "xuya227939",
"id": 16217324,
"node_id": "MDQ6VXNlcjE2MjE3MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/16217324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuya227939",
"html_url": "https://github.com/xuya227939",
"followers_url": "https://api.github.com/users/xuya227939/followers",
"following_url": "https://api.github.com/users/xuya227939/following{/other_user}",
"gists_url": "https://api.github.com/users/xuya227939/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuya227939/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuya227939/subscriptions",
"organizations_url": "https://api.github.com/users/xuya227939/orgs",
"repos_url": "https://api.github.com/users/xuya227939/repos",
"events_url": "https://api.github.com/users/xuya227939/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuya227939/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-04-22T07:56:53
| 2024-08-23T20:57:36
| 2024-08-23T20:57:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
jiang@jiang-MS-7D90:~$ curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%#=#=-# # curl: (56) OpenSSL SSL_read: error:0A000126:SSL routines::unexpected eof while reading, errno 0
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3815/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/2714
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2714/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2714/comments
|
https://api.github.com/repos/ollama/ollama/issues/2714/events
|
https://github.com/ollama/ollama/issues/2714
| 2,151,554,978
|
I_kwDOJ0Z1Ps6APh-i
| 2,714
|
Misunderstanding of ollama num_ctx parameter and context window
|
{
"login": "PhilipAmadasun",
"id": 55031054,
"node_id": "MDQ6VXNlcjU1MDMxMDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/55031054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipAmadasun",
"html_url": "https://github.com/PhilipAmadasun",
"followers_url": "https://api.github.com/users/PhilipAmadasun/followers",
"following_url": "https://api.github.com/users/PhilipAmadasun/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipAmadasun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipAmadasun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipAmadasun/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipAmadasun/orgs",
"repos_url": "https://api.github.com/users/PhilipAmadasun/repos",
"events_url": "https://api.github.com/users/PhilipAmadasun/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipAmadasun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 26
| 2024-02-23T18:00:42
| 2024-12-09T09:25:49
| 2024-02-23T19:34:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm trying to understand the relationship between the context window and the `num_ctx` parameter. Say I'm using Mistral, whose max context (according to Google) is 8,000 tokens and whose "attention span" (according to Google) is 128,000. If I send a 27,000-token user query, what exactly happens? If I set `num_ctx: 4096`, does Mistral just take the last 4,096-token sequence from the 27K query, then process that sequence along with the window it keeps from the previously established overall context (for the RESTful API, I mean the `body['context']` field)?
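For reference, `num_ctx` is set per request through the API's `options` field. A minimal sketch of such a request body (the model name and prompt here are placeholders, not from the original question):

```python
import json

# Build a request body for POST /api/generate; num_ctx caps the total
# token window (prompt + prior context + response) for this request.
payload = {
    "model": "mistral",
    "prompt": "Summarize the document below...",
    "options": {"num_ctx": 4096},
}

body = json.dumps(payload)
print(body)
```

Anything beyond `num_ctx` tokens is truncated by the server before inference; the model never sees the full 27K query in one pass.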
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2714/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2714/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8440
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8440/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8440/comments
|
https://api.github.com/repos/ollama/ollama/issues/8440/events
|
https://github.com/ollama/ollama/issues/8440
| 2,789,502,572
|
I_kwDOJ0Z1Ps6mRG5s
| 8,440
|
Using `mkdir -p` rather than checking manually if a dir exists before creating it A.K.A. Storing (very) large files in /root vs. "Error: mkdir /usr/share/ollama/XXX: file exists"
|
{
"login": "liar666",
"id": 3216927,
"node_id": "MDQ6VXNlcjMyMTY5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3216927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liar666",
"html_url": "https://github.com/liar666",
"followers_url": "https://api.github.com/users/liar666/followers",
"following_url": "https://api.github.com/users/liar666/following{/other_user}",
"gists_url": "https://api.github.com/users/liar666/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liar666/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liar666/subscriptions",
"organizations_url": "https://api.github.com/users/liar666/orgs",
"repos_url": "https://api.github.com/users/liar666/repos",
"events_url": "https://api.github.com/users/liar666/events{/privacy}",
"received_events_url": "https://api.github.com/users/liar666/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2025-01-15T11:06:49
| 2025-01-16T15:18:37
| 2025-01-16T14:59:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
Like many Linux users, I have separate `/home` and `/` partitions. When I created them (~10 years ago), I allocated only ~40GB to `/`, which was _way_ more than enough to store the OS plus all the packages I use day to day.
Unfortunately, when I experiment with new models in `ollama`, I very quickly fill up my `/` partition, which is very bad for the system.
As a longtime Linux user, I'm used to this kind of situation and know a few tricks to work around it, such as creating a directory on another partition and using a symbolic link to "redirect" the `/root/dir/` content to it.
Unfortunately, when I try to do that with `ollama`, the daemon/service refuses to start with the following error:
```
Jan 15 11:45:40 LOCAL-MACHINE systemd[1]: Started ollama.service - Ollama Service.
Jan 15 11:45:40 LOCAL-MACHINE ollama[1665365]: 2025/01/15 11:45:40 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PRO>
Jan 15 11:45:40 LOCAL-MACHINE ollama[1665365]: Error: mkdir /usr/share/ollama/.ollama/models: file exists
Jan 15 11:45:40 LOCAL-MACHINE systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 11:45:40 LOCAL-MACHINE systemd[1]: ollama.service: Failed with result 'exit-code'.
```
Apparently, you try to create `/usr/share/ollama/.ollama/models` at every start of the daemon, and it fails even when the path already exists as a symlink.
Could you correct that or allow us to change the directory where `ollama` stores the models?
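One workaround (not mentioned in the original report) is to point the service at another partition via the `OLLAMA_MODELS` environment variable in a systemd drop-in; the drop-in path and target directory below are illustrative:

```ini
# /etc/systemd/system/ollama.service.d/override.conf (illustrative path)
[Service]
Environment="OLLAMA_MODELS=/data/ollama/models"
```

After `systemctl daemon-reload` and `systemctl restart ollama`, models are stored under the new path; the directory must be writable by the user the service runs as.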
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "liar666",
"id": 3216927,
"node_id": "MDQ6VXNlcjMyMTY5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3216927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liar666",
"html_url": "https://github.com/liar666",
"followers_url": "https://api.github.com/users/liar666/followers",
"following_url": "https://api.github.com/users/liar666/following{/other_user}",
"gists_url": "https://api.github.com/users/liar666/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liar666/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liar666/subscriptions",
"organizations_url": "https://api.github.com/users/liar666/orgs",
"repos_url": "https://api.github.com/users/liar666/repos",
"events_url": "https://api.github.com/users/liar666/events{/privacy}",
"received_events_url": "https://api.github.com/users/liar666/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8440/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3408
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3408/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3408/comments
|
https://api.github.com/repos/ollama/ollama/issues/3408/events
|
https://github.com/ollama/ollama/issues/3408
| 2,215,643,381
|
I_kwDOJ0Z1Ps6EEAj1
| 3,408
|
Pushing a model isn't early alpha anymore
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-29T16:17:47
| 2024-04-15T19:40:06
| 2024-04-15T19:40:06
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
Pushing a model isn't early alpha anymore
### How should we solve this?
remove 'early alpha' in the import doc
### What is the impact of not solving this?
folks will think it's early alpha
### Anything else?
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3408/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8446
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8446/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8446/comments
|
https://api.github.com/repos/ollama/ollama/issues/8446/events
|
https://github.com/ollama/ollama/pull/8446
| 2,791,628,854
|
PR_kwDOJ0Z1Ps6H8rG9
| 8,446
|
add conversion code for cohere2 arch
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-16T03:51:52
| 2025-01-18T05:54:34
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8446",
"html_url": "https://github.com/ollama/ollama/pull/8446",
"diff_url": "https://github.com/ollama/ollama/pull/8446.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8446.patch",
"merged_at": null
}
|
This change adds conversion + test routines for Cohere's command-r7b model.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8446/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7102
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7102/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7102/comments
|
https://api.github.com/repos/ollama/ollama/issues/7102/events
|
https://github.com/ollama/ollama/issues/7102
| 2,566,570,878
|
I_kwDOJ0Z1Ps6Y-sN-
| 7,102
|
VideoCore GPU support
|
{
"login": "erkinalp",
"id": 5833034,
"node_id": "MDQ6VXNlcjU4MzMwMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5833034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erkinalp",
"html_url": "https://github.com/erkinalp",
"followers_url": "https://api.github.com/users/erkinalp/followers",
"following_url": "https://api.github.com/users/erkinalp/following{/other_user}",
"gists_url": "https://api.github.com/users/erkinalp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erkinalp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erkinalp/subscriptions",
"organizations_url": "https://api.github.com/users/erkinalp/orgs",
"repos_url": "https://api.github.com/users/erkinalp/repos",
"events_url": "https://api.github.com/users/erkinalp/events{/privacy}",
"received_events_url": "https://api.github.com/users/erkinalp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-10-04T14:51:16
| 2024-10-04T14:51:16
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Required to be able to run models on the Raspberry Pi's GPU.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7102/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8016
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8016/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8016/comments
|
https://api.github.com/repos/ollama/ollama/issues/8016/events
|
https://github.com/ollama/ollama/pull/8016
| 2,727,845,381
|
PR_kwDOJ0Z1Ps6ElPgL
| 8,016
|
Add warning message when prompt doesn't include json for structured outputs
|
{
"login": "danclaytondev",
"id": 27310664,
"node_id": "MDQ6VXNlcjI3MzEwNjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/27310664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danclaytondev",
"html_url": "https://github.com/danclaytondev",
"followers_url": "https://api.github.com/users/danclaytondev/followers",
"following_url": "https://api.github.com/users/danclaytondev/following{/other_user}",
"gists_url": "https://api.github.com/users/danclaytondev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danclaytondev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danclaytondev/subscriptions",
"organizations_url": "https://api.github.com/users/danclaytondev/orgs",
"repos_url": "https://api.github.com/users/danclaytondev/repos",
"events_url": "https://api.github.com/users/danclaytondev/events{/privacy}",
"received_events_url": "https://api.github.com/users/danclaytondev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-12-09T17:51:22
| 2024-12-09T17:57:07
| 2024-12-09T17:55:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8016",
"html_url": "https://github.com/ollama/ollama/pull/8016",
"diff_url": "https://github.com/ollama/ollama/pull/8016.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8016.patch",
"merged_at": null
}
|
Ollama currently logs a warning when `json` output is requested but JSON is not mentioned in the prompt, and the docs recommend that prompts explicitly ask for JSON output.
With the new structured output feature, the warning isn't logged when a user supplies a schema, only when they ask for `"format": "json"`. I think we need the warning in both cases.
Tagging @ParthSareen because I think you have been working on this recently. :)
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8016/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/8016/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5734
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5734/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5734/comments
|
https://api.github.com/repos/ollama/ollama/issues/5734/events
|
https://github.com/ollama/ollama/pull/5734
| 2,412,269,766
|
PR_kwDOJ0Z1Ps51kqk-
| 5,734
|
server: validate template
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-17T00:13:11
| 2024-07-22T18:20:15
| 2024-07-19T22:24:29
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5734",
"html_url": "https://github.com/ollama/ollama/pull/5734",
"diff_url": "https://github.com/ollama/ollama/pull/5734.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5734.patch",
"merged_at": "2024-07-19T22:24:29"
}
|
Tries to parse the template and returns an error if parsing fails.
resolves: https://github.com/ollama/ollama/issues/5449
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5734/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4940
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4940/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4940/comments
|
https://api.github.com/repos/ollama/ollama/issues/4940/events
|
https://github.com/ollama/ollama/issues/4940
| 2,341,920,624
|
I_kwDOJ0Z1Ps6Llt9w
| 4,940
|
Can't run ollama using cmd on Windows
|
{
"login": "ziarmandhost",
"id": 30569343,
"node_id": "MDQ6VXNlcjMwNTY5MzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/30569343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ziarmandhost",
"html_url": "https://github.com/ziarmandhost",
"followers_url": "https://api.github.com/users/ziarmandhost/followers",
"following_url": "https://api.github.com/users/ziarmandhost/following{/other_user}",
"gists_url": "https://api.github.com/users/ziarmandhost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ziarmandhost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ziarmandhost/subscriptions",
"organizations_url": "https://api.github.com/users/ziarmandhost/orgs",
"repos_url": "https://api.github.com/users/ziarmandhost/repos",
"events_url": "https://api.github.com/users/ziarmandhost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ziarmandhost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-09T00:17:26
| 2024-06-09T15:27:37
| 2024-06-09T15:27:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I can't run ollama from the Windows 11 Terminal app:

But environment variable exists in "System variables":

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "ziarmandhost",
"id": 30569343,
"node_id": "MDQ6VXNlcjMwNTY5MzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/30569343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ziarmandhost",
"html_url": "https://github.com/ziarmandhost",
"followers_url": "https://api.github.com/users/ziarmandhost/followers",
"following_url": "https://api.github.com/users/ziarmandhost/following{/other_user}",
"gists_url": "https://api.github.com/users/ziarmandhost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ziarmandhost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ziarmandhost/subscriptions",
"organizations_url": "https://api.github.com/users/ziarmandhost/orgs",
"repos_url": "https://api.github.com/users/ziarmandhost/repos",
"events_url": "https://api.github.com/users/ziarmandhost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ziarmandhost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4940/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3277
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3277/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3277/comments
|
https://api.github.com/repos/ollama/ollama/issues/3277/events
|
https://github.com/ollama/ollama/issues/3277
| 2,198,917,431
|
I_kwDOJ0Z1Ps6DENE3
| 3,277
|
Can not build ollama on windows 11
|
{
"login": "linkerlin",
"id": 37062,
"node_id": "MDQ6VXNlcjM3MDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/37062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/linkerlin",
"html_url": "https://github.com/linkerlin",
"followers_url": "https://api.github.com/users/linkerlin/followers",
"following_url": "https://api.github.com/users/linkerlin/following{/other_user}",
"gists_url": "https://api.github.com/users/linkerlin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/linkerlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/linkerlin/subscriptions",
"organizations_url": "https://api.github.com/users/linkerlin/orgs",
"repos_url": "https://api.github.com/users/linkerlin/repos",
"events_url": "https://api.github.com/users/linkerlin/events{/privacy}",
"received_events_url": "https://api.github.com/users/linkerlin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-03-21T01:38:22
| 2024-03-21T10:59:38
| 2024-03-21T10:59:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
D:\gos\ollama>go build .
# github.com/jmorganca/ollama/llm
llm\llm.go:52:17: undefined: gpu.CheckVRAM
llm\llm.go:68:14: undefined: gpu.GetGPUInfo
llm\llm.go:166:15: undefined: newDynExtServer
```
### What did you expect to see?
Successful build
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3277/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5624
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5624/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5624/comments
|
https://api.github.com/repos/ollama/ollama/issues/5624/events
|
https://github.com/ollama/ollama/issues/5624
| 2,402,020,264
|
I_kwDOJ0Z1Ps6PK-uo
| 5,624
|
Make full use of all GPU resources for inference
|
{
"login": "HeroSong666",
"id": 142960235,
"node_id": "U_kgDOCIVmaw",
"avatar_url": "https://avatars.githubusercontent.com/u/142960235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HeroSong666",
"html_url": "https://github.com/HeroSong666",
"followers_url": "https://api.github.com/users/HeroSong666/followers",
"following_url": "https://api.github.com/users/HeroSong666/following{/other_user}",
"gists_url": "https://api.github.com/users/HeroSong666/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HeroSong666/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HeroSong666/subscriptions",
"organizations_url": "https://api.github.com/users/HeroSong666/orgs",
"repos_url": "https://api.github.com/users/HeroSong666/repos",
"events_url": "https://api.github.com/users/HeroSong666/events{/privacy}",
"received_events_url": "https://api.github.com/users/HeroSong666/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-07-11T01:25:13
| 2024-09-05T23:04:06
| 2024-09-05T23:04:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I used 4 A30 GPUs to run inference with the qwen2-72b model, but even at peak times each card's utilization never exceeded 35%. Inference speed is also relatively slow.
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.0
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5624/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8233
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8233/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8233/comments
|
https://api.github.com/repos/ollama/ollama/issues/8233/events
|
https://github.com/ollama/ollama/issues/8233
| 2,758,178,036
|
I_kwDOJ0Z1Ps6kZnT0
| 8,233
|
version aware linux upgrade
|
{
"login": "lamyergeier",
"id": 42092626,
"node_id": "MDQ6VXNlcjQyMDkyNjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/42092626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lamyergeier",
"html_url": "https://github.com/lamyergeier",
"followers_url": "https://api.github.com/users/lamyergeier/followers",
"following_url": "https://api.github.com/users/lamyergeier/following{/other_user}",
"gists_url": "https://api.github.com/users/lamyergeier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lamyergeier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lamyergeier/subscriptions",
"organizations_url": "https://api.github.com/users/lamyergeier/orgs",
"repos_url": "https://api.github.com/users/lamyergeier/repos",
"events_url": "https://api.github.com/users/lamyergeier/events{/privacy}",
"received_events_url": "https://api.github.com/users/lamyergeier/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6678628138,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjhPHKg",
"url": "https://api.github.com/repos/ollama/ollama/labels/install",
"name": "install",
"color": "E0B88D",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-12-24T18:05:51
| 2025-01-07T16:58:11
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The ollama install command, `curl -fsSL https://ollama.com/install.sh | sh`, removes and reinstalls Ollama even when there is no new version.
The script should not remove the current installation if there is no version update.
### OS
Linux
### GPU
Intel
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8233/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8233/timeline
| null |
reopened
| false
|
https://api.github.com/repos/ollama/ollama/issues/2990
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2990/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2990/comments
|
https://api.github.com/repos/ollama/ollama/issues/2990/events
|
https://github.com/ollama/ollama/pull/2990
| 2,174,616,537
|
PR_kwDOJ0Z1Ps5pAbJL
| 2,990
|
fix: default terminal width, height
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-07T19:29:12
| 2024-03-08T23:20:55
| 2024-03-08T23:20:54
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2990",
"html_url": "https://github.com/ollama/ollama/pull/2990",
"diff_url": "https://github.com/ollama/ollama/pull/2990.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2990.patch",
"merged_at": "2024-03-08T23:20:54"
}
|
resolves #2970
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2990/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4018
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4018/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4018/comments
|
https://api.github.com/repos/ollama/ollama/issues/4018/events
|
https://github.com/ollama/ollama/issues/4018
| 2,268,011,330
|
I_kwDOJ0Z1Ps6HLxtC
| 4,018
|
API truncates parentheses before stop token
|
{
"login": "IgorAlexey",
"id": 18470725,
"node_id": "MDQ6VXNlcjE4NDcwNzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/18470725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IgorAlexey",
"html_url": "https://github.com/IgorAlexey",
"followers_url": "https://api.github.com/users/IgorAlexey/followers",
"following_url": "https://api.github.com/users/IgorAlexey/following{/other_user}",
"gists_url": "https://api.github.com/users/IgorAlexey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IgorAlexey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IgorAlexey/subscriptions",
"organizations_url": "https://api.github.com/users/IgorAlexey/orgs",
"repos_url": "https://api.github.com/users/IgorAlexey/repos",
"events_url": "https://api.github.com/users/IgorAlexey/events{/privacy}",
"received_events_url": "https://api.github.com/users/IgorAlexey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-29T02:39:07
| 2024-07-17T00:44:26
| 2024-07-17T00:44:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The generate API truncates closing parentheses when they appear before a stop token, seemingly at random. The issue is reproducible across all models I've tested (Phi3, all Llama 3 versions, WizardLM2), so it looks like an API limitation?
sample text
```
Everyone uses `:)` at the end of their messages
|User|Message|
|-|-|
|User|Hello there! :)|
|UserB|
```
```bash
curl http://localhost:11434/api/generate -d '{"model": "llama3:instruct", "prompt": "Everyone uses `:)` at the end of their messages\\n|User|Message|\\n|-|-|\\n|User|Hello there! :)|\\n|UserB|", "raw": true, "stream": false, "options": {"stop": ["|"]}}'
```
_P.S. If you don't see it the first time, run the command a couple more times and it might happen_
**Expected**: The response should include the closing parenthesis in the generated text, like so: `... :)`
**Actual**: The response truncates the closing parenthesis, resulting in `... :`
This doesn't happen if the token that follows the `)` isn't a stop token, but if it is, the `)` may be dropped along with it.
It also happens with quotes and `<>`.
**Linux/Nvidia/Intel**
Ollama **0.1.32**
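The trimming behavior the report expects can be sketched as a small stand-alone function. This is an illustration of the expected semantics only, not Ollama's actual stop-sequence implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// truncateAtStop returns text up to (but not including) the earliest
// occurrence of any stop sequence. For "Hello there! :)|" with stop
// "|", the ')' immediately before the stop must be preserved — which
// is exactly what the reported bug violates.
func truncateAtStop(text string, stops []string) string {
	cut := len(text)
	for _, stop := range stops {
		if idx := strings.Index(text, stop); idx != -1 && idx < cut {
			cut = idx
		}
	}
	return text[:cut]
}

func main() {
	fmt.Println(truncateAtStop("Hello there! :)|UserC|", []string{"|"}))
}
```

In a streaming server the same logic has to buffer partial stop-sequence matches across token boundaries, which is where an off-by-one can eat the preceding character.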
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4018/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4018/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6247
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6247/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6247/comments
|
https://api.github.com/repos/ollama/ollama/issues/6247/events
|
https://github.com/ollama/ollama/pull/6247
| 2,454,585,561
|
PR_kwDOJ0Z1Ps53xa8Q
| 6,247
|
Store layers inside manifests consistently as values.
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-08T00:31:42
| 2024-08-08T17:46:46
| 2024-08-08T17:46:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6247",
"html_url": "https://github.com/ollama/ollama/pull/6247",
"diff_url": "https://github.com/ollama/ollama/pull/6247.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6247.patch",
"merged_at": "2024-08-08T17:46:43"
}
|
This consistently uses layers as values (instead of pointers) inside the manifest, following the change that made the config be passed by value. The interface is clearer, and it reduces the need to dereference and take addresses in some places.
I'm not sure if the changes in layer.go are considered canonical Go, so I would appreciate some feedback there. In particular, the New functions return a layer by reference but the receiver functions take a pointer.
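The layers-as-values shape can be sketched as follows; the field and type names here are illustrative, not the actual definitions in ollama's `server` package:

```go
package main

import "fmt"

// Layer is a sketch of a manifest layer; with value semantics,
// callers never need to dereference or take addresses of elements.
type Layer struct {
	Digest string
	Size   int64
}

// Manifest stores the config and layers as values, as the PR proposes.
type Manifest struct {
	Config Layer
	Layers []Layer
}

// TotalSize uses a value receiver, consistent with the value-based
// interface: it only reads the manifest and copies are cheap.
func (m Manifest) TotalSize() int64 {
	total := m.Config.Size
	for _, l := range m.Layers {
		total += l.Size
	}
	return total
}

func main() {
	m := Manifest{
		Config: Layer{Digest: "sha256:cfg", Size: 10},
		Layers: []Layer{{Digest: "sha256:a", Size: 100}, {Digest: "sha256:b", Size: 200}},
	}
	fmt.Println(m.TotalSize()) // prints 310
}
```

The usual Go guideline is to keep a type's method set consistent: either all value receivers or all pointer receivers, with pointers reserved for methods that mutate or for large structs.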
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6247/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6288
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6288/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6288/comments
|
https://api.github.com/repos/ollama/ollama/issues/6288/events
|
https://github.com/ollama/ollama/issues/6288
| 2,458,320,550
|
I_kwDOJ0Z1Ps6Shv6m
| 6,288
|
OLLAMA_LLM_LIBRARY=cpu is ignored: ErrorOutOfDeviceMemory when zero layers are offloaded to GPU through Vulkan
|
{
"login": "yurivict",
"id": 271906,
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yurivict",
"html_url": "https://github.com/yurivict",
"followers_url": "https://api.github.com/users/yurivict/followers",
"following_url": "https://api.github.com/users/yurivict/following{/other_user}",
"gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yurivict/subscriptions",
"organizations_url": "https://api.github.com/users/yurivict/orgs",
"repos_url": "https://api.github.com/users/yurivict/repos",
"events_url": "https://api.github.com/users/yurivict/events{/privacy}",
"received_events_url": "https://api.github.com/users/yurivict/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-08-09T17:00:14
| 2024-08-13T05:42:01
| 2024-08-13T05:42:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The ollama server is started in CPU-only mode: ```OLLAMA_LLM_LIBRARY=cpu ollama start```
When I attempt to run the gemma model, it still tries to use Vulkan and fails:
```
2024/08/09 09:58:04 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY:cpu OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/yuri/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-09T09:58:04.045-07:00 level=INFO source=images.go:781 msg="total blobs: 47"
time=2024-08-09T09:58:04.047-07:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullModelHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateModelHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushModelHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyModelHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteModelHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).ProcessHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-08-09T09:58:04.049-07:00 level=INFO source=routes.go:1155 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-08-09T09:58:04.053-07:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1197517005/runners
time=2024-08-09T09:58:04.053-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/bsd/x86_64/cpu/bin/ollama_llama_server.gz
time=2024-08-09T09:58:04.053-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/bsd/x86_64/cpu_avx/bin/ollama_llama_server.gz
time=2024-08-09T09:58:04.053-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/bsd/x86_64/cpu_avx2/bin/ollama_llama_server.gz
time=2024-08-09T09:58:04.053-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=vulkan file=build/bsd/x86_64/vulkan/bin/ollama_llama_server.gz
time=2024-08-09T09:58:04.053-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=vulkan file=build/bsd/x86_64/vulkan/bin/vulkan-shaders-gen.gz
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu/ollama_llama_server
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx/ollama_llama_server
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx2/ollama_llama_server
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/vulkan/ollama_llama_server
time=2024-08-09T09:58:04.166-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 vulkan]"
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-08-09T09:58:04.248-07:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=vulkan compute="" driver=0.0 name="" total="6.2 GiB" available="6.2 GiB"
[GIN] 2024/08/09 - 09:58:06 | 200 | 43.926µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/09 - 09:58:06 | 200 | 64.335737ms | 127.0.0.1 | POST "/api/show"
time=2024-08-09T09:58:06.863-07:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x146ea00 gpu_count=1
time=2024-08-09T09:58:06.921-07:00 level=DEBUG source=sched.go:219 msg="loading first model" model=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:06.921-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[6.2 GiB]"
time=2024-08-09T09:58:06.921-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[6.2 GiB]"
time=2024-08-09T09:58:06.922-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[6.2 GiB]"
time=2024-08-09T09:58:06.922-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[6.2 GiB]"
time=2024-08-09T09:58:06.923-07:00 level=DEBUG source=server.go:100 msg="system memory" total="24.0 GiB" free="0 B" free_swap="0 B"
time=2024-08-09T09:58:06.923-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[6.2 GiB]"
time=2024-08-09T09:58:06.924-07:00 level=INFO source=memory.go:309 msg="offload to vulkan" layers.requested=-1 layers.model=29 layers.offload=25 layers.split="" memory.available="[6.2 GiB]" memory.required.full="7.3 GiB" memory.required.partial="6.2 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.9 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="615.2 MiB" memory.graph.full="506.0 MiB" memory.graph.partial="1.1 GiB"
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx2/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/vulkan/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx2/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/vulkan/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=INFO source=server.go:172 msg="user override" OLLAMA_LLM_LIBRARY=cpu path=/tmp/ollama1197517005/runners/cpu
time=2024-08-09T09:58:06.924-07:00 level=INFO source=server.go:390 msg="starting llama server" cmd="/tmp/ollama1197517005/runners/cpu/ollama_llama_server --model /home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77 --ctx-size 2048 --batch-size 512 --embedding --log-disable --verbose --parallel 1 --port 10149"
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=server.go:407 msg=subprocess environment="[PATH=/home/yuri/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin LD_LIBRARY_PATH=/tmp/ollama1197517005/runners/cpu:/tmp/ollama1197517005/runners]"
time=2024-08-09T09:58:06.928-07:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-09T09:58:06.928-07:00 level=INFO source=server.go:590 msg="waiting for llama runner to start responding"
time=2024-08-09T09:58:06.929-07:00 level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=673836 commit="b5bb445feab3" tid="0x2678fb612000" timestamp=1723222686
INFO [main] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x2678fb612000" timestamp=1723222686 total_threads=8
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="10149" tid="0x2678fb612000" timestamp=1723222686
llama_model_loader: loaded meta data with 24 key-value pairs and 254 tensors from /home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma
llama_model_loader: - kv 1: general.name str = gemma-1.1-7b-it
llama_model_loader: - kv 2: gemma.context_length u32 = 8192
llama_model_loader: - kv 3: gemma.embedding_length u32 = 3072
llama_model_loader: - kv 4: gemma.block_count u32 = 28
llama_model_loader: - kv 5: gemma.feed_forward_length u32 = 24576
llama_model_loader: - kv 6: gemma.attention.head_count u32 = 16
llama_model_loader: - kv 7: gemma.attention.head_count_kv u32 = 16
llama_model_loader: - kv 8: gemma.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: gemma.attention.key_length u32 = 256
llama_model_loader: - kv 10: gemma.attention.value_length u32 = 256
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000...
time=2024-08-09T09:58:07.220-07:00 level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
lla
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_rot = 256
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 24576
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type
llm_load_print_meta: PAD token = 0 '<pad>'
llm_load_print_meta: LF token = 227 '<0x0A>'
llm_load_print_meta: EOT token = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
ggml_vulkan: Found 1 Vulkan devices:
Vulkan0: NVIDIA GeForce RTX 2060 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32
llm_load_tensors: ggml ctx size = 0.12 MiB
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/29 layers to GPU
llm_load_tensors: CPU buffer size = 4773.90 MiB
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
time=2024-08-09T09:58:08.114-07:00 level=DEBUG source=server.go:635 msg="model load progress 1.00"
time=2024-08-09T09:58:08.366-07:00 level=DEBUG source=server.go:638 msg="model load completed, waiting for server to become available" status="llm server loading model"
ggml_vulkan: Failed to allocate pinned memory.
ggml_vulkan: vk::Device::allocateMemory: ErrorOutOfDeviceMemory
llama_kv_cache_init: CPU KV buffer size = 896.00 MiB
llama_new_context_with_model: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB
llama_new_context_with_model: Vulkan_Host output buffer size = 0.99 MiB
ggml_vulkan: Device memory allocation of size 1175699456 failed.
ggml_vulkan: vk::Device::allocateMemory: ErrorOutOfDeviceMemory
ggml_gallocr_reserve_n: failed to allocate NVIDIA GeForce RTX 2060 buffer of size 1175699456
llama_new_context_with_model: failed to allocate compute buffers
llama_init_from_gpt_params: error: failed to create context with model '/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77'
ERROR [load_model] unable to load model | model="/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77" tid="0x2678fb612000" timestamp=1723222689
time=2024-08-09T09:58:09.727-07:00 level=DEBUG source=server.go:430 msg="llama runner terminated" error="signal: abort trap"
time=2024-08-09T09:58:09.787-07:00 level=ERROR source=sched.go:451 msg="error loading llama server" error="llama runner process has terminated: error:failed to create context with model '/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77'"
time=2024-08-09T09:58:09.787-07:00 level=DEBUG source=sched.go:454 msg="triggering expiration for failed load" model=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:09.787-07:00 level=DEBUG source=sched.go:355 msg="runner expired event received" modelPath=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:09.787-07:00 level=DEBUG source=sched.go:371 msg="got lock to unload" modelPath=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
[GIN] 2024/08/09 - 09:58:09 | 500 | 3.0697853s | 127.0.0.1 | POST "/api/chat"
time=2024-08-09T09:58:09.875-07:00 level=DEBUG source=server.go:1048 msg="stopping llama server"
time=2024-08-09T09:58:09.875-07:00 level=DEBUG source=sched.go:376 msg="runner released" modelPath=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:14.877-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.090068886 model=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:14.877-07:00 level=DEBUG source=sched.go:380 msg="sending an unloaded event" modelPath=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:14.878-07:00 level=DEBUG source=sched.go:303 msg="ignoring unload event with no pending requests"
time=2024-08-09T09:58:15.130-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.342633342 model=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:15.381-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.593264672 model=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:59:07.125-07:00 level=DEBUG source=sched.go:119 msg="shutting down scheduler pending loop"
time=2024-08-09T09:59:07.125-07:00 level=DEBUG source=sched.go:313 msg="shutting down scheduler completed loop"
time=2024-08-09T09:59:07.125-07:00 level=DEBUG source=assets.go:112 msg="cleaning up" dir=/tmp/ollama1197517005
```
It appears to still load the model into VRAM even when 0 layers are offloaded.
Version: 0.3.4
FreeBSD 14.1
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.4
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6288/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3213
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3213/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3213/comments
|
https://api.github.com/repos/ollama/ollama/issues/3213/events
|
https://github.com/ollama/ollama/issues/3213
| 2,191,283,603
|
I_kwDOJ0Z1Ps6CnFWT
| 3,213
|
open /home/house365ai/xxm/model/Qwen1.5-14B-Chat/tokenizer.model:
|
{
"login": "njhouse365",
"id": 130344095,
"node_id": "U_kgDOB8Tknw",
"avatar_url": "https://avatars.githubusercontent.com/u/130344095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njhouse365",
"html_url": "https://github.com/njhouse365",
"followers_url": "https://api.github.com/users/njhouse365/followers",
"following_url": "https://api.github.com/users/njhouse365/following{/other_user}",
"gists_url": "https://api.github.com/users/njhouse365/gists{/gist_id}",
"starred_url": "https://api.github.com/users/njhouse365/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njhouse365/subscriptions",
"organizations_url": "https://api.github.com/users/njhouse365/orgs",
"repos_url": "https://api.github.com/users/njhouse365/repos",
"events_url": "https://api.github.com/users/njhouse365/events{/privacy}",
"received_events_url": "https://api.github.com/users/njhouse365/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-18T05:44:18
| 2024-03-19T00:49:43
| 2024-03-18T08:36:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
my Modelfile
FROM /home/house365ai/xxm/model/Qwen1.5-14B-Chat
ollama create Qwen1.5-14B-Chat -f Modelfile
How do I solve this?
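For context: at the time of this report, `ollama create` from a raw safetensors directory expected a SentencePiece `tokenizer.model`, which Qwen1.5 repositories do not ship. A commonly suggested workaround was to convert the weights to GGUF first (for example with llama.cpp's conversion script) and point `FROM` at the resulting file. The path below is a hypothetical sketch, not a file from this report:

```
# Modelfile sketch: FROM a converted GGUF file instead of the
# raw Qwen1.5 safetensors directory (hypothetical path).
FROM /home/house365ai/xxm/model/qwen1.5-14b-chat-q4_k_m.gguf
```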
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3213/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8341
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8341/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8341/comments
|
https://api.github.com/repos/ollama/ollama/issues/8341/events
|
https://github.com/ollama/ollama/issues/8341
| 2,773,670,616
|
I_kwDOJ0Z1Ps6lUtrY
| 8,341
|
[feature] start ollama automatically on startup
|
{
"login": "remco-pc",
"id": 8077908,
"node_id": "MDQ6VXNlcjgwNzc5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8077908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remco-pc",
"html_url": "https://github.com/remco-pc",
"followers_url": "https://api.github.com/users/remco-pc/followers",
"following_url": "https://api.github.com/users/remco-pc/following{/other_user}",
"gists_url": "https://api.github.com/users/remco-pc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remco-pc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remco-pc/subscriptions",
"organizations_url": "https://api.github.com/users/remco-pc/orgs",
"repos_url": "https://api.github.com/users/remco-pc/repos",
"events_url": "https://api.github.com/users/remco-pc/events{/privacy}",
"received_events_url": "https://api.github.com/users/remco-pc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-07T20:23:57
| 2025-01-07T20:23:57
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I've been trying to start `ollama serve` automatically on startup (from a Docker PHP image's init), but it won't start as a background process with `&`. I then tried putting it in a script guarded by a lock file and running that from cron: cron starts my script, but the script never actually starts `ollama serve`, which it should!
Running it in bash works like a charm, but automating this is currently a pain in the ...
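For what it's worth, a bare `&` is often not enough under init or cron, because the child stays attached to stdio that goes away when the init shell exits. A minimal sketch of detaching properly — `sleep 5` stands in for `ollama serve` so the sketch runs anywhere, and the `/tmp` paths are illustrative:

```shell
#!/bin/sh
# Minimal sketch of detaching a long-running server from the launching
# shell, the way an init script or container entrypoint would.
# "sleep 5" stands in for "ollama serve"; swap in the real command.
start_in_background() {
    cmd=$1; logfile=$2; pidfile=$3
    # nohup plus full output redirection is what actually detaches the
    # process; a bare "ollama serve &" dies when the init shell exits.
    nohup sh -c "$cmd" >"$logfile" 2>&1 &
    echo $! >"$pidfile"
}

start_in_background "sleep 5" /tmp/server.log /tmp/server.pid
kill -0 "$(cat /tmp/server.pid)" && echo "server is running"
```

With the real command, check `/tmp/server.log` afterwards to confirm the server bound its port.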
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8341/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1273
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1273/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1273/comments
|
https://api.github.com/repos/ollama/ollama/issues/1273/events
|
https://github.com/ollama/ollama/pull/1273
| 2,010,474,590
|
PR_kwDOJ0Z1Ps5gWNX7
| 1,273
|
added llama_runner_timeout ModelFile parameter for longer timeouts
|
{
"login": "bigattichouse",
"id": 67535,
"node_id": "MDQ6VXNlcjY3NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/67535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bigattichouse",
"html_url": "https://github.com/bigattichouse",
"followers_url": "https://api.github.com/users/bigattichouse/followers",
"following_url": "https://api.github.com/users/bigattichouse/following{/other_user}",
"gists_url": "https://api.github.com/users/bigattichouse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bigattichouse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bigattichouse/subscriptions",
"organizations_url": "https://api.github.com/users/bigattichouse/orgs",
"repos_url": "https://api.github.com/users/bigattichouse/repos",
"events_url": "https://api.github.com/users/bigattichouse/events{/privacy}",
"received_events_url": "https://api.github.com/users/bigattichouse/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-11-25T04:54:03
| 2023-11-25T05:46:35
| 2023-11-25T05:45:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1273",
"html_url": "https://github.com/ollama/ollama/pull/1273",
"diff_url": "https://github.com/ollama/ollama/pull/1273.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1273.patch",
"merged_at": null
}
|
Allows the user to choose longer or shorter timeouts in the Modelfile for how long the server will wait for the llama runner. Created this patch in response to the 'timed out waiting for llama runner to start' error.
Defaults to the 3 minutes currently hard-coded in the main branch.
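For illustration, a Modelfile using the proposed parameter might look like the following. The parameter name comes from this patch; treating the value as seconds is an assumption on my part, and `llama2` is just a placeholder base model:

```
FROM llama2
PARAMETER llama_runner_timeout 600
```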
|
{
"login": "bigattichouse",
"id": 67535,
"node_id": "MDQ6VXNlcjY3NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/67535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bigattichouse",
"html_url": "https://github.com/bigattichouse",
"followers_url": "https://api.github.com/users/bigattichouse/followers",
"following_url": "https://api.github.com/users/bigattichouse/following{/other_user}",
"gists_url": "https://api.github.com/users/bigattichouse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bigattichouse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bigattichouse/subscriptions",
"organizations_url": "https://api.github.com/users/bigattichouse/orgs",
"repos_url": "https://api.github.com/users/bigattichouse/repos",
"events_url": "https://api.github.com/users/bigattichouse/events{/privacy}",
"received_events_url": "https://api.github.com/users/bigattichouse/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1273/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1273/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7005
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7005/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7005/comments
|
https://api.github.com/repos/ollama/ollama/issues/7005/events
|
https://github.com/ollama/ollama/issues/7005
| 2,553,392,563
|
I_kwDOJ0Z1Ps6YMa2z
| 7,005
|
Docker not use GPU after idle
|
{
"login": "phukrit7171",
"id": 64061607,
"node_id": "MDQ6VXNlcjY0MDYxNjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/64061607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phukrit7171",
"html_url": "https://github.com/phukrit7171",
"followers_url": "https://api.github.com/users/phukrit7171/followers",
"following_url": "https://api.github.com/users/phukrit7171/following{/other_user}",
"gists_url": "https://api.github.com/users/phukrit7171/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phukrit7171/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phukrit7171/subscriptions",
"organizations_url": "https://api.github.com/users/phukrit7171/orgs",
"repos_url": "https://api.github.com/users/phukrit7171/repos",
"events_url": "https://api.github.com/users/phukrit7171/events{/privacy}",
"received_events_url": "https://api.github.com/users/phukrit7171/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-09-27T17:17:00
| 2024-09-30T15:46:21
| 2024-09-30T15:46:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After the model is cleared from the graphics card's memory during idle, the next request does not load it back onto the GPU; it runs on the CPU instead, which slows it down a lot. You have to `docker stop ollama` and `docker start ollama` to get it running on the GPU again.
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7005/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5141
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5141/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5141/comments
|
https://api.github.com/repos/ollama/ollama/issues/5141/events
|
https://github.com/ollama/ollama/issues/5141
| 2,362,404,205
|
I_kwDOJ0Z1Ps6Mz21t
| 5,141
|
Make "pull" support more than one model
|
{
"login": "Speedway1",
"id": 100301611,
"node_id": "U_kgDOBfp7Kw",
"avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Speedway1",
"html_url": "https://github.com/Speedway1",
"followers_url": "https://api.github.com/users/Speedway1/followers",
"following_url": "https://api.github.com/users/Speedway1/following{/other_user}",
"gists_url": "https://api.github.com/users/Speedway1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Speedway1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Speedway1/subscriptions",
"organizations_url": "https://api.github.com/users/Speedway1/orgs",
"repos_url": "https://api.github.com/users/Speedway1/repos",
"events_url": "https://api.github.com/users/Speedway1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Speedway1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-19T13:33:25
| 2024-09-24T15:42:26
| 2024-09-24T15:42:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
"ollama pull " currently only supports one parameter. However when setting up a new server, or when do a bulk update of LLMs, we need to do a batch of LLM pulls.
It would be very handy for the command to support more than one model as parameter.
E.g.
ollama pull deepseek-coder-v2 phi3:14b codestral
As opposed to:
for i in deepseek-coder-v2 phi3:14b codestral
do
ollama pull $i
done
It also means the job can be given a nohup and booted into the background; for longer downloads it can simply run as a background task until all the models are pulled.
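Until `pull` accepts multiple arguments, a tiny wrapper covers the batch case. In this sketch the `ollama` CLI is stubbed with a shell function so the example is self-contained; drop the stub to use the real binary:

```shell
#!/bin/sh
# Sketch of a multi-model pull wrapper. "ollama" is stubbed here so the
# example runs anywhere; remove the stub to use the actual CLI.
ollama() { echo "stub: ollama $*"; }

pull_all() {
    for model in "$@"; do
        ollama pull "$model" || echo "failed: $model" >&2
    done
}

pull_all deepseek-coder-v2 phi3:14b codestral
```

Run under `nohup pull_all ... &` and the whole batch downloads unattended.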
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5141/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5141/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6974
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6974/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6974/comments
|
https://api.github.com/repos/ollama/ollama/issues/6974/events
|
https://github.com/ollama/ollama/issues/6974
| 2,549,682,283
|
I_kwDOJ0Z1Ps6X-RBr
| 6,974
|
Ollama on Windows occupied all available ports when downloading
|
{
"login": "TheStarAlight",
"id": 105955974,
"node_id": "U_kgDOBlDChg",
"avatar_url": "https://avatars.githubusercontent.com/u/105955974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheStarAlight",
"html_url": "https://github.com/TheStarAlight",
"followers_url": "https://api.github.com/users/TheStarAlight/followers",
"following_url": "https://api.github.com/users/TheStarAlight/following{/other_user}",
"gists_url": "https://api.github.com/users/TheStarAlight/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheStarAlight/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheStarAlight/subscriptions",
"organizations_url": "https://api.github.com/users/TheStarAlight/orgs",
"repos_url": "https://api.github.com/users/TheStarAlight/repos",
"events_url": "https://api.github.com/users/TheStarAlight/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheStarAlight/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-09-26T06:54:51
| 2024-10-24T11:46:05
| 2024-09-26T19:00:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I try to download a model with Ollama for Windows, after a while my browsers cannot reach any website ("connection refused"), and the download also fails (after the first part of the model finishes, the next part cannot start and reports an error).
The log `~/AppData/Local/Ollama/server.log` shows `Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.`
`netstat` also shows ports all the way up to 65535 occupied by ollama.
Is there any way to solve this?
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
Win 0.3.12
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6974/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2298
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2298/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2298/comments
|
https://api.github.com/repos/ollama/ollama/issues/2298/events
|
https://github.com/ollama/ollama/pull/2298
| 2,111,247,173
|
PR_kwDOJ0Z1Ps5loeAB
| 2,298
|
structured debug prompt
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-01T00:47:59
| 2024-02-01T21:16:50
| 2024-02-01T21:16:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2298",
"html_url": "https://github.com/ollama/ollama/pull/2298",
"diff_url": "https://github.com/ollama/ollama/pull/2298.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2298.patch",
"merged_at": "2024-02-01T21:16:49"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2298/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2572
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2572/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2572/comments
|
https://api.github.com/repos/ollama/ollama/issues/2572/events
|
https://github.com/ollama/ollama/issues/2572
| 2,140,911,407
|
I_kwDOJ0Z1Ps5_m7cv
| 2,572
|
PrivateGPT example is broken for me
|
{
"login": "levicki",
"id": 16415478,
"node_id": "MDQ6VXNlcjE2NDE1NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/16415478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/levicki",
"html_url": "https://github.com/levicki",
"followers_url": "https://api.github.com/users/levicki/followers",
"following_url": "https://api.github.com/users/levicki/following{/other_user}",
"gists_url": "https://api.github.com/users/levicki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/levicki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/levicki/subscriptions",
"organizations_url": "https://api.github.com/users/levicki/orgs",
"repos_url": "https://api.github.com/users/levicki/repos",
"events_url": "https://api.github.com/users/levicki/events{/privacy}",
"received_events_url": "https://api.github.com/users/levicki/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-02-18T10:28:12
| 2024-09-12T01:57:03
| 2024-09-12T01:57:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
After installing it as per the provided instructions and running `ingest.py` on a folder with 19 PDF documents, it crashes with the following stack trace:
```
Creating new vectorstore
Loading documents from source_documents
Loading new documents: 100%|████████████████████| 19/19 [00:02<00:00, 7.12it/s]
Loaded 1695 new documents from source_documents
Split into 8065 chunks of text (max. 500 tokens each)
Creating embeddings. May take some minutes...
Traceback (most recent call last):
File "c:\PROGRAMS\PRIVATEGPT\ingest.py", line 161, in <module>
main()
File "c:\PROGRAMS\PRIVATEGPT\ingest.py", line 153, in main
db = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\PROGRAMS\PRIVATEGPT\venv\Lib\site-packages\langchain\vectorstores\chroma.py", line 612, in from_documents
return cls.from_texts(
^^^^^^^^^^^^^^^
File "c:\PROGRAMS\PRIVATEGPT\venv\Lib\site-packages\langchain\vectorstores\chroma.py", line 576, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "c:\PROGRAMS\PRIVATEGPT\venv\Lib\site-packages\langchain\vectorstores\chroma.py", line 222, in add_texts
raise e
File "c:\PROGRAMS\PRIVATEGPT\venv\Lib\site-packages\langchain\vectorstores\chroma.py", line 208, in add_texts
self._collection.upsert(
File "c:\PROGRAMS\PRIVATEGPT\venv\Lib\site-packages\chromadb\api\models\Collection.py", line 298, in upsert
self._client._upsert(
File "c:\PROGRAMS\PRIVATEGPT\venv\Lib\site-packages\chromadb\api\segment.py", line 290, in _upsert
self._producer.submit_embeddings(coll["topic"], records_to_submit)
File "c:\PROGRAMS\PRIVATEGPT\venv\Lib\site-packages\chromadb\db\mixins\embeddings_queue.py", line 127, in submit_embeddings
raise ValueError(
ValueError:
Cannot submit more than 5,461 embeddings at once.
Please submit your embeddings in batches of size
5,461 or less.
```
I have no idea where it got that "1695 new documents" idea from, since the folder only contains 19 PDF files (as the loading line shows).
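The limit in the error message itself suggests a workaround: split the chunks into batches no larger than 5,461 before handing them to Chroma. A minimal sketch — not PrivateGPT's actual code; `add_batch` stands in for whatever ultimately calls `db.add_texts` / `collection.upsert`:

```python
# Minimal sketch: submit embeddings to Chroma in batches no larger than
# its limit (5,461 per the traceback above), instead of all 8,065
# chunks at once. "add_batch" stands in for the real vector-store call.
def batched(items, batch_size):
    """Yield successive slices of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

MAX_BATCH = 5461

def ingest(texts, add_batch):
    for batch in batched(texts, MAX_BATCH):
        add_batch(batch)

# Example with a recording stub in place of the real vector store:
sizes = []
ingest(list(range(8065)), lambda b: sizes.append(len(b)))
print(sizes)  # [5461, 2604]
```

In the real script this would wrap the single `Chroma.from_documents(texts, ...)` call: create the store once, then feed it batch by batch.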
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2572/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6523
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6523/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6523/comments
|
https://api.github.com/repos/ollama/ollama/issues/6523/events
|
https://github.com/ollama/ollama/pull/6523
| 2,488,103,562
|
PR_kwDOJ0Z1Ps55gawM
| 6,523
|
llama: clean up sync
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-27T02:03:02
| 2024-08-30T00:30:13
| 2024-08-30T00:30:11
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6523",
"html_url": "https://github.com/ollama/ollama/pull/6523",
"diff_url": "https://github.com/ollama/ollama/pull/6523.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6523.patch",
"merged_at": "2024-08-30T00:30:11"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6523/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8585
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8585/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8585/comments
|
https://api.github.com/repos/ollama/ollama/issues/8585/events
|
https://github.com/ollama/ollama/issues/8585
| 2,811,203,849
|
I_kwDOJ0Z1Ps6nj5EJ
| 8,585
|
Error: neither ‘from’ or ‘files’ was specified when creating a model
|
{
"login": "latent-variable",
"id": 22504489,
"node_id": "MDQ6VXNlcjIyNTA0NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/22504489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/latent-variable",
"html_url": "https://github.com/latent-variable",
"followers_url": "https://api.github.com/users/latent-variable/followers",
"following_url": "https://api.github.com/users/latent-variable/following{/other_user}",
"gists_url": "https://api.github.com/users/latent-variable/gists{/gist_id}",
"starred_url": "https://api.github.com/users/latent-variable/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/latent-variable/subscriptions",
"organizations_url": "https://api.github.com/users/latent-variable/orgs",
"repos_url": "https://api.github.com/users/latent-variable/repos",
"events_url": "https://api.github.com/users/latent-variable/events{/privacy}",
"received_events_url": "https://api.github.com/users/latent-variable/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2025-01-25T22:30:06
| 2025-01-25T23:11:18
| 2025-01-25T23:11:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello,
I’m encountering an issue when trying to create a model using ollama create on my Mac. The command fails with the following error message:
transferring model data
Error: neither 'from' or 'files' was specified
This issue occurs despite ensuring the path to the .gguf file is correct. The problem did not happen with earlier versions of Ollama.
Steps to Reproduce:
1. Run the following command:
```
ollama create FuseO1 -f Modelfile
```
2. The error appears immediately after transferring model data.
Environment Details:
• Ollama Version:
• CLI: 0.5.7
• Client: 0.2.1 (Warning shown for version mismatch)
• OS: macOS
• File Contents:
Here is the content of my Modelfile:
```
FROM ./FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview.i1-Q4_K_M.gguf
TEMPLATE """<|begin▁of▁sentence|>{system_prompt}<|User|>{prompt}<|Assistant|><|end▁of▁sentence|><|Assistant|>
"""
PARAMETER stop "<|begin▁of▁sentence|>"
PARAMETER stop "<|end▁of▁sentence|>"
PARAMETER stop "<|User|>"
PARAMETER stop "<|Assistant|>"
```
Troubleshooting Attempts:
1. Verified the file path (both relative and absolute paths).
2. Ensured there are no empty spaces in filenames.
3. Tested with multiple .gguf files and filenames.
4. Successfully imported the same files when running Ollama in a Docker container, so the issue seems specific to the macOS version.
Expected Behavior:
The ollama create command should successfully create a model as it did previously.
Additional Context:
• The issue began after updating to the current version.
• The version mismatch warning for the client version (0.2.1) might be related, but I’m unsure if that’s the root cause.
Let me know if you need more details to troubleshoot!
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.7
|
{
"login": "latent-variable",
"id": 22504489,
"node_id": "MDQ6VXNlcjIyNTA0NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/22504489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/latent-variable",
"html_url": "https://github.com/latent-variable",
"followers_url": "https://api.github.com/users/latent-variable/followers",
"following_url": "https://api.github.com/users/latent-variable/following{/other_user}",
"gists_url": "https://api.github.com/users/latent-variable/gists{/gist_id}",
"starred_url": "https://api.github.com/users/latent-variable/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/latent-variable/subscriptions",
"organizations_url": "https://api.github.com/users/latent-variable/orgs",
"repos_url": "https://api.github.com/users/latent-variable/repos",
"events_url": "https://api.github.com/users/latent-variable/events{/privacy}",
"received_events_url": "https://api.github.com/users/latent-variable/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8585/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2910
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2910/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2910/comments
|
https://api.github.com/repos/ollama/ollama/issues/2910/events
|
https://github.com/ollama/ollama/pull/2910
| 2,166,441,797
|
PR_kwDOJ0Z1Ps5okYH8
| 2,910
|
Run inference in a subprocess
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-04T10:01:47
| 2024-10-17T22:38:00
| 2024-04-07T06:09:01
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2910",
"html_url": "https://github.com/ollama/ollama/pull/2910",
"diff_url": "https://github.com/ollama/ollama/pull/2910.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2910.patch",
"merged_at": null
}
|
This changes the underlying llama server to run in a subprocess, bringing back code from https://github.com/ollama/ollama/blob/v0.1.17/llm/llama.go while keeping the multi-variant support. This is helpful to make sure resources are freed when a model is unloaded and will help allow concurrent models to be loaded.
Note this should probably go in after https://github.com/ollama/ollama/pull/2885
Remaining
- [ ] Handle crash/exit scenario (api will hang)
- [ ] Surface stderr message as an api error
- [ ] CI
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2910/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3875
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3875/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3875/comments
|
https://api.github.com/repos/ollama/ollama/issues/3875/events
|
https://github.com/ollama/ollama/issues/3875
| 2,261,313,798
|
I_kwDOJ0Z1Ps6GyOkG
| 3,875
|
Error: pull model manifest: 401
|
{
"login": "seedpower",
"id": 11022830,
"node_id": "MDQ6VXNlcjExMDIyODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/11022830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seedpower",
"html_url": "https://github.com/seedpower",
"followers_url": "https://api.github.com/users/seedpower/followers",
"following_url": "https://api.github.com/users/seedpower/following{/other_user}",
"gists_url": "https://api.github.com/users/seedpower/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seedpower/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seedpower/subscriptions",
"organizations_url": "https://api.github.com/users/seedpower/orgs",
"repos_url": "https://api.github.com/users/seedpower/repos",
"events_url": "https://api.github.com/users/seedpower/events{/privacy}",
"received_events_url": "https://api.github.com/users/seedpower/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-04-24T13:31:16
| 2024-10-11T06:10:10
| 2024-05-21T17:45:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
OS: Ubuntu 22.04 server
ollama version: 0.1.32
Whether I install it with the official bash script or run it via Docker, pulling any model fails with the same error:
```
# ollama run llama3
pulling manifest
Error: pull model manifest: 401
```
On the same network, my macOS system can pull the llama3 model without issue.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3875/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1227
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1227/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1227/comments
|
https://api.github.com/repos/ollama/ollama/issues/1227/events
|
https://github.com/ollama/ollama/pull/1227
| 2,005,037,736
|
PR_kwDOJ0Z1Ps5gD8sf
| 1,227
|
update python client create example
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-21T20:01:19
| 2023-11-27T20:36:21
| 2023-11-27T20:36:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1227",
"html_url": "https://github.com/ollama/ollama/pull/1227",
"diff_url": "https://github.com/ollama/ollama/pull/1227.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1227.patch",
"merged_at": "2023-11-27T20:36:20"
}
|
When we updated our CLI to upload Modelfile contents directly to the ollama server, we missed updating the Python example client. This change brings the logic in the Python client in line with our Go client.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1227/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5956
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5956/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5956/comments
|
https://api.github.com/repos/ollama/ollama/issues/5956/events
|
https://github.com/ollama/ollama/issues/5956
| 2,430,544,508
|
I_kwDOJ0Z1Ps6Q3yp8
| 5,956
|
Phi3-mini-4k-instruct will need to be updated for latest llama.cpp
|
{
"login": "kaetemi",
"id": 1581053,
"node_id": "MDQ6VXNlcjE1ODEwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1581053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaetemi",
"html_url": "https://github.com/kaetemi",
"followers_url": "https://api.github.com/users/kaetemi/followers",
"following_url": "https://api.github.com/users/kaetemi/following{/other_user}",
"gists_url": "https://api.github.com/users/kaetemi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaetemi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaetemi/subscriptions",
"organizations_url": "https://api.github.com/users/kaetemi/orgs",
"repos_url": "https://api.github.com/users/kaetemi/repos",
"events_url": "https://api.github.com/users/kaetemi/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaetemi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-07-25T16:45:14
| 2024-08-02T15:08:28
| 2024-07-30T22:34:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
See https://github.com/ggerganov/llama.cpp/pull/8627
The blob from the ollama repository fails to load on the latest llama.cpp.
```
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,32064] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,32064] = [-1000.000000, -1000.000000, -1000.00...
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,32064] = [3, 3, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 1
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 32000
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 0
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 22: tokenizer.ggml.padding_token_id u32 = 32000
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = false
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 24: tokenizer.ggml.add_eos_token bool = false
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 25: tokenizer.chat_template str = {% for message in messages %}{% if me...
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 26: general.quantization_version u32 = 2
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - type f32: 67 tensors
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - type q4_0: 129 tensors
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - type q6_K: 1 tensors
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_load: error loading model: error loading model hyperparameters: key not found in model: phi3.attention.sliding_window
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_load_model_from_file: failed to load model
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_init_from_gpt_params: error: failed to load model '/root/.ollama/models/blobs/sha256-3e38718d00bb0007ab7c0cb4a038e7718c07b54f486a7810efd03bb4e894592a'
0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: free(): invalid pointer
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5956/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5956/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8676
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8676/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8676/comments
|
https://api.github.com/repos/ollama/ollama/issues/8676/events
|
https://github.com/ollama/ollama/pull/8676
| 2,819,521,168
|
PR_kwDOJ0Z1Ps6Jbnv2
| 8,676
|
docs: update api.md with streaming with tools is enabled
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2025-01-29T23:01:05
| 2025-01-30T13:08:49
| 2025-01-29T23:14:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8676",
"html_url": "https://github.com/ollama/ollama/pull/8676",
"diff_url": "https://github.com/ollama/ollama/pull/8676.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8676.patch",
"merged_at": "2025-01-29T23:14:30"
}
|
Shoutout to @sixlive for finding this!
Docs were outdated and didn't mention that we can now stream tools.
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8676/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2759
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2759/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2759/comments
|
https://api.github.com/repos/ollama/ollama/issues/2759/events
|
https://github.com/ollama/ollama/pull/2759
| 2,153,404,573
|
PR_kwDOJ0Z1Ps5n38Pc
| 2,759
|
docs: Add LLM-X to Web Integration section
|
{
"login": "mrdjohnson",
"id": 6767910,
"node_id": "MDQ6VXNlcjY3Njc5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6767910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrdjohnson",
"html_url": "https://github.com/mrdjohnson",
"followers_url": "https://api.github.com/users/mrdjohnson/followers",
"following_url": "https://api.github.com/users/mrdjohnson/following{/other_user}",
"gists_url": "https://api.github.com/users/mrdjohnson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrdjohnson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrdjohnson/subscriptions",
"organizations_url": "https://api.github.com/users/mrdjohnson/orgs",
"repos_url": "https://api.github.com/users/mrdjohnson/repos",
"events_url": "https://api.github.com/users/mrdjohnson/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrdjohnson/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-26T07:03:20
| 2024-03-07T15:11:53
| 2024-03-07T15:11:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2759",
"html_url": "https://github.com/ollama/ollama/pull/2759",
"diff_url": "https://github.com/ollama/ollama/pull/2759.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2759.patch",
"merged_at": "2024-03-07T15:11:53"
}
|
Adding yet another web project to the list in the readme!
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2759/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8286
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8286/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8286/comments
|
https://api.github.com/repos/ollama/ollama/issues/8286/events
|
https://github.com/ollama/ollama/issues/8286
| 2,765,815,836
|
I_kwDOJ0Z1Ps6k2wAc
| 8,286
|
Allow use of locally installed CUDA or ROCm
|
{
"login": "erkinalp",
"id": 5833034,
"node_id": "MDQ6VXNlcjU4MzMwMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5833034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erkinalp",
"html_url": "https://github.com/erkinalp",
"followers_url": "https://api.github.com/users/erkinalp/followers",
"following_url": "https://api.github.com/users/erkinalp/following{/other_user}",
"gists_url": "https://api.github.com/users/erkinalp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erkinalp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erkinalp/subscriptions",
"organizations_url": "https://api.github.com/users/erkinalp/orgs",
"repos_url": "https://api.github.com/users/erkinalp/repos",
"events_url": "https://api.github.com/users/erkinalp/events{/privacy}",
"received_events_url": "https://api.github.com/users/erkinalp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-02T10:51:34
| 2025-01-03T09:10:04
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ollama tries to install its own copy of CUDA or ROCm, even when the same version is already installed as a system-wide installation
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8286/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3260
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3260/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3260/comments
|
https://api.github.com/repos/ollama/ollama/issues/3260/events
|
https://github.com/ollama/ollama/issues/3260
| 2,195,998,846
|
I_kwDOJ0Z1Ps6C5Eh-
| 3,260
|
Syntax error: end of file unexpected (expecting ";;")
|
{
"login": "TacitTactics",
"id": 14880732,
"node_id": "MDQ6VXNlcjE0ODgwNzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/14880732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TacitTactics",
"html_url": "https://github.com/TacitTactics",
"followers_url": "https://api.github.com/users/TacitTactics/followers",
"following_url": "https://api.github.com/users/TacitTactics/following{/other_user}",
"gists_url": "https://api.github.com/users/TacitTactics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TacitTactics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TacitTactics/subscriptions",
"organizations_url": "https://api.github.com/users/TacitTactics/orgs",
"repos_url": "https://api.github.com/users/TacitTactics/repos",
"events_url": "https://api.github.com/users/TacitTactics/events{/privacy}",
"received_events_url": "https://api.github.com/users/TacitTactics/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-03-19T20:53:06
| 2024-03-21T07:43:26
| 2024-03-21T07:43:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
File format issue: the downloaded install script will not run as-is (the dos2unix workaround below suggests it contains DOS/Windows CRLF line endings).
### What did you expect to see?
No errors
### Steps to reproduce
Run the provided curl command for the install script, as-is.
### Are there any recent changes that introduced the issue?
Workaround: install dos2unix, download the install script, and then run `dos2unix <filename>`.
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
current
### GPU
Nvidia
### GPU info
_No response_
### CPU
Intel
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3260/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6644
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6644/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6644/comments
|
https://api.github.com/repos/ollama/ollama/issues/6644/events
|
https://github.com/ollama/ollama/pull/6644
| 2,506,482,257
|
PR_kwDOJ0Z1Ps56dOVn
| 6,644
|
Update README.md
|
{
"login": "jake83741",
"id": 125723241,
"node_id": "U_kgDOB35iaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/125723241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jake83741",
"html_url": "https://github.com/jake83741",
"followers_url": "https://api.github.com/users/jake83741/followers",
"following_url": "https://api.github.com/users/jake83741/following{/other_user}",
"gists_url": "https://api.github.com/users/jake83741/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jake83741/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jake83741/subscriptions",
"organizations_url": "https://api.github.com/users/jake83741/orgs",
"repos_url": "https://api.github.com/users/jake83741/repos",
"events_url": "https://api.github.com/users/jake83741/events{/privacy}",
"received_events_url": "https://api.github.com/users/jake83741/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-09-04T23:39:55
| 2024-09-04T23:48:35
| 2024-09-04T23:46:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6644",
"html_url": "https://github.com/ollama/ollama/pull/6644",
"diff_url": "https://github.com/ollama/ollama/pull/6644.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6644.patch",
"merged_at": "2024-09-04T23:46:03"
}
|
This is a pull request to include my Discord bot project, vnc-lm, in the community integrations section. https://github.com/jk011ru/vnc-lm . Thanks
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6644/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8574
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8574/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8574/comments
|
https://api.github.com/repos/ollama/ollama/issues/8574/events
|
https://github.com/ollama/ollama/issues/8574
| 2,810,789,589
|
I_kwDOJ0Z1Ps6niT7V
| 8,574
|
Mini-InternVL
|
{
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enryteam/followers",
"following_url": "https://api.github.com/users/enryteam/following{/other_user}",
"gists_url": "https://api.github.com/users/enryteam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enryteam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enryteam/subscriptions",
"organizations_url": "https://api.github.com/users/enryteam/orgs",
"repos_url": "https://api.github.com/users/enryteam/repos",
"events_url": "https://api.github.com/users/enryteam/events{/privacy}",
"received_events_url": "https://api.github.com/users/enryteam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-25T05:50:37
| 2025-01-28T13:33:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://hf-mirror.com/OpenGVLab/Mini-InternVL-Chat-4B-V1-5
Thanks.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8574/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1768
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1768/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1768/comments
|
https://api.github.com/repos/ollama/ollama/issues/1768/events
|
https://github.com/ollama/ollama/issues/1768
| 2,064,427,724
|
I_kwDOJ0Z1Ps57DKrM
| 1,768
|
The API - http://127.0.0.1:11434/api doesn't work.
|
{
"login": "PriyaranjanMaratheDish",
"id": 133165012,
"node_id": "U_kgDOB-_v1A",
"avatar_url": "https://avatars.githubusercontent.com/u/133165012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PriyaranjanMaratheDish",
"html_url": "https://github.com/PriyaranjanMaratheDish",
"followers_url": "https://api.github.com/users/PriyaranjanMaratheDish/followers",
"following_url": "https://api.github.com/users/PriyaranjanMaratheDish/following{/other_user}",
"gists_url": "https://api.github.com/users/PriyaranjanMaratheDish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PriyaranjanMaratheDish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PriyaranjanMaratheDish/subscriptions",
"organizations_url": "https://api.github.com/users/PriyaranjanMaratheDish/orgs",
"repos_url": "https://api.github.com/users/PriyaranjanMaratheDish/repos",
"events_url": "https://api.github.com/users/PriyaranjanMaratheDish/events{/privacy}",
"received_events_url": "https://api.github.com/users/PriyaranjanMaratheDish/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 8
| 2024-01-03T17:51:24
| 2024-07-17T10:56:36
| 2024-01-04T19:41:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
1) The API at http://127.0.0.1:11434/api doesn't work. Are there any additional steps needed for http://127.0.0.1:11434/api to work correctly?
It doesn't work on my Mac or on EC2 either.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1768/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7941
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7941/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7941/comments
|
https://api.github.com/repos/ollama/ollama/issues/7941/events
|
https://github.com/ollama/ollama/issues/7941
| 2,719,189,962
|
I_kwDOJ0Z1Ps6iE4vK
| 7,941
|
signal arrived during cgo execution
|
{
"login": "datamg-star",
"id": 181604665,
"node_id": "U_kgDOCtMROQ",
"avatar_url": "https://avatars.githubusercontent.com/u/181604665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datamg-star",
"html_url": "https://github.com/datamg-star",
"followers_url": "https://api.github.com/users/datamg-star/followers",
"following_url": "https://api.github.com/users/datamg-star/following{/other_user}",
"gists_url": "https://api.github.com/users/datamg-star/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datamg-star/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datamg-star/subscriptions",
"organizations_url": "https://api.github.com/users/datamg-star/orgs",
"repos_url": "https://api.github.com/users/datamg-star/repos",
"events_url": "https://api.github.com/users/datamg-star/events{/privacy}",
"received_events_url": "https://api.github.com/users/datamg-star/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2024-12-05T02:55:53
| 2024-12-19T15:31:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
[root@localhost data]# ollama run llama3.1:8b
>>> a
It looks likeError: an error was encountered while running the model: unexpected EOF
tail -200 /var/log/messages
Dec 5 10:29:10 localhost ollama: Device 0: NVIDIA A800-SXM4-40GB, compute capability 8.0, VMM: yes
Dec 5 10:29:10 localhost ollama: llm_load_tensors: ggml ctx size = 0.27 MiB
Dec 5 10:29:11 localhost ollama: llm_load_tensors: offloading 32 repeating layers to GPU
Dec 5 10:29:11 localhost ollama: llm_load_tensors: offloading non-repeating layers to GPU
Dec 5 10:29:11 localhost ollama: llm_load_tensors: offloaded 33/33 layers to GPU
Dec 5 10:29:11 localhost ollama: llm_load_tensors: CPU buffer size = 281.81 MiB
Dec 5 10:29:11 localhost ollama: llm_load_tensors: CUDA0 buffer size = 4403.50 MiB
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: n_ctx = 8192
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: n_batch = 2048
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: n_ubatch = 512
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: flash_attn = 0
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: freq_base = 500000.0
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: freq_scale = 1
Dec 5 10:29:16 localhost ollama: llama_kv_cache_init: CUDA0 KV buffer size = 1024.00 MiB
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: CUDA_Host output buffer size = 2.02 MiB
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: graph nodes = 1030
Dec 5 10:29:16 localhost ollama: llama_new_context_with_model: graph splits = 2
Dec 5 10:29:16 localhost ollama: time=2024-12-05T10:29:16.838+08:00 level=INFO source=server.go:601 msg="llama runner started in 13.05 seconds"
Dec 5 10:29:16 localhost ollama: [GIN] 2024/12/05 - 10:29:16 | 200 | 13.185062523s | 127.0.0.1 | POST "/api/generate"
Dec 5 10:29:24 localhost ollama: SIGSEGV: segmentation violation
Dec 5 10:29:24 localhost ollama: PC=0x7f74e0682a00 m=4 sigcode=1 addr=0x7f74359ca7ca
Dec 5 10:29:24 localhost ollama: signal arrived during cgo execution
Dec 5 10:29:24 localhost ollama: goroutine 36 gp=0xc000104700 m=4 mp=0xc000057808 [syscall]:
Dec 5 10:29:24 localhost ollama: runtime.cgocall(0x5640ed665110, 0xc0002a8b48)
Dec 5 10:29:24 localhost ollama: runtime/cgocall.go:157 +0x4b fp=0xc0002a8b20 sp=0xc0002a8ae8 pc=0x5640ed3e63cb
Dec 5 10:29:24 localhost ollama: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7f74893e78e0, {0x1, 0x7f74887e89a0, 0x0, 0x0, 0x7f74887ea9b0, 0x7f74887ec9c0, 0x7f74887ee9d0, 0x7f7488802480, 0x0, ...})
Dec 5 10:29:24 localhost ollama: _cgo_gotypes.go:543 +0x52 fp=0xc0002a8b48 sp=0xc0002a8b20 pc=0x5640ed4e3952
Dec 5 10:29:24 localhost ollama: github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5640ed660e0b?, 0x7f74893e78e0?)
Dec 5 10:29:24 localhost ollama: github.com/ollama/ollama/llama/llama.go:167 +0xd8 fp=0xc0002a8c68 sp=0xc0002a8b48 pc=0x5640ed4e5f78
Dec 5 10:29:24 localhost ollama: github.com/ollama/ollama/llama.(*Context).Decode(0x5640edc560e0?, 0x0?)
Dec 5 10:29:24 localhost ollama: github.com/ollama/ollama/llama/llama.go:167 +0x13 fp=0xc0002a8cb0 sp=0xc0002a8c68 pc=0x5640ed4e5e13
Dec 5 10:29:24 localhost ollama: main.(*Server).processBatch(0xc00013c120, 0xc0002ac000, 0xc0002a8f10)
Dec 5 10:29:24 localhost ollama: github.com/ollama/ollama/llama/runner/runner.go:425 +0x24d fp=0xc0002a8ed0 sp=0xc0002a8cb0 pc=0x5640ed65facd
Dec 5 10:29:24 localhost ollama: main.(*Server).run(0xc00013c120, {0x5640ed99ecc0, 0xc00017a050})
Dec 5 10:29:24 localhost ollama: github.com/ollama/ollama/llama/runner/runner.go:333 +0x1e5 fp=0xc0002a8fb8 sp=0xc0002a8ed0 pc=0x5640ed65f545
Dec 5 10:29:24 localhost ollama: main.main.gowrap2()
Dec 5 10:29:24 localhost ollama: github.com/ollama/ollama/llama/runner/runner.go:934 +0x28 fp=0xc0002a8fe0 sp=0xc0002a8fb8 pc=0x5640ed664148
Dec 5 10:29:24 localhost ollama: runtime.goexit({})
Dec 5 10:29:24 localhost ollama: runtime/asm_amd64.s:1695 +0x1 fp=0xc0002a8fe8 sp=0xc0002a8fe0 pc=0x5640ed44ede1
Dec 5 10:29:24 localhost ollama: created by main.main in goroutine 1
Dec 5 10:29:24 localhost ollama: github.com/ollama/ollama/llama/runner/runner.go:934 +0xc52
Dec 5 10:29:24 localhost ollama: goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
Dec 5 10:29:24 localhost ollama: runtime.gopark(0xc000034a08?, 0x0?, 0xc0?, 0x61?, 0xc0000298b8?)
Dec 5 10:29:24 localhost ollama: runtime/proc.go:402 +0xce fp=0xc000029880 sp=0xc000029860 pc=0x5640ed41d00e
Dec 5 10:29:24 localhost ollama: runtime.netpollblock(0xc000029918?, 0xed3e5b26?, 0x40?)
Dec 5 10:29:24 localhost ollama: runtime/netpoll.go:573 +0xf7 fp=0xc0000298b8 sp=0xc000029880 pc=0x5640ed415257
Dec 5 10:29:24 localhost ollama: internal/poll.runtime_pollWait(0x7f74975fef20, 0x72)
Dec 5 10:29:24 localhost ollama: runtime/netpoll.go:345 +0x85 fp=0xc0000298d8 sp=0xc0000298b8 pc=0x5640ed449aa5
Dec 5 10:29:24 localhost ollama: internal/poll.(*pollDesc).wait(0x3?, 0x3fe?, 0x0)
Dec 5 10:29:24 localhost ollama: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000029900 sp=0xc0000298d8 pc=0x5640ed4999c7
Dec 5 10:29:24 localhost ollama: internal/poll.(*pollDesc).waitRead(...)
Dec 5 10:29:24 localhost ollama: internal/poll/fd_poll_runtime.go:89
Dec 5 10:29:24 localhost ollama: internal/poll.(*FD).Accept(0xc000174080)
Dec 5 10:29:24 localhost ollama: internal/poll/fd_unix.go:611 +0x2ac fp=0xc0000299a8 sp=0xc000029900 pc=0x5640ed49ae8c
Dec 5 10:29:24 localhost ollama: net.(*netFD).accept(0xc000174080)
Dec 5 10:29:24 localhost ollama: net/fd_unix.go:172 +0x29 fp=0xc000029a60 sp=0xc0000299a8 pc=0x5640ed509a09
Dec 5 10:29:24 localhost ollama: net.(*TCPListener).accept(0xc00013e1c0)
Dec 5 10:29:24 localhost ollama: net/tcpsock_posix.go:159 +0x1e fp=0xc000029a88 sp=0xc000029a60 pc=0x5640ed51a73e
Dec 5 10:29:24 localhost ollama: net.(*TCPListener).Accept(0xc00013e1c0)
Dec 5 10:29:24 localhost ollama: net/tcpsock.go:327 +0x30 fp=0xc000029ab8 sp=0xc000029a88 pc=0x5640ed519a90
Dec 5 10:29:24 localhost ollama: net/http.(*onceCloseListener).Accept(0xc00013c1b0?)
Dec 5 10:29:24 localhost ollama: <autogenerated>:1 +0x24 fp=0xc000029ad0 sp=0xc000029ab8 pc=0x5640ed640ca4
Dec 5 10:29:24 localhost ollama: net/http.(*Server).Serve(0xc0001220f0, {0x5640ed99e680, 0xc00013e1c0})
Dec 5 10:29:24 localhost ollama: net/http/server.go:3260 +0x33e fp=0xc000029c00 sp=0xc000029ad0 pc=0x5640ed637abe
Dec 5 10:29:24 localhost ollama: main.main()
Dec 5 10:29:24 localhost ollama: github.com/ollama/ollama/llama/runner/runner.go:954 +0xfec fp=0xc000029f50 sp=0xc000029c00 pc=0x5640ed663ecc
Dec 5 10:29:24 localhost ollama: runtime.main()
Dec 5 10:29:24 localhost ollama: runtime/proc.go:271 +0x29d fp=0xc000029fe0 sp=0xc000029f50 pc=0x5640ed41cbdd
Dec 5 10:29:24 localhost ollama: runtime.goexit({})
Dec 5 10:29:24 localhost ollama: runtime/asm_amd64.s:1695 +0x1 fp=0xc000029fe8 sp=0xc000029fe0 pc=0x5640ed44ede1
Dec 5 10:29:24 localhost ollama: goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
Dec 5 10:29:24 localhost ollama: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Dec 5 10:29:24 localhost ollama: runtime/proc.go:402 +0xce fp=0xc000050fa8 sp=0xc000050f88 pc=0x5640ed41d00e
Dec 5 10:29:24 localhost ollama: runtime.goparkunlock(...)
Dec 5 10:29:24 localhost ollama: runtime/proc.go:408
Dec 5 10:29:24 localhost ollama: runtime.forcegchelper()
Dec 5 10:29:24 localhost ollama: runtime/proc.go:326 +0xb8 fp=0xc000050fe0 sp=0xc000050fa8 pc=0x5640ed41ce98
Dec 5 10:29:24 localhost ollama: runtime.goexit({})
Dec 5 10:29:24 localhost ollama: runtime/asm_amd64.s:1695 +0x1 fp=0xc000050fe8 sp=0xc000050fe0 pc=0x5640ed44ede1
Dec 5 10:29:24 localhost ollama: created by runtime.init.6 in goroutine 1
Dec 5 10:29:24 localhost ollama: runtime/proc.go:314 +0x1a
Dec 5 10:29:24 localhost ollama: goroutine 18 gp=0xc00008a380 m=nil [GC sweep wait]:
Dec 5 10:29:24 localhost ollama: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Dec 5 10:29:24 localhost ollama: runtime/proc.go:402 +0xce fp=0xc00004c780 sp=0xc00004c760 pc=0x5640ed41d00e
Dec 5 10:29:24 localhost ollama: runtime.goparkunlock(...)
Dec 5 10:29:24 localhost ollama: runtime/proc.go:408
Dec 5 10:29:24 localhost ollama: runtime.bgsweep(0xc000096000)
Dec 5 10:29:24 localhost ollama: runtime/mgcsweep.go:278 +0x94 fp=0xc00004c7c8 sp=0xc00004c780 pc=0x5640ed407b54
Dec 5 10:29:24 localhost ollama: runtime.gcenable.gowrap1()
Dec 5 10:29:24 localhost ollama: runtime/mgc.go:203 +0x25 fp=0xc00004c7e0 sp=0xc00004c7c8 pc=0x5640ed3fc685
Dec 5 10:29:24 localhost ollama: runtime.goexit({})
Dec 5 10:29:25 localhost ollama: runtime/asm_amd64.s:1695 +0x1 fp=0xc00004c7e8 sp=0xc00004c7e0 pc=0x5640ed44ede1
Dec 5 10:29:25 localhost ollama: created by runtime.gcenable in goroutine 1
Dec 5 10:29:25 localhost ollama: runtime/mgc.go:203 +0x66
Dec 5 10:29:25 localhost ollama: goroutine 19 gp=0xc00008a540 m=nil [GC scavenge wait]:
Dec 5 10:29:25 localhost ollama: runtime.gopark(0xc000096000?, 0x5640ed8a02b0?, 0x1?, 0x0?, 0xc00008a540?)
Dec 5 10:29:25 localhost ollama: runtime/proc.go:402 +0xce fp=0xc00004cf78 sp=0xc00004cf58 pc=0x5640ed41d00e
Dec 5 10:29:25 localhost ollama: runtime.goparkunlock(...)
Dec 5 10:29:25 localhost ollama: runtime/proc.go:408
Dec 5 10:29:25 localhost ollama: runtime.(*scavengerState).park(0x5640edb6d540)
Dec 5 10:29:25 localhost ollama: runtime/mgcscavenge.go:425 +0x49 fp=0xc00004cfa8 sp=0xc00004cf78 pc=0x5640ed405549
Dec 5 10:29:25 localhost ollama: runtime.bgscavenge(0xc000096000)
Dec 5 10:29:25 localhost ollama: runtime/mgcscavenge.go:653 +0x3c fp=0xc00004cfc8 sp=0xc00004cfa8 pc=0x5640ed405adc
Dec 5 10:29:25 localhost ollama: runtime.gcenable.gowrap2()
Dec 5 10:29:25 localhost ollama: runtime/mgc.go:204 +0x25 fp=0xc00004cfe0 sp=0xc00004cfc8 pc=0x5640ed3fc625
Dec 5 10:29:25 localhost ollama: runtime.goexit({})
Dec 5 10:29:25 localhost ollama: runtime/asm_amd64.s:1695 +0x1 fp=0xc00004cfe8 sp=0xc00004cfe0 pc=0x5640ed44ede1
Dec 5 10:29:25 localhost ollama: created by runtime.gcenable in goroutine 1
Dec 5 10:29:25 localhost ollama: runtime/mgc.go:204 +0xa5
Dec 5 10:29:25 localhost ollama: goroutine 34 gp=0xc000104380 m=nil [finalizer wait]:
Dec 5 10:29:25 localhost ollama: runtime.gopark(0xc000050648?, 0x5640ed3eff85?, 0xa8?, 0x1?, 0xc0000061c0?)
Dec 5 10:29:25 localhost ollama: runtime/proc.go:402 +0xce fp=0xc000050620 sp=0xc000050600 pc=0x5640ed41d00e
Dec 5 10:29:25 localhost ollama: runtime.runfinq()
Dec 5 10:29:25 localhost ollama: runtime/mfinal.go:194 +0x107 fp=0xc0000507e0 sp=0xc000050620 pc=0x5640ed3fb6c7
Dec 5 10:29:25 localhost ollama: runtime.goexit({})
Dec 5 10:29:25 localhost ollama: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000507e8 sp=0xc0000507e0 pc=0x5640ed44ede1
Dec 5 10:29:25 localhost ollama: created by runtime.createfing in goroutine 1
Dec 5 10:29:25 localhost ollama: runtime/mfinal.go:164 +0x3d
Dec 5 10:29:25 localhost ollama: goroutine 32 gp=0xc000104540 m=nil [IO wait]:
Dec 5 10:29:25 localhost ollama: runtime.gopark(0x10?, 0x10?, 0xf0?, 0x5d?, 0xb?)
Dec 5 10:29:25 localhost ollama: runtime/proc.go:402 +0xce fp=0xc000185da8 sp=0xc000185d88 pc=0x5640ed41d00e
Dec 5 10:29:25 localhost ollama: runtime.netpollblock(0x5640ed483558?, 0xed3e5b26?, 0x40?)
Dec 5 10:29:25 localhost ollama: runtime/netpoll.go:573 +0xf7 fp=0xc000185de0 sp=0xc000185da8 pc=0x5640ed415257
Dec 5 10:29:25 localhost ollama: internal/poll.runtime_pollWait(0x7f74975fee28, 0x72)
Dec 5 10:29:25 localhost ollama: runtime/netpoll.go:345 +0x85 fp=0xc000185e00 sp=0xc000185de0 pc=0x5640ed449aa5
Dec 5 10:29:25 localhost ollama: internal/poll.(*pollDesc).wait(0xc000174100?, 0xc000114ee1?, 0x0)
Dec 5 10:29:25 localhost ollama: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000185e28 sp=0xc000185e00 pc=0x5640ed4999c7
Dec 5 10:29:25 localhost ollama: internal/poll.(*pollDesc).waitRead(...)
Dec 5 10:29:25 localhost ollama: internal/poll/fd_poll_runtime.go:89
Dec 5 10:29:25 localhost ollama: internal/poll.(*FD).Read(0xc000174100, {0xc000114ee1, 0x1, 0x1})
Dec 5 10:29:25 localhost ollama: internal/poll/fd_unix.go:164 +0x27a fp=0xc000185ec0 sp=0xc000185e28 pc=0x5640ed49a51a
Dec 5 10:29:25 localhost ollama: net.(*netFD).Read(0xc000174100, {0xc000114ee1?, 0xc000185f48?, 0x5640ed44b6d0?})
Dec 5 10:29:25 localhost ollama: net/fd_posix.go:55 +0x25 fp=0xc000185f08 sp=0xc000185ec0 pc=0x5640ed508905
Dec 5 10:29:25 localhost ollama: net.(*conn).Read(0xc000112090, {0xc000114ee1?, 0x0?, 0x5640edc560e0?})
Dec 5 10:29:25 localhost ollama: net/net.go:185 +0x45 fp=0xc000185f50 sp=0xc000185f08 pc=0x5640ed512bc5
Dec 5 10:29:25 localhost ollama: net.(*TCPConn).Read(0x5640edb2e870?, {0xc000114ee1?, 0x0?, 0x0?})
Dec 5 10:29:25 localhost ollama: <autogenerated>:1 +0x25 fp=0xc000185f80 sp=0xc000185f50 pc=0x5640ed51e5a5
Dec 5 10:29:25 localhost ollama: net/http.(*connReader).backgroundRead(0xc000114ed0)
Dec 5 10:29:25 localhost ollama: net/http/server.go:681 +0x37 fp=0xc000185fc8 sp=0xc000185f80 pc=0x5640ed62d437
Dec 5 10:29:25 localhost ollama: net/http.(*connReader).startBackgroundRead.gowrap2()
Dec 5 10:29:25 localhost ollama: net/http/server.go:677 +0x25 fp=0xc000185fe0 sp=0xc000185fc8 pc=0x5640ed62d365
Dec 5 10:29:25 localhost ollama: runtime.goexit({})
Dec 5 10:29:25 localhost ollama: runtime/asm_amd64.s:1695 +0x1 fp=0xc000185fe8 sp=0xc000185fe0 pc=0x5640ed44ede1
Dec 5 10:29:25 localhost ollama: created by net/http.(*connReader).startBackgroundRead in goroutine 37
Dec 5 10:29:25 localhost ollama: net/http/server.go:677 +0xba
Dec 5 10:29:25 localhost ollama: goroutine 37 gp=0xc0001048c0 m=nil [select]:
Dec 5 10:29:25 localhost ollama: runtime.gopark(0xc0000d9a48?, 0x2?, 0xd8?, 0x96?, 0xc0000d97ec?)
Dec 5 10:29:25 localhost ollama: runtime/proc.go:402 +0xce fp=0xc0000d9658 sp=0xc0000d9638 pc=0x5640ed41d00e
Dec 5 10:29:25 localhost ollama: runtime.selectgo(0xc0000d9a48, 0xc0000d97e8, 0xc0002b0000?, 0x0, 0x1?, 0x1)
Dec 5 10:29:25 localhost ollama: runtime/select.go:327 +0x725 fp=0xc0000d9778 sp=0xc0000d9658 pc=0x5640ed42e3e5
Dec 5 10:29:25 localhost ollama: main.(*Server).completion(0xc00013c120, {0x5640ed99e830, 0xc0000aca80}, 0xc0000a2d80)
Dec 5 10:29:25 localhost ollama: github.com/ollama/ollama/llama/runner/runner.go:679 +0xa45 fp=0xc0000d9ab8 sp=0xc0000d9778 pc=0x5640ed6618e5
Dec 5 10:29:25 localhost ollama: main.(*Server).completion-fm({0x5640ed99e830?, 0xc0000aca80?}, 0x5640ed63bded?)
Dec 5 10:29:25 localhost ollama: <autogenerated>:1 +0x36 fp=0xc0000d9ae8 sp=0xc0000d9ab8 pc=0x5640ed664936
Dec 5 10:29:25 localhost ollama: net/http.HandlerFunc.ServeHTTP(0xc000116d00?, {0x5640ed99e830?, 0xc0000aca80?}, 0x10?)
Dec 5 10:29:25 localhost ollama: net/http/server.go:2171 +0x29 fp=0xc0000d9b10 sp=0xc0000d9ae8 pc=0x5640ed634889
Dec 5 10:29:25 localhost ollama: net/http.(*ServeMux).ServeHTTP(0x5640ed3eff85?, {0x5640ed99e830, 0xc0000aca80}, 0xc0000a2d80)
Dec 5 10:29:25 localhost ollama: net/http/server.go:2688 +0x1ad fp=0xc0000d9b60 sp=0xc0000d9b10 pc=0x5640ed63670d
Dec 5 10:29:25 localhost ollama: net/http.serverHandler.ServeHTTP({0x5640ed99db80?}, {0x5640ed99e830?, 0xc0000aca80?}, 0x6?)
Dec 5 10:29:25 localhost ollama: net/http/server.go:3142 +0x8e fp=0xc0000d9b90 sp=0xc0000d9b60 pc=0x5640ed63772e
Dec 5 10:29:25 localhost ollama: net/http.(*conn).serve(0xc00013c1b0, {0x5640ed99ec88, 0xc000114db0})
Dec 5 10:29:25 localhost ollama: net/http/server.go:2044 +0x5e8 fp=0xc0000d9fb8 sp=0xc0000d9b90 pc=0x5640ed6334c8
Dec 5 10:29:25 localhost ollama: net/http.(*Server).Serve.gowrap3()
Dec 5 10:29:25 localhost ollama: net/http/server.go:3290 +0x28 fp=0xc0000d9fe0 sp=0xc0000d9fb8 pc=0x5640ed637ea8
Dec 5 10:29:25 localhost ollama: runtime.goexit({})
Dec 5 10:29:25 localhost ollama: runtime/asm_amd64.s:1695 +0x1 fp=0xc0000d9fe8 sp=0xc0000d9fe0 pc=0x5640ed44ede1
Dec 5 10:29:25 localhost ollama: created by net/http.(*Server).Serve in goroutine 1
Dec 5 10:29:25 localhost ollama: net/http/server.go:3290 +0x4b4
Dec 5 10:29:25 localhost ollama: rax 0x7f74600fc0e0
Dec 5 10:29:25 localhost ollama: rbx 0x7f749864b7b0
Dec 5 10:29:25 localhost ollama: rcx 0x7f74600fc0e0
Dec 5 10:29:25 localhost ollama: rdx 0x7f74e0682a00
Dec 5 10:29:25 localhost ollama: rdi 0x7f74600fc0e0
Dec 5 10:29:25 localhost ollama: rsi 0x7f74359ca7ca
Dec 5 10:29:25 localhost ollama: rbp 0x7f749864b700
Dec 5 10:29:25 localhost ollama: rsp 0x7f749864b6a8
Dec 5 10:29:25 localhost ollama: r8 0x4
Dec 5 10:29:25 localhost ollama: r9 0x4c
Dec 5 10:29:25 localhost ollama: r10 0x0
Dec 5 10:29:25 localhost ollama: r11 0x7f74e06b4750
Dec 5 10:29:25 localhost ollama: r12 0x7f7468296fd0
Dec 5 10:29:25 localhost ollama: r13 0x7f7468297910
Dec 5 10:29:25 localhost ollama: r14 0x7f74682970d0
Dec 5 10:29:25 localhost ollama: r15 0x7f746851a1e0
Dec 5 10:29:25 localhost ollama: rip 0x7f74e0682a00
Dec 5 10:29:25 localhost ollama: rflags 0x10287
Dec 5 10:29:25 localhost ollama: cs 0x33
Dec 5 10:29:25 localhost ollama: fs 0x0
Dec 5 10:29:25 localhost ollama: gs 0x0
Dec 5 10:29:25 localhost ollama: [GIN] 2024/12/05 - 10:29:25 | 200 | 2.175286662s | 127.0.0.1 | POST "/api/chat"
Dec 5 10:30:02 localhost systemd: Started Session 306 of user root.
Dec 5 10:34:30 localhost ollama: time=2024-12-05T10:34:30.066+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.032960268 model=/data/ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
Dec 5 10:34:30 localhost ollama: time=2024-12-05T10:34:30.316+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2833089730000005 model=/data/ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
Dec 5 10:34:30 localhost ollama: time=2024-12-05T10:34:30.565+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.532611974 model=/data/ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7941/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6053
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6053/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6053/comments
|
https://api.github.com/repos/ollama/ollama/issues/6053/events
|
https://github.com/ollama/ollama/pull/6053
| 2,435,599,844
|
PR_kwDOJ0Z1Ps52wZVB
| 6,053
|
docs: Add ingest to list of cli tools
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-29T14:34:51
| 2024-08-09T07:38:34
| 2024-08-09T07:38:33
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6053",
"html_url": "https://github.com/ollama/ollama/pull/6053",
"diff_url": "https://github.com/ollama/ollama/pull/6053.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6053.patch",
"merged_at": null
}
|
Add ingest (https://github.com/sammcj/ingest) to the list of CLI tools for Ollama.
Ingest is a tool for parsing files/directories into an LLM-friendly, markdown-formatted prompt; it can pass the content and prompt directly to Ollama.
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6053/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1590
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1590/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1590/comments
|
https://api.github.com/repos/ollama/ollama/issues/1590/events
|
https://github.com/ollama/ollama/issues/1590
| 2,047,632,584
|
I_kwDOJ0Z1Ps56DGTI
| 1,590
|
Add support for Intel Arc GPUs
|
{
"login": "taep96",
"id": 64481039,
"node_id": "MDQ6VXNlcjY0NDgxMDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/64481039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taep96",
"html_url": "https://github.com/taep96",
"followers_url": "https://api.github.com/users/taep96/followers",
"following_url": "https://api.github.com/users/taep96/following{/other_user}",
"gists_url": "https://api.github.com/users/taep96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taep96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taep96/subscriptions",
"organizations_url": "https://api.github.com/users/taep96/orgs",
"repos_url": "https://api.github.com/users/taep96/repos",
"events_url": "https://api.github.com/users/taep96/events{/privacy}",
"received_events_url": "https://api.github.com/users/taep96/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677491450,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgJu-g",
"url": "https://api.github.com/repos/ollama/ollama/labels/intel",
"name": "intel",
"color": "226E5B",
"default": false,
"description": "issues relating to Intel GPUs"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 66
| 2023-12-18T23:25:37
| 2025-01-24T05:50:38
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null | null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1590/reactions",
"total_count": 90,
"+1": 56,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 17,
"eyes": 17
}
|
https://api.github.com/repos/ollama/ollama/issues/1590/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3540
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3540/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3540/comments
|
https://api.github.com/repos/ollama/ollama/issues/3540/events
|
https://github.com/ollama/ollama/pull/3540
| 2,231,714,189
|
PR_kwDOJ0Z1Ps5sCb57
| 3,540
|
Implement 'split_mode' and 'tensor_split' support in modelfiles
|
{
"login": "jukofyork",
"id": 69222624,
"node_id": "MDQ6VXNlcjY5MjIyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jukofyork",
"html_url": "https://github.com/jukofyork",
"followers_url": "https://api.github.com/users/jukofyork/followers",
"following_url": "https://api.github.com/users/jukofyork/following{/other_user}",
"gists_url": "https://api.github.com/users/jukofyork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jukofyork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jukofyork/subscriptions",
"organizations_url": "https://api.github.com/users/jukofyork/orgs",
"repos_url": "https://api.github.com/users/jukofyork/repos",
"events_url": "https://api.github.com/users/jukofyork/events{/privacy}",
"received_events_url": "https://api.github.com/users/jukofyork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-04-08T17:13:21
| 2024-04-27T13:28:00
| 2024-04-10T02:34:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3540",
"html_url": "https://github.com/ollama/ollama/pull/3540",
"diff_url": "https://github.com/ollama/ollama/pull/3540.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3540.patch",
"merged_at": null
}
|
This adds support for the `tensor_split` and `split_mode` options in `llama.cpp::server`.
The `split_mode` option has three possible values; from `llama.cpp::server --help`:
> How to split the model across multiple GPUs, one of:
> - "layer": split layers and KV across GPUs (default).
> - "row": split rows across GPUs.
> - "none": use one GPU only.
It also changes the meaning of the `main_gpu` parameter:
> The GPU to use for the model (with split_mode = "none") or for intermediate results and KV (with split_mode = "row").
---
To use:
```
git clone https://github.com/ollama/ollama
cd ollama
git pull origin pull/3540/head
```
Then compile as normal (you might want to edit the "0.0.0" version number in `version/version.go` before compiling if you use it with OpenWebUI, or it will think the version is below the minimum it requires).
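
As a rough illustration, a Modelfile using these options might look like the following sketch (parameter names taken from this PR's title; the model name and split values are purely illustrative assumptions, not tested settings):

```
# Hypothetical example — values are illustrative only
FROM llama2:70b
# Split rows across GPUs instead of whole layers
PARAMETER split_mode row
# Give GPU 0 three times the share of GPU 1
PARAMETER tensor_split 3,1
# With split_mode "row", main_gpu holds intermediate results and KV
PARAMETER main_gpu 0
```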
|
{
"login": "jukofyork",
"id": 69222624,
"node_id": "MDQ6VXNlcjY5MjIyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jukofyork",
"html_url": "https://github.com/jukofyork",
"followers_url": "https://api.github.com/users/jukofyork/followers",
"following_url": "https://api.github.com/users/jukofyork/following{/other_user}",
"gists_url": "https://api.github.com/users/jukofyork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jukofyork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jukofyork/subscriptions",
"organizations_url": "https://api.github.com/users/jukofyork/orgs",
"repos_url": "https://api.github.com/users/jukofyork/repos",
"events_url": "https://api.github.com/users/jukofyork/events{/privacy}",
"received_events_url": "https://api.github.com/users/jukofyork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3540/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1649
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1649/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1649/comments
|
https://api.github.com/repos/ollama/ollama/issues/1649/events
|
https://github.com/ollama/ollama/issues/1649
| 2,051,513,243
|
I_kwDOJ0Z1Ps56R5ub
| 1,649
|
Llama not using cuda cuBLAS error 13
|
{
"login": "hbqdev",
"id": 49971676,
"node_id": "MDQ6VXNlcjQ5OTcxNjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/49971676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hbqdev",
"html_url": "https://github.com/hbqdev",
"followers_url": "https://api.github.com/users/hbqdev/followers",
"following_url": "https://api.github.com/users/hbqdev/following{/other_user}",
"gists_url": "https://api.github.com/users/hbqdev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hbqdev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hbqdev/subscriptions",
"organizations_url": "https://api.github.com/users/hbqdev/orgs",
"repos_url": "https://api.github.com/users/hbqdev/repos",
"events_url": "https://api.github.com/users/hbqdev/events{/privacy}",
"received_events_url": "https://api.github.com/users/hbqdev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2023-12-21T01:14:26
| 2024-02-01T23:23:08
| 2024-02-01T23:23:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It seems this issue was first reported here:
https://github.com/jmorganca/ollama/issues/920
```
Dec 20 17:03:07 NightFuryX ollama[12288]: llama_new_context_with_model: total VRAM used: 5913.56 MiB (model: 3577.55 MiB, context: 2336.00 MiB)
Dec 20 17:03:11 NightFuryX ollama[12288]: CUDA error 700 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8111: an illegal memory access was encountered
Dec 20 17:03:11 NightFuryX ollama[12288]: current device: 1
Dec 20 17:03:11 NightFuryX ollama[12288]: GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8111: !"CUDA error"
Dec 20 17:03:12 NightFuryX ollama[12288]: 2023/12/20 17:03:12 llama.go:451: 700 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8111: an illegal memory access was encountered
Dec 20 17:03:12 NightFuryX ollama[12288]: current device: 1
Dec 20 17:03:12 NightFuryX ollama[12288]: GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8111: !"CUDA error"
Dec 20 17:03:12 NightFuryX ollama[12288]: 2023/12/20 17:03:12 llama.go:459: error starting llama runner: llama runner process has terminated
Dec 20 17:03:12 NightFuryX ollama[12288]: 2023/12/20 17:03:12 llama.go:525: llama runner stopped successfully
Dec 20 17:03:12 NightFuryX ollama[12288]: 2023/12/20 17:03:12 llama.go:436: starting llama runner
Dec 20 17:03:12 NightFuryX ollama[12288]: 2023/12/20 17:03:12 llama.go:494: waiting for llama runner to start responding
Dec 20 17:03:12 NightFuryX ollama[12381]: {"timestamp":1703120592,"level":"WARNING","function":"server_params_parse","line":2160,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":-1}
```
However, on the latest build I still have this error. I tried on both Linux and WSL2 with the same result. NVCC is installed.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1649/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1649/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2511
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2511/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2511/comments
|
https://api.github.com/repos/ollama/ollama/issues/2511/events
|
https://github.com/ollama/ollama/pull/2511
| 2,135,852,135
|
PR_kwDOJ0Z1Ps5m8K1D
| 2,511
|
[nit] Remove unused msg local var.
|
{
"login": "ttsugriy",
"id": 172294,
"node_id": "MDQ6VXNlcjE3MjI5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/172294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ttsugriy",
"html_url": "https://github.com/ttsugriy",
"followers_url": "https://api.github.com/users/ttsugriy/followers",
"following_url": "https://api.github.com/users/ttsugriy/following{/other_user}",
"gists_url": "https://api.github.com/users/ttsugriy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ttsugriy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ttsugriy/subscriptions",
"organizations_url": "https://api.github.com/users/ttsugriy/orgs",
"repos_url": "https://api.github.com/users/ttsugriy/repos",
"events_url": "https://api.github.com/users/ttsugriy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ttsugriy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-15T07:46:38
| 2024-02-20T19:18:59
| 2024-02-20T19:02:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2511",
"html_url": "https://github.com/ollama/ollama/pull/2511",
"diff_url": "https://github.com/ollama/ollama/pull/2511.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2511.patch",
"merged_at": "2024-02-20T19:02:35"
}
|
It's not used but clutters the code.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2511/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2418
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2418/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2418/comments
|
https://api.github.com/repos/ollama/ollama/issues/2418/events
|
https://github.com/ollama/ollama/issues/2418
| 2,126,284,310
|
I_kwDOJ0Z1Ps5-vIYW
| 2,418
|
What are the system requirements?
|
{
"login": "worikgh",
"id": 5387413,
"node_id": "MDQ6VXNlcjUzODc0MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5387413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/worikgh",
"html_url": "https://github.com/worikgh",
"followers_url": "https://api.github.com/users/worikgh/followers",
"following_url": "https://api.github.com/users/worikgh/following{/other_user}",
"gists_url": "https://api.github.com/users/worikgh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/worikgh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/worikgh/subscriptions",
"organizations_url": "https://api.github.com/users/worikgh/orgs",
"repos_url": "https://api.github.com/users/worikgh/repos",
"events_url": "https://api.github.com/users/worikgh/events{/privacy}",
"received_events_url": "https://api.github.com/users/worikgh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-02-09T00:37:47
| 2024-07-12T20:10:30
| 2024-02-18T08:57:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be very useful to have a section on system requirements in the README.md.
Nothing too detailed, but:
* Disk space required
* Main RAM
* Video/compute card requirements
Keep up the good work!
|
{
"login": "worikgh",
"id": 5387413,
"node_id": "MDQ6VXNlcjUzODc0MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5387413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/worikgh",
"html_url": "https://github.com/worikgh",
"followers_url": "https://api.github.com/users/worikgh/followers",
"following_url": "https://api.github.com/users/worikgh/following{/other_user}",
"gists_url": "https://api.github.com/users/worikgh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/worikgh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/worikgh/subscriptions",
"organizations_url": "https://api.github.com/users/worikgh/orgs",
"repos_url": "https://api.github.com/users/worikgh/repos",
"events_url": "https://api.github.com/users/worikgh/events{/privacy}",
"received_events_url": "https://api.github.com/users/worikgh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2418/reactions",
"total_count": 5,
"+1": 4,
"-1": 1,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2418/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2600
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2600/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2600/comments
|
https://api.github.com/repos/ollama/ollama/issues/2600/events
|
https://github.com/ollama/ollama/pull/2600
| 2,143,148,487
|
PR_kwDOJ0Z1Ps5nVHWd
| 2,600
|
Document setting server vars for windows
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-19T21:12:26
| 2024-02-19T21:46:39
| 2024-02-19T21:46:37
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2600",
"html_url": "https://github.com/ollama/ollama/pull/2600",
"diff_url": "https://github.com/ollama/ollama/pull/2600.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2600.patch",
"merged_at": "2024-02-19T21:46:37"
}
|
Fixes #2546
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2600/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8328
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8328/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8328/comments
|
https://api.github.com/repos/ollama/ollama/issues/8328/events
|
https://github.com/ollama/ollama/issues/8328
| 2,771,743,238
|
I_kwDOJ0Z1Ps6lNXIG
| 8,328
|
[Model request] alea-institute/kl3m-003-3.7b
|
{
"login": "sncix",
"id": 85628682,
"node_id": "MDQ6VXNlcjg1NjI4Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/85628682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sncix",
"html_url": "https://github.com/sncix",
"followers_url": "https://api.github.com/users/sncix/followers",
"following_url": "https://api.github.com/users/sncix/following{/other_user}",
"gists_url": "https://api.github.com/users/sncix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sncix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sncix/subscriptions",
"organizations_url": "https://api.github.com/users/sncix/orgs",
"repos_url": "https://api.github.com/users/sncix/repos",
"events_url": "https://api.github.com/users/sncix/events{/privacy}",
"received_events_url": "https://api.github.com/users/sncix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-07T02:01:17
| 2025-01-07T02:01:17
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/alea-institute/kl3m-003-3.7b
https://www.kl3m.ai/
KL3M is a family of language models claimed to be trained on clean, legally permissible data. It has obtained the [Fairly Trained L-Certification](https://www.fairlytrained.org/certifications). `kl3m-003-3.7b` is the latest available model in that family.
A similar model, `kl3m-004`, is expected to be released in Q1 2025.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8328/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5775
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5775/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5775/comments
|
https://api.github.com/repos/ollama/ollama/issues/5775/events
|
https://github.com/ollama/ollama/issues/5775
| 2,416,937,658
|
I_kwDOJ0Z1Ps6QD4q6
| 5,775
|
Assistant doesn't continue from its last message
|
{
"login": "yilmaz08",
"id": 84680978,
"node_id": "MDQ6VXNlcjg0NjgwOTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/84680978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yilmaz08",
"html_url": "https://github.com/yilmaz08",
"followers_url": "https://api.github.com/users/yilmaz08/followers",
"following_url": "https://api.github.com/users/yilmaz08/following{/other_user}",
"gists_url": "https://api.github.com/users/yilmaz08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yilmaz08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yilmaz08/subscriptions",
"organizations_url": "https://api.github.com/users/yilmaz08/orgs",
"repos_url": "https://api.github.com/users/yilmaz08/repos",
"events_url": "https://api.github.com/users/yilmaz08/events{/privacy}",
"received_events_url": "https://api.github.com/users/yilmaz08/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-07-18T16:59:37
| 2024-07-21T19:02:53
| 2024-07-20T03:19:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I love using llama3:8b with Open WebUI's text generation feature, and recently I've noticed that whatever I write there, llama3:8b just says random stuff.
After that I tried message editing in Open WebUI, and even when I edit the assistant message, the reply continues as if no assistant message was provided.
Finally, I tested the API directly with the same text and the same thing happened, so I'm posting this here.
Here is the body of my POST request to http://localhost:11434/api/chat/:
```
{
"model": "llama3",
"messages": [
{"role": "user", "content": "hi"},
{"role": "assistant", "content": "Hello this message is edited, "}
],
"stream": false
}
```
Response:
```
{
"model": "llama3",
"created_at": "2024-07-18T16:56:59.425466236Z",
"message": {
"role": "assistant",
"content": "Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?"
},
"done_reason": "stop",
"done": true,
"total_duration": 1085833912,
"load_duration": 14700744,
"prompt_eval_count": 11,
"prompt_eval_duration": 56151000,
"eval_count": 26,
"eval_duration": 882579000
}
```
I would normally expect the response to start with the "Hello this message is edited, " part I provided; instead, the last assistant message is ignored.
I'm not sure exactly why, but the same thing happens with the phi3 model too.
Has this feature been removed, or is it a bug on my side?
System:
OS: Arch Linux 6.9.9-arch1-1
GPU: NVIDIA 3060 Mobile
CPU: Intel i7-12700H
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.5
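For anyone trying to reproduce this outside a UI, the request above can be scripted. This is a minimal sketch, assuming a local Ollama server on the default port; the helper names (`build_prefill_request`, `send_chat`) are my own, not part of Ollama.

```python
import json
import urllib.request

def build_prefill_request(model, user_msg, assistant_prefix):
    """Build an /api/chat payload whose final message is a partial
    assistant turn, to test whether the model continues it."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": assistant_prefix},
        ],
        "stream": False,
    }

def send_chat(payload, host="http://localhost:11434"):
    """POST the payload to /api/chat and return the decoded JSON response."""
    req = urllib.request.Request(
        host + "/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_prefill_request("llama3", "hi", "Hello this message is edited, ")
# reply = send_chat(payload)  # requires a running Ollama server
# print(reply["message"]["content"])
```

If the prefill behavior works, the returned `message.content` should begin with the assistant prefix; in the report above it does not.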
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5775/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5775/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3164
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3164/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3164/comments
|
https://api.github.com/repos/ollama/ollama/issues/3164/events
|
https://github.com/ollama/ollama/issues/3164
| 2,188,106,862
|
I_kwDOJ0Z1Ps6Ca9xu
| 3,164
|
CUDA error: an illegal memory access was encountered
|
{
"login": "lizhichao999",
"id": 34128722,
"node_id": "MDQ6VXNlcjM0MTI4NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/34128722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lizhichao999",
"html_url": "https://github.com/lizhichao999",
"followers_url": "https://api.github.com/users/lizhichao999/followers",
"following_url": "https://api.github.com/users/lizhichao999/following{/other_user}",
"gists_url": "https://api.github.com/users/lizhichao999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lizhichao999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lizhichao999/subscriptions",
"organizations_url": "https://api.github.com/users/lizhichao999/orgs",
"repos_url": "https://api.github.com/users/lizhichao999/repos",
"events_url": "https://api.github.com/users/lizhichao999/events{/privacy}",
"received_events_url": "https://api.github.com/users/lizhichao999/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-03-15T09:41:36
| 2024-03-15T19:58:45
| 2024-03-15T19:58:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?



When I executed the command `ollama run llama2`, a CUDA error occurred: "an illegal memory access was encountered" (see the screenshots above).
system:Windows Server 2022
GPU: NVIDIA RTX A6000
nvidia driver version:31.0.15.5123
### What did you expect to see?
Operate normally: ollama run llama2
### Steps to reproduce
ollama run llama2
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.29
### GPU
Nvidia
### GPU info
RTX A6000
### CPU
Intel
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3164/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7544
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7544/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7544/comments
|
https://api.github.com/repos/ollama/ollama/issues/7544/events
|
https://github.com/ollama/ollama/issues/7544
| 2,640,217,485
|
I_kwDOJ0Z1Ps6dXoWN
| 7,544
|
Despite advertised, granite3-dense does not seem to support tools.
|
{
"login": "chhu",
"id": 208672,
"node_id": "MDQ6VXNlcjIwODY3Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/208672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chhu",
"html_url": "https://github.com/chhu",
"followers_url": "https://api.github.com/users/chhu/followers",
"following_url": "https://api.github.com/users/chhu/following{/other_user}",
"gists_url": "https://api.github.com/users/chhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chhu/subscriptions",
"organizations_url": "https://api.github.com/users/chhu/orgs",
"repos_url": "https://api.github.com/users/chhu/repos",
"events_url": "https://api.github.com/users/chhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/chhu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-11-07T08:09:24
| 2025-01-13T01:22:02
| 2025-01-13T01:22:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
[granite3-dense](https://ollama.com/library/granite3-dense)
I gave it bash as a tool, but it refuses to use it; other models work fine (qwen2.5 32b outshines all the others for shell use).
Tool setup and system prompt are here: https://github.com/chhu/ollash/blob/main/index.js
asterope:~ >ask List file contents of current folder
Querying granite3-dense:8b-instruct-q8_0...
```bash
ls
```
asterope:~ >
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.40
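For reference, the tool definition can also be sent straight to /api/chat to rule out the wrapper. A minimal sketch, assuming the standard `tools` payload shape for Ollama's chat endpoint; this `bash` tool schema is illustrative, not the exact one from the linked repo.

```python
def bash_tool_payload(model, prompt):
    """Build an /api/chat payload exposing a single 'bash' tool
    so the model can emit a tool call instead of plain text."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "bash",
                "description": "Run a bash command and return its output",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "command": {
                            "type": "string",
                            "description": "The command to run",
                        },
                    },
                    "required": ["command"],
                },
            },
        }],
        "stream": False,
    }

payload = bash_tool_payload(
    "granite3-dense:8b-instruct-q8_0",
    "List file contents of current folder",
)
```

A model that supports tools should answer with a `message.tool_calls` entry; the behavior reported above (a fenced ```bash``` block in plain text) suggests the call is never emitted.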
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7544/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3921
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3921/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3921/comments
|
https://api.github.com/repos/ollama/ollama/issues/3921/events
|
https://github.com/ollama/ollama/issues/3921
| 2,264,485,955
|
I_kwDOJ0Z1Ps6G-VBD
| 3,921
|
Copying quantized models doesn't work
|
{
"login": "saul-jb",
"id": 2025187,
"node_id": "MDQ6VXNlcjIwMjUxODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2025187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saul-jb",
"html_url": "https://github.com/saul-jb",
"followers_url": "https://api.github.com/users/saul-jb/followers",
"following_url": "https://api.github.com/users/saul-jb/following{/other_user}",
"gists_url": "https://api.github.com/users/saul-jb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saul-jb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saul-jb/subscriptions",
"organizations_url": "https://api.github.com/users/saul-jb/orgs",
"repos_url": "https://api.github.com/users/saul-jb/repos",
"events_url": "https://api.github.com/users/saul-jb/events{/privacy}",
"received_events_url": "https://api.github.com/users/saul-jb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-04-25T20:59:31
| 2024-05-14T03:00:15
| 2024-05-09T22:21:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've just built the latest version through docker (5f73c08729e97eb3f760633c6ffba4f34cfe5538) and am getting errors copying some models:
```
$ ollama cp llama3:8b-instruct-q5_K_M llama3-8b-1
Error: model "llama3:8b-instruct-q5_K_M" not found
$ ollama cp llama3 llama3-8b-1
Error: model "llama3" not found
$ ollama cp yarn-llama2:13b-128k-q5_K_M test
Error: model "yarn-llama2:13b-128k-q5_K_M" not found
```
I have these models installed:
```
$ ollama list
...
llama3:70b-instruct be39eb53a197 39 GB 46 hours ago
llama3:8b-instruct-q5_K_M fdc4ae3d5d42 5.7 GB 21 hours ago
yarn-llama2:13b-128k-q5_K_M 6c618202668d 9.2 GB 5 weeks ago
llava:latest 8dd30f6b0cb1 4.7 GB 16 hours ago
```
Other models seem to work:
```
$ ollama cp llava test
copied 'llava' to 'test'
$ ollama cp llava:latest test
copied 'llava:latest' to 'test'
```
I was hoping that #3713 would address it but it still remains an issue.
### OS
Linux
### GPU
AMD
### CPU
Intel
### Ollama version
0.1.32-42-g5f73c08-dirty
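Since `ollama cp` is a thin wrapper over the server's /api/copy endpoint, the failure can also be reproduced without the CLI. A minimal sketch, assuming a server on the default port; the helper names are mine.

```python
import json
import urllib.request

def build_copy_body(source, destination):
    """Request body for POST /api/copy, the endpoint behind `ollama cp`."""
    return {"source": source, "destination": destination}

def copy_model(source, destination, host="http://localhost:11434"):
    """POST the copy request; a 404 response corresponds to the
    'model not found' error shown above."""
    req = urllib.request.Request(
        host + "/api/copy",
        data=json.dumps(build_copy_body(source, destination)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# copy_model("llama3:8b-instruct-q5_K_M", "llama3-8b-1")  # needs a running server
```

Comparing the status for a quantized tag (e.g. `llama3:8b-instruct-q5_K_M`) against one that works (`llava:latest`) shows whether the bug is in the CLI or the server-side name lookup.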
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3921/timeline
| null |
completed
| false
|