| url (stringlengths 51-54) | repository_url (stringclasses 1 value) | labels_url (stringlengths 65-68) | comments_url (stringlengths 60-63) | events_url (stringlengths 58-61) | html_url (stringlengths 39-44) | id (int64 1.78B-2.82B) | node_id (stringlengths 18-19) | number (int64 1-8.69k) | title (stringlengths 1-382) | user (dict) | labels (listlengths 0-5) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (listlengths 0-2) | milestone (null) | comments (int64 0-323) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (stringclasses 4 values) | sub_issues_summary (dict) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (stringlengths 2-118k, nullable) | closed_by (dict) | reactions (dict) | timeline_url (stringlengths 60-63) | performed_via_github_app (null) | state_reason (stringclasses 4 values) | is_pull_request (bool, 2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/1135
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1135/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1135/comments
|
https://api.github.com/repos/ollama/ollama/issues/1135/events
|
https://github.com/ollama/ollama/issues/1135
| 1,994,087,136
|
I_kwDOJ0Z1Ps5221rg
| 1,135
|
json response stalls?
|
{
"login": "hemanth",
"id": 18315,
"node_id": "MDQ6VXNlcjE4MzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/18315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemanth",
"html_url": "https://github.com/hemanth",
"followers_url": "https://api.github.com/users/hemanth/followers",
"following_url": "https://api.github.com/users/hemanth/following{/other_user}",
"gists_url": "https://api.github.com/users/hemanth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hemanth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hemanth/subscriptions",
"organizations_url": "https://api.github.com/users/hemanth/orgs",
"repos_url": "https://api.github.com/users/hemanth/repos",
"events_url": "https://api.github.com/users/hemanth/events{/privacy}",
"received_events_url": "https://api.github.com/users/hemanth/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 11
| 2023-11-15T05:42:31
| 2024-03-11T18:42:47
| 2024-03-11T18:42:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
/tmp
❯ ollama --version
ollama version 0.1.9
```
https://github.com/jmorganca/ollama/assets/18315/d0d8ecb1-142f-464c-bb49-8d147eb3d322
Sometimes we see an empty response:
```json
{"model":"llama2","created_at":"2023-11-15T05:46:21.685664Z","response":"{} ","done":true,"context":[29961,25580,29962,3532,14816,29903,29958,5299,829,14816,29903,6778,13,13,29911,514,592,263,270,328,2212,446,518,29914,25580,29962,6571,29871],"total_duration":216306917,"load_duration":982333,"prompt_eval_count":1,"eval_count":3,"eval_duration":192199000}
```
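Empty completions like the one above can be detected client-side by accumulating the streamed chunks. A minimal sketch, using a hypothetical helper and assuming the `/api/generate` NDJSON stream format shown above:

```python
import json

def is_empty_generation(ndjson_lines):
    """Accumulate streamed /api/generate chunks and report whether the
    completed generation contains no non-whitespace text."""
    parts = []
    done = False
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        done = done or chunk.get("done", False)
    return done and not "".join(parts).strip()
```

The single-chunk reply quoted above would not be flagged, since its response text is `"{} "` rather than pure whitespace.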
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1135/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/120
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/120/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/120/comments
|
https://api.github.com/repos/ollama/ollama/issues/120/events
|
https://github.com/ollama/ollama/issues/120
| 1,811,269,814
|
I_kwDOJ0Z1Ps5r9ci2
| 120
|
Are the models quantized?
|
{
"login": "chsasank",
"id": 9305875,
"node_id": "MDQ6VXNlcjkzMDU4NzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9305875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chsasank",
"html_url": "https://github.com/chsasank",
"followers_url": "https://api.github.com/users/chsasank/followers",
"following_url": "https://api.github.com/users/chsasank/following{/other_user}",
"gists_url": "https://api.github.com/users/chsasank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chsasank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chsasank/subscriptions",
"organizations_url": "https://api.github.com/users/chsasank/orgs",
"repos_url": "https://api.github.com/users/chsasank/repos",
"events_url": "https://api.github.com/users/chsasank/events{/privacy}",
"received_events_url": "https://api.github.com/users/chsasank/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-07-19T06:44:04
| 2023-07-19T06:50:39
| 2023-07-19T06:50:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can you give more details on the quantization level the models run at, and whether it can be changed?
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/120/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7846
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7846/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7846/comments
|
https://api.github.com/repos/ollama/ollama/issues/7846/events
|
https://github.com/ollama/ollama/issues/7846
| 2,695,941,621
|
I_kwDOJ0Z1Ps6gsM31
| 7,846
|
Verify before deleting a model
|
{
"login": "liel-almog",
"id": 70017134,
"node_id": "MDQ6VXNlcjcwMDE3MTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/70017134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liel-almog",
"html_url": "https://github.com/liel-almog",
"followers_url": "https://api.github.com/users/liel-almog/followers",
"following_url": "https://api.github.com/users/liel-almog/following{/other_user}",
"gists_url": "https://api.github.com/users/liel-almog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liel-almog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liel-almog/subscriptions",
"organizations_url": "https://api.github.com/users/liel-almog/orgs",
"repos_url": "https://api.github.com/users/liel-almog/repos",
"events_url": "https://api.github.com/users/liel-almog/events{/privacy}",
"received_events_url": "https://api.github.com/users/liel-almog/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-11-26T20:05:30
| 2024-11-29T11:34:28
| 2024-11-29T11:34:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi guys,
I've just accidentally deleted a model, which is very annoying because models are very large and take some time to download.
I think it would be very helpful to add a confirmation (y/N) before deleting a model.
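The requested confirmation could follow the usual y/N convention, defaulting to No. A minimal Python sketch (hypothetical helpers only, to illustrate the behavior; Ollama's CLI itself is written in Go):

```python
def confirm(prompt, reader=input):
    """Ask a yes/no question; only an explicit 'y' or 'yes' proceeds."""
    answer = reader(f"{prompt} (y/N) ").strip().lower()
    return answer in ("y", "yes")

def delete_model(name, remove, reader=input):
    """Remove a model only after the user confirms; returns True if removed."""
    if confirm(f"Delete model '{name}'? This cannot be undone.", reader):
        remove(name)
        return True
    return False
```

A bare Enter (the default) leaves the model in place, matching the (y/N) capitalization.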
|
{
"login": "liel-almog",
"id": 70017134,
"node_id": "MDQ6VXNlcjcwMDE3MTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/70017134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liel-almog",
"html_url": "https://github.com/liel-almog",
"followers_url": "https://api.github.com/users/liel-almog/followers",
"following_url": "https://api.github.com/users/liel-almog/following{/other_user}",
"gists_url": "https://api.github.com/users/liel-almog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liel-almog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liel-almog/subscriptions",
"organizations_url": "https://api.github.com/users/liel-almog/orgs",
"repos_url": "https://api.github.com/users/liel-almog/repos",
"events_url": "https://api.github.com/users/liel-almog/events{/privacy}",
"received_events_url": "https://api.github.com/users/liel-almog/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7846/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/8157
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8157/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8157/comments
|
https://api.github.com/repos/ollama/ollama/issues/8157/events
|
https://github.com/ollama/ollama/issues/8157
| 2,748,213,456
|
I_kwDOJ0Z1Ps6jzmjQ
| 8,157
|
falcon3:10b gives empty response sometimes
|
{
"login": "i0ntempest",
"id": 16017904,
"node_id": "MDQ6VXNlcjE2MDE3OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/16017904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i0ntempest",
"html_url": "https://github.com/i0ntempest",
"followers_url": "https://api.github.com/users/i0ntempest/followers",
"following_url": "https://api.github.com/users/i0ntempest/following{/other_user}",
"gists_url": "https://api.github.com/users/i0ntempest/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i0ntempest/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i0ntempest/subscriptions",
"organizations_url": "https://api.github.com/users/i0ntempest/orgs",
"repos_url": "https://api.github.com/users/i0ntempest/repos",
"events_url": "https://api.github.com/users/i0ntempest/events{/privacy}",
"received_events_url": "https://api.github.com/users/i0ntempest/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-12-18T16:06:07
| 2024-12-18T16:06:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama 0.5.4 with falcon3:10b randomly gives an empty response. The question I asked was "Why is e^(B_t-t/2) a martingale? Specifically, why is it finite?". Initial debugging, with help from the Ollama Discord, points to structured-output problems.
Server log: [com.i0ntpst.ollama.log](https://github.com/user-attachments/files/18185728/com.i0ntpst.ollama.log)
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8157/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8157/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7652
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7652/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7652/comments
|
https://api.github.com/repos/ollama/ollama/issues/7652/events
|
https://github.com/ollama/ollama/issues/7652
| 2,655,880,046
|
I_kwDOJ0Z1Ps6eTYNu
| 7,652
|
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen2.5/manifests/25b": EOF
|
{
"login": "Frankcsc",
"id": 45666537,
"node_id": "MDQ6VXNlcjQ1NjY2NTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/45666537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Frankcsc",
"html_url": "https://github.com/Frankcsc",
"followers_url": "https://api.github.com/users/Frankcsc/followers",
"following_url": "https://api.github.com/users/Frankcsc/following{/other_user}",
"gists_url": "https://api.github.com/users/Frankcsc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Frankcsc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Frankcsc/subscriptions",
"organizations_url": "https://api.github.com/users/Frankcsc/orgs",
"repos_url": "https://api.github.com/users/Frankcsc/repos",
"events_url": "https://api.github.com/users/Frankcsc/events{/privacy}",
"received_events_url": "https://api.github.com/users/Frankcsc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-11-13T15:29:31
| 2024-11-14T01:49:26
| 2024-11-14T01:49:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
While downloading the qwen2.5:32b model, at about 30% it suddenly reported this error. After I restarted the computer and checked the network, the error still appeared.
`ollama run qwen2.5:32b`
`pulling manifest`
`Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen2.5/manifests/32b": EOF`
OS: MacOS
GPU: M4
CPU: M4
Ollama version: 0.4.1
```
time=2024-11-13T09:57:03.033+08:00 level=INFO source=images.go:1022 msg="request failed: Get \"https://registry.ollama.ai/v2/library/qwen2.5/manifests/32b\": EOF"
[GIN] 2024/11/13 - 09:57:03 | 200 | 299.253291ms | 127.0.0.1 | POST "/api/pull"
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
update check failed - TypeError: fetch failed
[GIN] 2024/11/13 - 23:06:30 | 200 | 30.875µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/13 - 23:06:30 | 404 | 1.31325ms | 127.0.0.1 | POST "/api/show"
time=2024-11-13T23:06:30.752+08:00 level=INFO source=images.go:1022 msg="request failed: Get \"https://registry.ollama.ai/v2/library/qwen2.5/manifests/32b\": EOF"
[GIN] 2024/11/13 - 23:06:30 | 200 | 716.532875ms | 127.0.0.1 | POST "/api/pull"
[GIN] 2024/11/13 - 23:12:10 | 200 | 45.959µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/13 - 23:12:10 | 404 | 1.077791ms | 127.0.0.1 | POST "/api/show"
time=2024-11-13T23:12:10.423+08:00 level=INFO source=images.go:1022 msg="request failed: Get \"https://registry.ollama.ai/v2/library/qwen2.5/manifests/32b\": EOF"
[GIN] 2024/11/13 - 23:12:10 | 200 | 318.063792ms | 127.0.0.1 | POST "/api/pull"
[GIN] 2024/11/13 - 23:13:07 | 200 | 45.833µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/13 - 23:13:07 | 200 | 2.437833ms | 127.0.0.1 | GET "/api/tags"
```
Thanks.
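Transient EOFs on manifest pulls sometimes clear up on retry. A small retry-with-backoff sketch (a generic helper, not part of Ollama, shown only to illustrate the pattern):

```python
import time

def retry(fn, attempts=3, base_delay=0.0):
    """Call fn until it succeeds or attempts are exhausted,
    backing off exponentially between tries."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```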
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.4.1
|
{
"login": "Frankcsc",
"id": 45666537,
"node_id": "MDQ6VXNlcjQ1NjY2NTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/45666537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Frankcsc",
"html_url": "https://github.com/Frankcsc",
"followers_url": "https://api.github.com/users/Frankcsc/followers",
"following_url": "https://api.github.com/users/Frankcsc/following{/other_user}",
"gists_url": "https://api.github.com/users/Frankcsc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Frankcsc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Frankcsc/subscriptions",
"organizations_url": "https://api.github.com/users/Frankcsc/orgs",
"repos_url": "https://api.github.com/users/Frankcsc/repos",
"events_url": "https://api.github.com/users/Frankcsc/events{/privacy}",
"received_events_url": "https://api.github.com/users/Frankcsc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7652/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7297
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7297/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7297/comments
|
https://api.github.com/repos/ollama/ollama/issues/7297/events
|
https://github.com/ollama/ollama/issues/7297
| 2,602,693,625
|
I_kwDOJ0Z1Ps6bIfP5
| 7,297
|
Better docker support
|
{
"login": "jhgoodwin",
"id": 2154552,
"node_id": "MDQ6VXNlcjIxNTQ1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2154552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jhgoodwin",
"html_url": "https://github.com/jhgoodwin",
"followers_url": "https://api.github.com/users/jhgoodwin/followers",
"following_url": "https://api.github.com/users/jhgoodwin/following{/other_user}",
"gists_url": "https://api.github.com/users/jhgoodwin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jhgoodwin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jhgoodwin/subscriptions",
"organizations_url": "https://api.github.com/users/jhgoodwin/orgs",
"repos_url": "https://api.github.com/users/jhgoodwin/repos",
"events_url": "https://api.github.com/users/jhgoodwin/events{/privacy}",
"received_events_url": "https://api.github.com/users/jhgoodwin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-10-21T14:13:23
| 2024-10-22T13:55:58
| 2024-10-22T13:55:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I noticed some tools suggest using a special hostname (host.docker.internal) to map back to the Docker host. This works, _IF_ you set up the host to allow connections from addresses other than localhost. This generally works fine, but on Linux, upgrading tends to overwrite my OLLAMA_HOST environment variable in the service config.
Am I doing something wrong, or is this a feature request to make this less painful?
Thanks for making ollama.
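One upgrade-safe way to keep OLLAMA_HOST on a Linux systemd install is a drop-in override, which package upgrades leave alone. A sketch, assuming the service is named ollama.service as in the standard Linux install:

```shell
# Create a drop-in override instead of editing the unit file itself
sudo systemctl edit ollama.service
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
# Then reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

The override lands in /etc/systemd/system/ollama.service.d/, separate from the unit file the installer rewrites.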
|
{
"login": "jhgoodwin",
"id": 2154552,
"node_id": "MDQ6VXNlcjIxNTQ1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2154552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jhgoodwin",
"html_url": "https://github.com/jhgoodwin",
"followers_url": "https://api.github.com/users/jhgoodwin/followers",
"following_url": "https://api.github.com/users/jhgoodwin/following{/other_user}",
"gists_url": "https://api.github.com/users/jhgoodwin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jhgoodwin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jhgoodwin/subscriptions",
"organizations_url": "https://api.github.com/users/jhgoodwin/orgs",
"repos_url": "https://api.github.com/users/jhgoodwin/repos",
"events_url": "https://api.github.com/users/jhgoodwin/events{/privacy}",
"received_events_url": "https://api.github.com/users/jhgoodwin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7297/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7512
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7512/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7512/comments
|
https://api.github.com/repos/ollama/ollama/issues/7512/events
|
https://github.com/ollama/ollama/issues/7512
| 2,635,828,815
|
I_kwDOJ0Z1Ps6dG45P
| 7,512
|
Snap 0.3.13 missing libcudart.so.12
|
{
"login": "edmcman",
"id": 1017189,
"node_id": "MDQ6VXNlcjEwMTcxODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1017189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edmcman",
"html_url": "https://github.com/edmcman",
"followers_url": "https://api.github.com/users/edmcman/followers",
"following_url": "https://api.github.com/users/edmcman/following{/other_user}",
"gists_url": "https://api.github.com/users/edmcman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edmcman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edmcman/subscriptions",
"organizations_url": "https://api.github.com/users/edmcman/orgs",
"repos_url": "https://api.github.com/users/edmcman/repos",
"events_url": "https://api.github.com/users/edmcman/events{/privacy}",
"received_events_url": "https://api.github.com/users/edmcman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6678628138,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjhPHKg",
"url": "https://api.github.com/repos/ollama/ollama/labels/install",
"name": "install",
"color": "E0B88D",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 5
| 2024-11-05T15:46:15
| 2024-11-05T18:14:25
| 2024-11-05T16:42:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am trying ollama for the first time. Since there was a snap available, I tried that first. I can download llama 3.2, but when I attempt to run it, I get:
```
(venv) ed@banana ~/P/re-copilot (dev)> ollama run llama3.2
Error: llama runner process has terminated: exit status 127
```
Here are the logs from `snap logs ollama`:
```
2024-11-05T10:38:44-05:00 ollama.listener[631587]: time=2024-11-05T10:38:44.915-05:00 level=INFO source=server.go:399 msg="starting llama server" cmd="/tmp/ollama2989640063/runners/cuda_v12/ollama_llama_server --model /var/snap/ollama/common/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 29 --parallel 4 --port 34001"
2024-11-05T10:38:44-05:00 ollama.listener[631587]: time=2024-11-05T10:38:44.915-05:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
2024-11-05T10:38:44-05:00 ollama.listener[631587]: time=2024-11-05T10:38:44.915-05:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
2024-11-05T10:38:44-05:00 ollama.listener[631587]: time=2024-11-05T10:38:44.916-05:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
2024-11-05T10:38:44-05:00 ollama.listener[631587]: /tmp/ollama2989640063/runners/cuda_v12/ollama_llama_server: error while loading shared libraries: libcudart.so.12: cannot open shared object file: No such file or directory
2024-11-05T10:38:45-05:00 ollama.listener[631587]: time=2024-11-05T10:38:45.166-05:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: exit status 127"
2024-11-05T10:38:45-05:00 ollama.listener[631587]: [GIN] 2024/11/05 - 10:38:45 | 500 | 400.927983ms | 127.0.0.1 | POST "/api/generate"
2024-11-05T10:38:50-05:00 ollama.listener[631587]: time=2024-11-05T10:38:50.312-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.145569241 model=/var/snap/ollama/common/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
2024-11-05T10:38:50-05:00 ollama.listener[631587]: time=2024-11-05T10:38:50.561-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.39517205 model=/var/snap/ollama/common/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
2024-11-05T10:38:50-05:00 ollama.listener[631587]: time=2024-11-05T10:38:50.811-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.644657486 model=/var/snap/ollama/common/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
```
It seems like the problem is that libcudart.so.12 is missing. Should it be getting that from the host, or the snap?
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.13 snap
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7512/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5772
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5772/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5772/comments
|
https://api.github.com/repos/ollama/ollama/issues/5772/events
|
https://github.com/ollama/ollama/pull/5772
| 2,416,732,571
|
PR_kwDOJ0Z1Ps51zBQ1
| 5,772
|
Add Verbis project to README.md
|
{
"login": "alexmavr",
"id": 680441,
"node_id": "MDQ6VXNlcjY4MDQ0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/680441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexmavr",
"html_url": "https://github.com/alexmavr",
"followers_url": "https://api.github.com/users/alexmavr/followers",
"following_url": "https://api.github.com/users/alexmavr/following{/other_user}",
"gists_url": "https://api.github.com/users/alexmavr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexmavr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexmavr/subscriptions",
"organizations_url": "https://api.github.com/users/alexmavr/orgs",
"repos_url": "https://api.github.com/users/alexmavr/repos",
"events_url": "https://api.github.com/users/alexmavr/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexmavr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-18T15:27:49
| 2024-09-06T22:14:21
| 2024-09-06T22:14:21
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5772",
"html_url": "https://github.com/ollama/ollama/pull/5772",
"diff_url": "https://github.com/ollama/ollama/pull/5772.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5772.patch",
"merged_at": null
}
|
Verbis is a new community project powered by Ollama.
Verbis securely connects to your SaaS applications (GDrive, Outlook, Slack, etc.), indexes all data locally on your system, and leverages our selection of models. This means you can enhance your productivity without ever sending your sensitive data to third parties.
### Why Verbis?
- Security First: All data is indexed and processed locally.
- Open Source: Transparent, community-driven development.
- Productivity Boost: Leverage state-of-the-art models without compromising privacy.
|
{
"login": "alexmavr",
"id": 680441,
"node_id": "MDQ6VXNlcjY4MDQ0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/680441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexmavr",
"html_url": "https://github.com/alexmavr",
"followers_url": "https://api.github.com/users/alexmavr/followers",
"following_url": "https://api.github.com/users/alexmavr/following{/other_user}",
"gists_url": "https://api.github.com/users/alexmavr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexmavr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexmavr/subscriptions",
"organizations_url": "https://api.github.com/users/alexmavr/orgs",
"repos_url": "https://api.github.com/users/alexmavr/repos",
"events_url": "https://api.github.com/users/alexmavr/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexmavr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5772/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4031
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4031/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4031/comments
|
https://api.github.com/repos/ollama/ollama/issues/4031/events
|
https://github.com/ollama/ollama/pull/4031
| 2,269,392,759
|
PR_kwDOJ0Z1Ps5uCGD3
| 4,031
|
Fix/issue 3736: When runners are closing or expiring. Scheduler is getting dirty VRAM size readings.
|
{
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.com/users/MarkWard0110/followers",
"following_url": "https://api.github.com/users/MarkWard0110/following{/other_user}",
"gists_url": "https://api.github.com/users/MarkWard0110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarkWard0110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarkWard0110/subscriptions",
"organizations_url": "https://api.github.com/users/MarkWard0110/orgs",
"repos_url": "https://api.github.com/users/MarkWard0110/repos",
"events_url": "https://api.github.com/users/MarkWard0110/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarkWard0110/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-04-29T15:51:52
| 2024-05-01T19:13:26
| 2024-05-01T19:13:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4031",
"html_url": "https://github.com/ollama/ollama/pull/4031",
"diff_url": "https://github.com/ollama/ollama/pull/4031.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4031.patch",
"merged_at": "2024-05-01T19:13:26"
}
|
Issue: When the Ollama `Scheduler` requests a runner to stop (kill), the `Scheduler` reads the available VRAM and gets a size that includes the terminating runner. This results in offloading to the CPU and slower execution. Each time a new model is swapped in, the new runner reads the previous runner's memory allocation. This affects the new runner's VRAM allocation estimate.
Fix: When stopping a runner, wait for the process to exit so that the memory is free before `Scheduler` checks the amount of VRAM available.
Issue: After a runner finishes a request, an expiration timer is assigned based on the session duration. Subsequent requests renew the expiration timer after each request finishes. If a request takes too long and the timer fires, the runner is scheduled to be unloaded. Concurrently, a new pending request may then get an incorrect measure of VRAM, resulting in offloading to the CPU and slower execution. Runners are expiring in the middle of heavy use, which results in the same model closing and reloading; the reload gets a dirty VRAM measurement because the previous runner is not fully closed before the new runner is created. The root cause is a race between the pending and completed Go routines: the pending routine continues on an unloaded event, which can be "any" unloaded event.
Fix: Clear the timer so it does not fire when reusing runners. Only assign the timer when the runner has finished. Clear any assigned timers when closing runners. An active runner should not have an expiration timer.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4031/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4031/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4846
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4846/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4846/comments
|
https://api.github.com/repos/ollama/ollama/issues/4846/events
|
https://github.com/ollama/ollama/issues/4846
| 2,337,346,223
|
I_kwDOJ0Z1Ps6LURKv
| 4,846
|
Performance degrades over time when running in Docker with Nvidia GPU
|
{
"login": "nycameraguy",
"id": 70789698,
"node_id": "MDQ6VXNlcjcwNzg5Njk4",
"avatar_url": "https://avatars.githubusercontent.com/u/70789698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nycameraguy",
"html_url": "https://github.com/nycameraguy",
"followers_url": "https://api.github.com/users/nycameraguy/followers",
"following_url": "https://api.github.com/users/nycameraguy/following{/other_user}",
"gists_url": "https://api.github.com/users/nycameraguy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nycameraguy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nycameraguy/subscriptions",
"organizations_url": "https://api.github.com/users/nycameraguy/orgs",
"repos_url": "https://api.github.com/users/nycameraguy/repos",
"events_url": "https://api.github.com/users/nycameraguy/events{/privacy}",
"received_events_url": "https://api.github.com/users/nycameraguy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
open
| false
| null |
[] | null | 7
| 2024-06-06T05:32:57
| 2025-01-21T09:43:13
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am working in a multi-GPU environment. I set up multiple Docker containers, assigning one GPU to each, so I can process my workload in parallel.
Here is the command I use to set up the container:
`sudo docker run -d --gpus device=GPU-46b6fece-aec9-853f-0956-2d43359e28e3 -v ollama:/root/.ollama -p 11435:11434 --name ollama0 ollama/ollama`
I change the port for each container and use a list of clients to split the workload.
I noticed the performance of the Ollama Docker container degrades significantly over time. I am processing a workload of over 134,000 queries with llama3:instruct. In the beginning, the processing speed is about 1 to 2 items/s; after processing a few thousand queries, it slows down to roughly 10 to 12 s/item, and it gets worse over time.
If I remove and reconfigure the container, the performance returns to normal.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.38
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4846/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/626
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/626/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/626/comments
|
https://api.github.com/repos/ollama/ollama/issues/626/events
|
https://github.com/ollama/ollama/pull/626
| 1,916,482,272
|
PR_kwDOJ0Z1Ps5bY5NB
| 626
|
parallel chunked downloads
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-09-27T23:32:50
| 2023-10-06T20:01:30
| 2023-10-06T20:01:29
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/626",
"html_url": "https://github.com/ollama/ollama/pull/626",
"diff_url": "https://github.com/ollama/ollama/pull/626.diff",
"patch_url": "https://github.com/ollama/ollama/pull/626.patch",
"merged_at": "2023-10-06T20:01:29"
}
|
this change chunks the download into smaller parts that can be downloaded at the same time. this should result in a bump in download speeds
TODO:
- [x] handle concurrent requests for the same blobs
- [x] handle resuming interrupted downloads
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/626/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/819
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/819/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/819/comments
|
https://api.github.com/repos/ollama/ollama/issues/819/events
|
https://github.com/ollama/ollama/pull/819
| 1,947,618,249
|
PR_kwDOJ0Z1Ps5dBymf
| 819
|
run linux server if not started
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-17T14:50:12
| 2023-12-06T23:54:37
| 2023-11-24T19:12:21
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/819",
"html_url": "https://github.com/ollama/ollama/pull/819",
"diff_url": "https://github.com/ollama/ollama/pull/819.diff",
"patch_url": "https://github.com/ollama/ollama/pull/819.patch",
"merged_at": null
}
|
Start the ollama server automatically if it isn't running as a system service
Split from #772
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/819/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7015
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7015/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7015/comments
|
https://api.github.com/repos/ollama/ollama/issues/7015/events
|
https://github.com/ollama/ollama/issues/7015
| 2,553,973,590
|
I_kwDOJ0Z1Ps6YOotW
| 7,015
|
Error Running Ollama After Installation
|
{
"login": "cksdxz1007",
"id": 21142070,
"node_id": "MDQ6VXNlcjIxMTQyMDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/21142070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cksdxz1007",
"html_url": "https://github.com/cksdxz1007",
"followers_url": "https://api.github.com/users/cksdxz1007/followers",
"following_url": "https://api.github.com/users/cksdxz1007/following{/other_user}",
"gists_url": "https://api.github.com/users/cksdxz1007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cksdxz1007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cksdxz1007/subscriptions",
"organizations_url": "https://api.github.com/users/cksdxz1007/orgs",
"repos_url": "https://api.github.com/users/cksdxz1007/repos",
"events_url": "https://api.github.com/users/cksdxz1007/events{/privacy}",
"received_events_url": "https://api.github.com/users/cksdxz1007/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677279472,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A",
"url": "https://api.github.com/repos/ollama/ollama/labels/macos",
"name": "macos",
"color": "E2DBC0",
"default": false,
"description": ""
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 11
| 2024-09-28T03:00:35
| 2024-09-30T16:07:48
| 2024-09-30T16:07:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After installing Ollama and attempting to run it, an error occurs. Upon checking the log file `~/.ollama/logs/server.log`, the following content is found:
```
Couldn't find '/Users/cynningli/.ollama/id_ed25519'. Generating new private key.
Your new public key is:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILE1HWt7ruohIwTV4yR9hiBi45VRf3Cs64ohZxX1ijUK
2024/09/28 10:45:25 routes.go:1153: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/cynningli/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: http_proxy: https_proxy: no_proxy:]"
time=2024-09-28T10:45:25.331+08:00 level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-09-28T10:45:25.331+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-28T10:45:25.332+08:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
time=2024-09-28T10:45:25.334+08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/var/folders/n1/gc6_9bqx6nv7sglk0b1q38r00000gn/T/ollama711942109/runners
time=2024-09-28T10:45:25.376+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners=[metal]
time=2024-09-28T10:45:25.466+08:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="10.7 GiB" available="10.7 GiB"
/Users/cynningli/.ollama/logs/server.log (END)
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.12
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7015/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4557
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4557/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4557/comments
|
https://api.github.com/repos/ollama/ollama/issues/4557/events
|
https://github.com/ollama/ollama/issues/4557
| 2,308,093,474
|
I_kwDOJ0Z1Ps6JkrYi
| 4,557
|
Please add PULI models
|
{
"login": "zorgoz",
"id": 1569170,
"node_id": "MDQ6VXNlcjE1NjkxNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1569170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zorgoz",
"html_url": "https://github.com/zorgoz",
"followers_url": "https://api.github.com/users/zorgoz/followers",
"following_url": "https://api.github.com/users/zorgoz/following{/other_user}",
"gists_url": "https://api.github.com/users/zorgoz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zorgoz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zorgoz/subscriptions",
"organizations_url": "https://api.github.com/users/zorgoz/orgs",
"repos_url": "https://api.github.com/users/zorgoz/repos",
"events_url": "https://api.github.com/users/zorgoz/events{/privacy}",
"received_events_url": "https://api.github.com/users/zorgoz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 1
| 2024-05-21T11:40:55
| 2024-05-21T13:28:11
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/NYTK/PULI-LlumiX-32K
https://huggingface.co/NYTK/PULI-GPT-3SX
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4557/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5505
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5505/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5505/comments
|
https://api.github.com/repos/ollama/ollama/issues/5505/events
|
https://github.com/ollama/ollama/pull/5505
| 2,393,198,149
|
PR_kwDOJ0Z1Ps50kdbr
| 5,505
|
Fix cmake build to install dependent dylibs
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-05T22:25:35
| 2024-07-05T23:07:03
| 2024-07-05T23:07:01
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5505",
"html_url": "https://github.com/ollama/ollama/pull/5505",
"diff_url": "https://github.com/ollama/ollama/pull/5505.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5505.patch",
"merged_at": "2024-07-05T23:07:01"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5505/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6517
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6517/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6517/comments
|
https://api.github.com/repos/ollama/ollama/issues/6517/events
|
https://github.com/ollama/ollama/issues/6517
| 2,486,945,210
|
I_kwDOJ0Z1Ps6UO8W6
| 6,517
|
Is Fine-Tuning Supported in Ollama?
|
{
"login": "parthipan76",
"id": 96368172,
"node_id": "U_kgDOBb52LA",
"avatar_url": "https://avatars.githubusercontent.com/u/96368172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parthipan76",
"html_url": "https://github.com/parthipan76",
"followers_url": "https://api.github.com/users/parthipan76/followers",
"following_url": "https://api.github.com/users/parthipan76/following{/other_user}",
"gists_url": "https://api.github.com/users/parthipan76/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parthipan76/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parthipan76/subscriptions",
"organizations_url": "https://api.github.com/users/parthipan76/orgs",
"repos_url": "https://api.github.com/users/parthipan76/repos",
"events_url": "https://api.github.com/users/parthipan76/events{/privacy}",
"received_events_url": "https://api.github.com/users/parthipan76/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-08-26T13:57:08
| 2024-08-26T20:09:54
| 2024-08-26T20:09:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi everyone,
I’ve been exploring Ollama for working with LLaMA models, and I ran into an issue where the `fine-tune` command is unrecognized. Specifically, when I try to fine-tune a model using the command:
`ollama fine-tune llama3.1:8b -f training_data.jsonl`
I receive the error: `unknown command "fine-tune" for "ollama"`.
Is fine-tuning supported in Ollama? If not, are there any plans to add this feature, or are there alternative methods to fine-tune models within Ollama?
Thanks in advance!
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
0.3.6
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6517/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/309
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/309/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/309/comments
|
https://api.github.com/repos/ollama/ollama/issues/309/events
|
https://github.com/ollama/ollama/pull/309
| 1,842,036,564
|
PR_kwDOJ0Z1Ps5XeQgr
| 309
|
Adds the ability to specify OLLAMA_HOST env var
|
{
"login": "buzzert",
"id": 718594,
"node_id": "MDQ6VXNlcjcxODU5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/718594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buzzert",
"html_url": "https://github.com/buzzert",
"followers_url": "https://api.github.com/users/buzzert/followers",
"following_url": "https://api.github.com/users/buzzert/following{/other_user}",
"gists_url": "https://api.github.com/users/buzzert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buzzert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buzzert/subscriptions",
"organizations_url": "https://api.github.com/users/buzzert/orgs",
"repos_url": "https://api.github.com/users/buzzert/repos",
"events_url": "https://api.github.com/users/buzzert/events{/privacy}",
"received_events_url": "https://api.github.com/users/buzzert/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-08-08T20:21:06
| 2023-08-08T20:22:46
| 2023-08-08T20:22:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/309",
"html_url": "https://github.com/ollama/ollama/pull/309",
"diff_url": "https://github.com/ollama/ollama/pull/309.diff",
"patch_url": "https://github.com/ollama/ollama/pull/309.patch",
"merged_at": null
}
|
For users who run the Ollama server on a host other than `localhost`, add the ability to specify OLLAMA_HOST as an environment variable.
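A usage sketch, under the assumption that the variable holds a plain host:port value (the hostname and port below are example values, not taken from the PR):

```shell
# Point the ollama CLI at a server on another machine instead of localhost.
# "gpu-box" and 11434 are illustrative values.
export OLLAMA_HOST=gpu-box:11434
ollama list   # subsequent CLI calls now target gpu-box:11434
```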
|
{
"login": "buzzert",
"id": 718594,
"node_id": "MDQ6VXNlcjcxODU5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/718594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buzzert",
"html_url": "https://github.com/buzzert",
"followers_url": "https://api.github.com/users/buzzert/followers",
"following_url": "https://api.github.com/users/buzzert/following{/other_user}",
"gists_url": "https://api.github.com/users/buzzert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buzzert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buzzert/subscriptions",
"organizations_url": "https://api.github.com/users/buzzert/orgs",
"repos_url": "https://api.github.com/users/buzzert/repos",
"events_url": "https://api.github.com/users/buzzert/events{/privacy}",
"received_events_url": "https://api.github.com/users/buzzert/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/309/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1211
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1211/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1211/comments
|
https://api.github.com/repos/ollama/ollama/issues/1211/events
|
https://github.com/ollama/ollama/pull/1211
| 2,002,948,849
|
PR_kwDOJ0Z1Ps5f8zwT
| 1,211
|
fix: allow specifying relative files in modelfile
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-20T20:24:21
| 2023-11-20T21:43:49
| 2023-11-20T21:43:48
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1211",
"html_url": "https://github.com/ollama/ollama/pull/1211",
"diff_url": "https://github.com/ollama/ollama/pull/1211.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1211.patch",
"merged_at": "2023-11-20T21:43:48"
}
|
Small regression here from the remote-models change. Previously you could specify files relative to a Modelfile without their full path, which is my normal workflow. This change restores that behaviour to match v0.1.9.
Example modelfile:
```
FROM nous-capybara-34b.Q4_0.gguf
TEMPLATE "USER: { .Prompt } ASSISTANT: "
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1211/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8660
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8660/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8660/comments
|
https://api.github.com/repos/ollama/ollama/issues/8660/events
|
https://github.com/ollama/ollama/issues/8660
| 2,818,256,756
|
I_kwDOJ0Z1Ps6n-y90
| 8,660
|
GPU Memory Not Released After Exiting deepseek-r1:32b Model
|
{
"login": "Sebjac06",
"id": 172889704,
"node_id": "U_kgDOCk4WaA",
"avatar_url": "https://avatars.githubusercontent.com/u/172889704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sebjac06",
"html_url": "https://github.com/Sebjac06",
"followers_url": "https://api.github.com/users/Sebjac06/followers",
"following_url": "https://api.github.com/users/Sebjac06/following{/other_user}",
"gists_url": "https://api.github.com/users/Sebjac06/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sebjac06/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sebjac06/subscriptions",
"organizations_url": "https://api.github.com/users/Sebjac06/orgs",
"repos_url": "https://api.github.com/users/Sebjac06/repos",
"events_url": "https://api.github.com/users/Sebjac06/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sebjac06/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-29T13:41:07
| 2025-01-29T13:51:19
| 2025-01-29T13:51:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
- Ollama Version: 0.5.7
- Model: deepseek-r1:32b
- GPU: NVIDIA RTX 3090 (24GB VRAM)
- OS: Windows 11
After running the `deepseek-r1:32b` model via `ollama run deepseek-r1:32b` and exiting with `/bye` in my terminal, the GPU's dedicated memory remains fully allocated at 24GB despite 0% GPU usage. This persists until I fully close the Ollama application, and happens again the next time I use the model.
Is this a bug? It seems strange that dedicated memory stays at the maximum even after closing the terminal.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.7
|
{
"login": "Sebjac06",
"id": 172889704,
"node_id": "U_kgDOCk4WaA",
"avatar_url": "https://avatars.githubusercontent.com/u/172889704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sebjac06",
"html_url": "https://github.com/Sebjac06",
"followers_url": "https://api.github.com/users/Sebjac06/followers",
"following_url": "https://api.github.com/users/Sebjac06/following{/other_user}",
"gists_url": "https://api.github.com/users/Sebjac06/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sebjac06/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sebjac06/subscriptions",
"organizations_url": "https://api.github.com/users/Sebjac06/orgs",
"repos_url": "https://api.github.com/users/Sebjac06/repos",
"events_url": "https://api.github.com/users/Sebjac06/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sebjac06/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8660/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7790
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7790/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7790/comments
|
https://api.github.com/repos/ollama/ollama/issues/7790/events
|
https://github.com/ollama/ollama/issues/7790
| 2,682,058,464
|
I_kwDOJ0Z1Ps6f3Pbg
| 7,790
|
Using C# and tooling, the tools are not consistently invoked by Ollama, resulting in confusing results and responses
|
{
"login": "jan-johansson-mr",
"id": 11595208,
"node_id": "MDQ6VXNlcjExNTk1MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/11595208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jan-johansson-mr",
"html_url": "https://github.com/jan-johansson-mr",
"followers_url": "https://api.github.com/users/jan-johansson-mr/followers",
"following_url": "https://api.github.com/users/jan-johansson-mr/following{/other_user}",
"gists_url": "https://api.github.com/users/jan-johansson-mr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jan-johansson-mr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jan-johansson-mr/subscriptions",
"organizations_url": "https://api.github.com/users/jan-johansson-mr/orgs",
"repos_url": "https://api.github.com/users/jan-johansson-mr/repos",
"events_url": "https://api.github.com/users/jan-johansson-mr/events{/privacy}",
"received_events_url": "https://api.github.com/users/jan-johansson-mr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-11-22T06:56:28
| 2024-11-23T05:47:51
| 2024-11-22T14:17:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've written a simple C# console application whose tools manage a set of slots. The scheme is simple: the set has 10 slots available, and slots can be allocated and released.
The tools are:
- CountAvailableSlots
- CountAllocatedSlots
- Capacity (always 10, the count includes both allocated and released slots)
- AllocateSlots
- ReleaseSlots
Here is the C# class managing the set of slots:
```
internal class SlotSetItem
{
private readonly int capacity = 10;
private int slots = 10;
[Description("This tool counts the number of total available slots.")]
public async Task<string> CountAvailableSlots()
{
await Task.Yield();
var color = Console.ForegroundColor;
Console.ForegroundColor = ConsoleColor.Yellow;
Console.WriteLine($"CountAvailableSlots: Returning the number of available slots {slots}");
Console.ForegroundColor = color;
return $"The number of available slots to allocate is {slots}";
}
[Description("This tool counts the number of total allocated slots.")]
public async Task<string> CountAllocatedSlots()
{
await Task.Yield();
var color = Console.ForegroundColor;
Console.ForegroundColor = ConsoleColor.Yellow;
Console.WriteLine($"CountAllocatedSlots: Returning the number of allocated slots {capacity - slots}");
Console.ForegroundColor = color;
return $"The number of allocated slots is {capacity - slots}";
}
[Description("This tool returns the capacity of slots.")]
public async Task<string> Capacity()
{
await Task.Yield();
var color = Console.ForegroundColor;
Console.ForegroundColor = ConsoleColor.Yellow;
Console.WriteLine($"Capacity: Returning capacity");
Console.ForegroundColor = color;
return $"The total capacity of slots, both allocated and released, are {capacity}";
}
[Description("This tool allocate slots")]
public async Task<string> AllocateSlots([Description("The number of slots to allocate")] string numberOfSlotsPrompt)
{
await Task.Yield();
var color = Console.ForegroundColor;
Console.ForegroundColor = ConsoleColor.Yellow;
Console.WriteLine($"""AllocateSlots: Allocating "{numberOfSlotsPrompt}" slots""");
var oldNumberOfSlots = slots;
var numberOfSlots = int.Parse(numberOfSlotsPrompt);
slots = int.Max(0, slots - numberOfSlots);
Console.WriteLine($"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future");
Console.ForegroundColor = color;
return $"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future";
}
[Description("This tool release slots.")]
public async Task<string> ReleaseSlots([Description("The number of slots to release")] string numberOfSlotsPrompt)
{
await Task.Yield();
var color = Console.ForegroundColor;
Console.ForegroundColor = ConsoleColor.Yellow;
Console.WriteLine($"""ReleaseSlots: Releasing "{numberOfSlotsPrompt}" slots""");
var numberOfSlots = int.Parse(numberOfSlotsPrompt);
slots = int.Min(10, slots + numberOfSlots);
Console.WriteLine($"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future");
Console.ForegroundColor = color;
return $"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future";
}
}
```
The code has been modified a lot, since I've had problems with Ollama's calling consistency; e.g. I originally had integer parameters (now they are strings) and integer returns (now they are strings). I've noticed better behavior when returning strings that describe the result instead of integers (with integer returns I saw random releases and so on).
Ollama's use of the tools is not consistent. The engine reports that it invokes a tool, but the invocation never actually happens (each tool prints a message when it really is invoked).
Here is a typical output when the call chain works, following my instructions:
Note: The number of allocated slots was 7 before I instructed Ollama to allocate 5 slots (and no more than 10 can be allocated)
```
Allocate 5 slots and then release 3 slots
AllocateSlots: Allocating "5" slots
The number of allocated slots are 10, leaving 0 slots to be allocated in the future
ReleaseSlots: Releasing "3" slots
The number of allocated slots are 7, leaving 3 slots to be allocated in the future
```
And here is the output when the call chain doesn't work as expected:
Note: The number of allocated slots was 7 before I instructed Ollama to allocate 5 slots (and no more than 10 can be allocated)
```
Allocate 5 slots and then release 3 slots
AllocateSlots: Allocating "5" slots
The number of allocated slots are 10, leaving 0 slots to be allocated in the future
{"call_id":"4a90b1d3","name":"ReleaseSlots","arguments":{"numberOfSlotsPrompt":"3"}}
```
As you can see in the last output, Ollama starts out correctly by allocating slots and then intends to release slots, but that never happens: the ReleaseSlots tool is never invoked.
This happens a lot.
I have no idea why the invocation doesn't happen.
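The expected dispatch behavior can be sketched as follows (in Python, with a stubbed model rather than Ollama itself; every name and number here is illustrative, mirroring the example above, not the author's C# code): each tool call the model returns must actually be dispatched before the turn ends.

```python
# Minimal client-side tool-dispatch loop: every tool call returned by the
# (stubbed) model is looked up and invoked, so none is silently dropped.
def run_tool_calls(tool_calls, tools):
    results = []
    for call in tool_calls:
        fn = tools[call["name"]]              # look up the registered tool
        results.append(fn(**call["arguments"]))
    return results

# State mirroring the issue: 7 of 10 slots already allocated (3 free).
slots = {"free": 3, "capacity": 10}

def allocate(n):
    slots["free"] = max(0, slots["free"] - int(n))
    return f"{slots['capacity'] - slots['free']} allocated"

def release(n):
    slots["free"] = min(slots["capacity"], slots["free"] + int(n))
    return f"{slots['capacity'] - slots['free']} allocated"

tools = {"AllocateSlots": allocate, "ReleaseSlots": release}

# The model asked for both calls; both must actually run.
out = run_tool_calls(
    [{"name": "AllocateSlots", "arguments": {"n": "5"}},
     {"name": "ReleaseSlots", "arguments": {"n": "3"}}],
    tools,
)
print(out)  # both tool calls were dispatched
```

With this loop, allocating 5 then releasing 3 from 7 allocated slots ends at 7 allocated again, matching the "working" transcript above; the bug report is that the second dispatch sometimes never happens.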
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.3
|
{
"login": "jan-johansson-mr",
"id": 11595208,
"node_id": "MDQ6VXNlcjExNTk1MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/11595208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jan-johansson-mr",
"html_url": "https://github.com/jan-johansson-mr",
"followers_url": "https://api.github.com/users/jan-johansson-mr/followers",
"following_url": "https://api.github.com/users/jan-johansson-mr/following{/other_user}",
"gists_url": "https://api.github.com/users/jan-johansson-mr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jan-johansson-mr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jan-johansson-mr/subscriptions",
"organizations_url": "https://api.github.com/users/jan-johansson-mr/orgs",
"repos_url": "https://api.github.com/users/jan-johansson-mr/repos",
"events_url": "https://api.github.com/users/jan-johansson-mr/events{/privacy}",
"received_events_url": "https://api.github.com/users/jan-johansson-mr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7790/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4094
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4094/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4094/comments
|
https://api.github.com/repos/ollama/ollama/issues/4094/events
|
https://github.com/ollama/ollama/issues/4094
| 2,274,728,613
|
I_kwDOJ0Z1Ps6HlZql
| 4,094
|
Crash in hipDriverGetVersion on windows
|
{
"login": "ggjk616",
"id": 168710680,
"node_id": "U_kgDOCg5SGA",
"avatar_url": "https://avatars.githubusercontent.com/u/168710680?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggjk616",
"html_url": "https://github.com/ggjk616",
"followers_url": "https://api.github.com/users/ggjk616/followers",
"following_url": "https://api.github.com/users/ggjk616/following{/other_user}",
"gists_url": "https://api.github.com/users/ggjk616/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggjk616/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggjk616/subscriptions",
"organizations_url": "https://api.github.com/users/ggjk616/orgs",
"repos_url": "https://api.github.com/users/ggjk616/repos",
"events_url": "https://api.github.com/users/ggjk616/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggjk616/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-05-02T06:33:48
| 2024-05-06T22:08:31
| 2024-05-06T22:08:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Can you help me? In the documentation, I noticed the following statement: "You can set OLLAMA_LLM_LIBRARY to any of the available LLM libraries to bypass autodetection, so for example, if you have a CUDA card, but want to force the CPU LLM library with AVX2 vector support, use:
OLLAMA_LLM_LIBRARY="cpu_avx2" ollama serve"
But after setting OLLAMA_LLM_LIBRARY="cpu_avx2", the program still detects my GPU when loading the model, resulting in an error: Error: Post "https://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:56915->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
### OS
Windows
### GPU
AMD
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4094/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2056
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2056/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2056/comments
|
https://api.github.com/repos/ollama/ollama/issues/2056/events
|
https://github.com/ollama/ollama/pull/2056
| 2,088,945,280
|
PR_kwDOJ0Z1Ps5kdcVr
| 2,056
|
Mechanical switch from log to slog
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-01-18T19:45:25
| 2024-01-18T22:27:28
| 2024-01-18T22:27:24
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2056",
"html_url": "https://github.com/ollama/ollama/pull/2056",
"diff_url": "https://github.com/ollama/ollama/pull/2056.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2056.patch",
"merged_at": "2024-01-18T22:27:24"
}
|
A few obvious levels were adjusted, but generally everything mapped to "info" level.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2056/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3304
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3304/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3304/comments
|
https://api.github.com/repos/ollama/ollama/issues/3304/events
|
https://github.com/ollama/ollama/issues/3304
| 2,203,697,649
|
I_kwDOJ0Z1Ps6DWcHx
| 3,304
|
Bug found: ROCm docker (ver 0.1.29) didn't support dual CPU, but 0.1.28 is fine w/ dual CPU
|
{
"login": "MorrisLu-Taipei",
"id": 22585297,
"node_id": "MDQ6VXNlcjIyNTg1Mjk3",
"avatar_url": "https://avatars.githubusercontent.com/u/22585297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MorrisLu-Taipei",
"html_url": "https://github.com/MorrisLu-Taipei",
"followers_url": "https://api.github.com/users/MorrisLu-Taipei/followers",
"following_url": "https://api.github.com/users/MorrisLu-Taipei/following{/other_user}",
"gists_url": "https://api.github.com/users/MorrisLu-Taipei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MorrisLu-Taipei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MorrisLu-Taipei/subscriptions",
"organizations_url": "https://api.github.com/users/MorrisLu-Taipei/orgs",
"repos_url": "https://api.github.com/users/MorrisLu-Taipei/repos",
"events_url": "https://api.github.com/users/MorrisLu-Taipei/events{/privacy}",
"received_events_url": "https://api.github.com/users/MorrisLu-Taipei/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-03-23T05:02:35
| 2024-04-23T15:31:40
| 2024-04-23T15:31:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**ROCm docker (ver 0.1.29) started well**
time=2024-03-23T04:49:45.409Z level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-03-23T04:49:45.409Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx000 gfx1100]"
time=2024-03-23T04:49:45.423Z level=WARN source=amd_linux.go:114 msg="amdgpu [0] gfx000 is not supported by /tmp/ollama1698492372/rocm [gfx1030 gfx1100 gfx1101 gfx1102 gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942]"
time=2024-03-23T04:49:45.423Z level=WARN source=amd_linux.go:116 msg="See https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for HSA_OVERRIDE_GFX_VERSION usage"
time=2024-03-23T04:49:45.423Z level=INFO source=amd_linux.go:119 msg="amdgpu [1] gfx1100 is supported"
time=2024-03-23T04:49:45.423Z level=INFO source=amd_linux.go:246 msg="[1] amdgpu totalMemory 20464M"
time=2024-03-23T04:49:45.423Z level=INFO source=amd_linux.go:247 msg="[1] amdgpu freeMemory 20464M"
time=2024-03-23T04:49:45.423Z level=INFO source=amd_common.go:54 msg="Setting HIP_VISIBLE_DEVICES=1"
**BUT running a model always uses the CPU.**
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU input buffer size = 13.02 MiB
llama_new_context_with_model: CPU compute buffer size = 160.00 MiB
### What did you expect to see?
Should use the ROCm GPU to run the model (ROCm Docker), not the CPU
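The startup log above already shows Ollama setting `HIP_VISIBLE_DEVICES=1` and pointing at the `HSA_OVERRIDE_GFX_VERSION` troubleshooting doc. One way to pass both explicitly when launching the ROCm container can be sketched as below; the image tag, override value (`11.0.0` for gfx1100), and device index are illustrative assumptions, not a confirmed fix for this issue:

```python
# Sketch: build the `docker run` argv for Ollama's ROCm image with the
# HSA_OVERRIDE_GFX_VERSION workaround set explicitly. Values are assumptions.

def rocm_docker_argv(image="ollama/ollama:rocm",
                     gfx_override="11.0.0",
                     visible_device="1"):
    """Return the argv list for launching the ROCm container."""
    return [
        "docker", "run", "-d",
        "--device", "/dev/kfd",   # ROCm compute driver node
        "--device", "/dev/dri",   # GPU render nodes
        "-e", f"HSA_OVERRIDE_GFX_VERSION={gfx_override}",
        "-e", f"HIP_VISIBLE_DEVICES={visible_device}",
        "-v", "ollama:/root/.ollama",
        "-p", "11434:11434",
        image,
    ]

print(" ".join(rocm_docker_argv()))
```

Building the argv as a list (rather than one shell string) avoids quoting issues if this is later passed to `subprocess.run`.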
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
x86
### Platform
Docker
### Ollama version
0.1.29
### GPU
AMD
### GPU info
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES
==========
HSA Agents
==========
*******
Agent 1
*******
Name: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Uuid: CPU-XX
Marketing Name: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 3100
BDFID: 0
Internal Node ID: 0
Compute Unit: 20
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 3942924(0x3c2a0c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 3942924(0x3c2a0c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 3942924(0x3c2a0c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 2
*******
Name: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Uuid: CPU-XX
Marketing Name: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 1
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 3100
BDFID: 0
Internal Node ID: 1
Compute Unit: 20
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 4072064(0x3e2280) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 4072064(0x3e2280) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 4072064(0x3e2280) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 3
*******
Name: gfx1100
Uuid: GPU-26f6b21ec442090e
Marketing Name: Radeon RX 7900 XT
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 2
Device Type: GPU
Cache Info:
L1: 32(0x20) KB
L2: 6144(0x1800) KB
L3: 81920(0x14000) KB
Chip ID: 29772(0x744c)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2129
BDFID: 1024
Internal Node ID: 2
Compute Unit: 84
SIMDs per CU: 2
Shader Engines: 6
Shader Arrs. per Eng.: 2
WatchPts on Addr. Ranges:4
Coherent Host Access: FALSE
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 32(0x20)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 32(0x20)
Max Work-item Per CU: 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 550
SDMA engine uCode:: 19
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 20955136(0x13fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 20955136(0x13fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx1100
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***
### CPU
Intel
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3304/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4020
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4020/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4020/comments
|
https://api.github.com/repos/ollama/ollama/issues/4020/events
|
https://github.com/ollama/ollama/pull/4020
| 2,268,061,273
|
PR_kwDOJ0Z1Ps5t9geY
| 4,020
|
types/model: remove old comment
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-29T03:46:05
| 2024-04-29T03:52:27
| 2024-04-29T03:52:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4020",
"html_url": "https://github.com/ollama/ollama/pull/4020",
"diff_url": "https://github.com/ollama/ollama/pull/4020.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4020.patch",
"merged_at": "2024-04-29T03:52:26"
}
| null |
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4020/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7555
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7555/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7555/comments
|
https://api.github.com/repos/ollama/ollama/issues/7555/events
|
https://github.com/ollama/ollama/issues/7555
| 2,640,706,616
|
I_kwDOJ0Z1Ps6dZfw4
| 7,555
|
failed to generate embedding
|
{
"login": "fg2501",
"id": 164639270,
"node_id": "U_kgDOCdAyJg",
"avatar_url": "https://avatars.githubusercontent.com/u/164639270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fg2501",
"html_url": "https://github.com/fg2501",
"followers_url": "https://api.github.com/users/fg2501/followers",
"following_url": "https://api.github.com/users/fg2501/following{/other_user}",
"gists_url": "https://api.github.com/users/fg2501/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fg2501/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fg2501/subscriptions",
"organizations_url": "https://api.github.com/users/fg2501/orgs",
"repos_url": "https://api.github.com/users/fg2501/repos",
"events_url": "https://api.github.com/users/fg2501/events{/privacy}",
"received_events_url": "https://api.github.com/users/fg2501/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 6677485533,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgJX3Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/embeddings",
"name": "embeddings",
"color": "76BF9F",
"default": false,
"description": "Issues around embeddings"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-11-07T11:18:30
| 2024-11-08T01:28:58
| 2024-11-08T01:28:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I uninstalled the previous old version and reinstalled version 0.4, and then this error appeared. Could someone experienced tell me how to fix it?
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4
|
{
"login": "fg2501",
"id": 164639270,
"node_id": "U_kgDOCdAyJg",
"avatar_url": "https://avatars.githubusercontent.com/u/164639270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fg2501",
"html_url": "https://github.com/fg2501",
"followers_url": "https://api.github.com/users/fg2501/followers",
"following_url": "https://api.github.com/users/fg2501/following{/other_user}",
"gists_url": "https://api.github.com/users/fg2501/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fg2501/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fg2501/subscriptions",
"organizations_url": "https://api.github.com/users/fg2501/orgs",
"repos_url": "https://api.github.com/users/fg2501/repos",
"events_url": "https://api.github.com/users/fg2501/events{/privacy}",
"received_events_url": "https://api.github.com/users/fg2501/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7555/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6737
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6737/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6737/comments
|
https://api.github.com/repos/ollama/ollama/issues/6737/events
|
https://github.com/ollama/ollama/issues/6737
| 2,518,219,735
|
I_kwDOJ0Z1Ps6WGPvX
| 6,737
|
Model loses modelfile context
|
{
"login": "kayloren",
"id": 16549596,
"node_id": "MDQ6VXNlcjE2NTQ5NTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/16549596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kayloren",
"html_url": "https://github.com/kayloren",
"followers_url": "https://api.github.com/users/kayloren/followers",
"following_url": "https://api.github.com/users/kayloren/following{/other_user}",
"gists_url": "https://api.github.com/users/kayloren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kayloren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kayloren/subscriptions",
"organizations_url": "https://api.github.com/users/kayloren/orgs",
"repos_url": "https://api.github.com/users/kayloren/repos",
"events_url": "https://api.github.com/users/kayloren/events{/privacy}",
"received_events_url": "https://api.github.com/users/kayloren/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-11T00:41:47
| 2024-11-10T23:26:24
| 2024-11-10T23:26:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, I have created a custom model using llava along with a custom modelfile. However, after several requests or a computer restart the model loses the modelfile configuration. I assumed the modelfile configuration would not change unless I made manual changes, so it should keep working as expected. At first I thought I needed to save the session/state, but that only works with the interactive console; I'm experiencing the error through API calls (the generate endpoint).
Any help or advice is welcome.
Thanks in advance!
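For context, the Modelfile's SYSTEM prompt and template are applied server-side as long as the `/api/generate` request names the custom model and does not override `system` or `template` itself. A minimal request body per the Ollama API can be sketched as below; the model name and prompt are illustrative:

```python
import json

def build_generate_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body for POST /api/generate; omitting "system" and
    "template" leaves the Modelfile's configuration in effect."""
    return {"model": model, "prompt": prompt, "stream": stream}

body = build_generate_request("my-llava-custom", "describe this image")
print(json.dumps(body))
```

Posting this body to `http://localhost:11434/api/generate` would exercise the same path the issue describes.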
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.9
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6737/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6737/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/8317
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8317/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8317/comments
|
https://api.github.com/repos/ollama/ollama/issues/8317/events
|
https://github.com/ollama/ollama/issues/8317
| 2,770,155,354
|
I_kwDOJ0Z1Ps6lHTda
| 8,317
|
COULDN'T run qwen2.5-7b-instuct-q4_k on cpu; error wsarecv: An existing connection was forcibly closed by the remote host.
|
{
"login": "YuiiCh",
"id": 33427995,
"node_id": "MDQ6VXNlcjMzNDI3OTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/33427995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YuiiCh",
"html_url": "https://github.com/YuiiCh",
"followers_url": "https://api.github.com/users/YuiiCh/followers",
"following_url": "https://api.github.com/users/YuiiCh/following{/other_user}",
"gists_url": "https://api.github.com/users/YuiiCh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YuiiCh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YuiiCh/subscriptions",
"organizations_url": "https://api.github.com/users/YuiiCh/orgs",
"repos_url": "https://api.github.com/users/YuiiCh/repos",
"events_url": "https://api.github.com/users/YuiiCh/events{/privacy}",
"received_events_url": "https://api.github.com/users/YuiiCh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 16
| 2025-01-06T08:46:14
| 2025-01-26T08:04:31
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
(cmd) ollama run qwen2.5-7b-instuct-q4_k
Error: Post "http://127.0.0.1:11434/api/generate": read tcp 127.0.0.1:50665->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
----------
server.log
2025/01/06 16:28:24 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\workspace\\ollama\\ollama_model OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-01-06T16:28:24.613+08:00 level=INFO source=images.go:757 msg="total blobs: 5"
time=2025-01-06T16:28:24.613+08:00 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
time=2025-01-06T16:28:24.614+08:00 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4)"
time=2025-01-06T16:28:24.614+08:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cuda_v12_avx rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx]"
time=2025-01-06T16:28:24.614+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-06T16:28:24.614+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-01-06T16:28:24.614+08:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-01-06T16:28:24.614+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=4 threads=20
time=2025-01-06T16:28:24.629+08:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-01-06T16:28:24.629+08:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.7 GiB" available="21.0 GiB"
----------
However, I could run qwen2.5-0.5b-instruct-q8_0.
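Connection resets like the wsarecv error above usually mean the runner died mid-request rather than the server never starting; the server.log "Listening on" line confirms the bind address and version the client should be dialing. A minimal parsing sketch, assuming the log format shown above:

```python
import re

# Matches the routes.go startup line, e.g.
#   msg="Listening on 127.0.0.1:11434 (version 0.5.4)"
LISTEN_RE = re.compile(r'msg="Listening on (?P<addr>\S+) \(version (?P<ver>[^)]+)\)"')

def parse_listen_line(line: str):
    """Return (bind_address, version) or None if the line doesn't match."""
    m = LISTEN_RE.search(line)
    return (m.group("addr"), m.group("ver")) if m else None

line = ('time=2025-01-06T16:28:24.614+08:00 level=INFO source=routes.go:1310 '
        'msg="Listening on 127.0.0.1:11434 (version 0.5.4)"')
print(parse_listen_line(line))  # -> ('127.0.0.1:11434', '0.5.4')
```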
### OS
Windows
### GPU
Other
### CPU
Intel
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8317/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5415
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5415/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5415/comments
|
https://api.github.com/repos/ollama/ollama/issues/5415/events
|
https://github.com/ollama/ollama/pull/5415
| 2,384,578,029
|
PR_kwDOJ0Z1Ps50HCVW
| 5,415
|
[Feat] Support API key for Ollama APIs
|
{
"login": "bugaosuni59",
"id": 25414603,
"node_id": "MDQ6VXNlcjI1NDE0NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/25414603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bugaosuni59",
"html_url": "https://github.com/bugaosuni59",
"followers_url": "https://api.github.com/users/bugaosuni59/followers",
"following_url": "https://api.github.com/users/bugaosuni59/following{/other_user}",
"gists_url": "https://api.github.com/users/bugaosuni59/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bugaosuni59/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bugaosuni59/subscriptions",
"organizations_url": "https://api.github.com/users/bugaosuni59/orgs",
"repos_url": "https://api.github.com/users/bugaosuni59/repos",
"events_url": "https://api.github.com/users/bugaosuni59/events{/privacy}",
"received_events_url": "https://api.github.com/users/bugaosuni59/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-01T19:23:58
| 2024-09-04T01:32:49
| 2024-09-04T01:32:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5415",
"html_url": "https://github.com/ollama/ollama/pull/5415",
"diff_url": "https://github.com/ollama/ollama/pull/5415.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5415.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5415/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8174
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8174/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8174/comments
|
https://api.github.com/repos/ollama/ollama/issues/8174/events
|
https://github.com/ollama/ollama/issues/8174
| 2,749,974,999
|
I_kwDOJ0Z1Ps6j6UnX
| 8,174
|
Unable to install Ollama on MacBook Air running macOS Sequoia 15.2
|
{
"login": "Bheeshmat",
"id": 16644404,
"node_id": "MDQ6VXNlcjE2NjQ0NDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/16644404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bheeshmat",
"html_url": "https://github.com/Bheeshmat",
"followers_url": "https://api.github.com/users/Bheeshmat/followers",
"following_url": "https://api.github.com/users/Bheeshmat/following{/other_user}",
"gists_url": "https://api.github.com/users/Bheeshmat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bheeshmat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bheeshmat/subscriptions",
"organizations_url": "https://api.github.com/users/Bheeshmat/orgs",
"repos_url": "https://api.github.com/users/Bheeshmat/repos",
"events_url": "https://api.github.com/users/Bheeshmat/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bheeshmat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-12-19T11:09:54
| 2024-12-21T17:25:00
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am using macOS 15.2 and downloaded the Ollama installer for Mac. After downloading, when I tried to install, it asked me to move the package to the Applications folder instead of the Downloads folder. I did that, and then while installing from the Applications folder, the splash screen opens up for installing the command line. On clicking Install I am prompted for the administrator password; once I enter the password, nothing happens.
I opened the package contents and ran the Ollama executable in the terminal. This is what I got...
Last login: Thu Dec 19 15:34:50 on ttys000
mymacbook-Air ~ % /Applications/Ollama.app/Contents/MacOS/Ollama ; exit;
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
2024/12/19 15:34:50 routes.go:1259: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/bheeshma/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2024-12-19T15:34:50.852+05:30 level=INFO source=images.go:757 msg="total blobs: 0"
time=2024-12-19T15:34:50.852+05:30 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
time=2024-12-19T15:34:50.853+05:30 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4)"
time=2024-12-19T15:34:50.853+05:30 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[metal cpu_avx cpu_avx2]"
time=2024-12-19T15:34:50.880+05:30 level=INFO source=types.go:131 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="5.3 GiB" available="5.3 GiB"
2024-12-19 15:34:51.342 Ollama[18214:481886] +[IMKClient subclass]: chose IMKClient_Modern
2024-12-19 15:34:51.342 Ollama[18214:481886] +[IMKInputSession subclass]: chose IMKInputSession_Modern
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8174/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5169
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5169/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5169/comments
|
https://api.github.com/repos/ollama/ollama/issues/5169/events
|
https://github.com/ollama/ollama/issues/5169
| 2,364,236,577
|
I_kwDOJ0Z1Ps6M62Mh
| 5,169
|
How do I find the model version in Ollama?
|
{
"login": "qzc438",
"id": 61488260,
"node_id": "MDQ6VXNlcjYxNDg4MjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/61488260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qzc438",
"html_url": "https://github.com/qzc438",
"followers_url": "https://api.github.com/users/qzc438/followers",
"following_url": "https://api.github.com/users/qzc438/following{/other_user}",
"gists_url": "https://api.github.com/users/qzc438/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qzc438/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qzc438/subscriptions",
"organizations_url": "https://api.github.com/users/qzc438/orgs",
"repos_url": "https://api.github.com/users/qzc438/repos",
"events_url": "https://api.github.com/users/qzc438/events{/privacy}",
"received_events_url": "https://api.github.com/users/qzc438/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 19
| 2024-06-20T11:44:32
| 2025-01-17T09:36:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
As the title says: how do I find the version of a model I download from Ollama, and on which date was the model last updated?
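For context, Ollama does not attach a semantic version to a model; each build is content-addressed by a sha256 digest, and `ollama list` shows a short ID derived from it alongside a modified date. The sketch below assumes the short ID is simply the first 12 hex characters of the digest, and the manifest bytes are illustrative, not a real Ollama manifest:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// shortID returns the 12-character abbreviated form of a sha256 content
// digest (assumption: the short ID is the first 12 hex characters).
func shortID(content []byte) string {
	digest := fmt.Sprintf("%x", sha256.Sum256(content))
	return digest[:12]
}

func main() {
	// Illustrative manifest bytes only.
	manifest := []byte(`{"schemaVersion": 2}`)
	fmt.Printf("digest:   sha256:%x\n", sha256.Sum256(manifest))
	fmt.Println("short ID:", shortID(manifest))
}
```

In practice, comparing the digest reported by `ollama list` (or the `/api/show` endpoint registered above) before and after a pull is the way to tell whether a tag has been updated.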
|
{
"login": "qzc438",
"id": 61488260,
"node_id": "MDQ6VXNlcjYxNDg4MjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/61488260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qzc438",
"html_url": "https://github.com/qzc438",
"followers_url": "https://api.github.com/users/qzc438/followers",
"following_url": "https://api.github.com/users/qzc438/following{/other_user}",
"gists_url": "https://api.github.com/users/qzc438/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qzc438/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qzc438/subscriptions",
"organizations_url": "https://api.github.com/users/qzc438/orgs",
"repos_url": "https://api.github.com/users/qzc438/repos",
"events_url": "https://api.github.com/users/qzc438/events{/privacy}",
"received_events_url": "https://api.github.com/users/qzc438/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5169/timeline
| null |
reopened
| false
|
https://api.github.com/repos/ollama/ollama/issues/3372
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3372/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3372/comments
|
https://api.github.com/repos/ollama/ollama/issues/3372/events
|
https://github.com/ollama/ollama/issues/3372
| 2,211,258,810
|
I_kwDOJ0Z1Ps6DzSG6
| 3,372
|
Ollama can't run models in Docker, Certificate error x509
|
{
"login": "BumblingWizard",
"id": 150103478,
"node_id": "U_kgDOCPJltg",
"avatar_url": "https://avatars.githubusercontent.com/u/150103478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BumblingWizard",
"html_url": "https://github.com/BumblingWizard",
"followers_url": "https://api.github.com/users/BumblingWizard/followers",
"following_url": "https://api.github.com/users/BumblingWizard/following{/other_user}",
"gists_url": "https://api.github.com/users/BumblingWizard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BumblingWizard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BumblingWizard/subscriptions",
"organizations_url": "https://api.github.com/users/BumblingWizard/orgs",
"repos_url": "https://api.github.com/users/BumblingWizard/repos",
"events_url": "https://api.github.com/users/BumblingWizard/events{/privacy}",
"received_events_url": "https://api.github.com/users/BumblingWizard/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 13
| 2024-03-27T16:31:42
| 2025-01-23T09:27:30
| 2024-08-23T20:57:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm seeing an issue similar to the one reported in "ollama.ai certificate has expired, not possible to download models" (#3336).
I installed the current image from Docker Hub earlier today (`ollama/ollama:latest`), but when I attempt to use a model, I get the following error:
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": tls: failed to verify certificate: x509: certificate signed by unknown authority
### What did you expect to see?
I expected it to pull a model and work.
### Steps to reproduce
Install the image, run a container, use the command "ollama run llama2" (or any other model).
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
amd64
### Platform
Docker, WSL2
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
Intel
### Other software
_No response_
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3372/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3372/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4819
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4819/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4819/comments
|
https://api.github.com/repos/ollama/ollama/issues/4819/events
|
https://github.com/ollama/ollama/issues/4819
| 2,334,222,463
|
I_kwDOJ0Z1Ps6LIWh_
| 4,819
|
Ollama : phi 3 small
|
{
"login": "sebastienbo",
"id": 8308674,
"node_id": "MDQ6VXNlcjgzMDg2NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8308674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebastienbo",
"html_url": "https://github.com/sebastienbo",
"followers_url": "https://api.github.com/users/sebastienbo/followers",
"following_url": "https://api.github.com/users/sebastienbo/following{/other_user}",
"gists_url": "https://api.github.com/users/sebastienbo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sebastienbo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebastienbo/subscriptions",
"organizations_url": "https://api.github.com/users/sebastienbo/orgs",
"repos_url": "https://api.github.com/users/sebastienbo/repos",
"events_url": "https://api.github.com/users/sebastienbo/events{/privacy}",
"received_events_url": "https://api.github.com/users/sebastienbo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-04T19:26:30
| 2024-06-05T20:35:30
| 2024-06-05T20:35:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
We have phi 3 mini (which is not smart enough) and phi 3 medium (which is too slow). But where is phi 3 small?
It works great on lmstudio and gpt4all.
It would be great if ollama made it available.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4819/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4819/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4015
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4015/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4015/comments
|
https://api.github.com/repos/ollama/ollama/issues/4015/events
|
https://github.com/ollama/ollama/issues/4015
| 2,267,952,998
|
I_kwDOJ0Z1Ps6HLjdm
| 4,015
|
Add support for Qwen-VL
|
{
"login": "dagehuifei",
"id": 145953245,
"node_id": "U_kgDOCLMR3Q",
"avatar_url": "https://avatars.githubusercontent.com/u/145953245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dagehuifei",
"html_url": "https://github.com/dagehuifei",
"followers_url": "https://api.github.com/users/dagehuifei/followers",
"following_url": "https://api.github.com/users/dagehuifei/following{/other_user}",
"gists_url": "https://api.github.com/users/dagehuifei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dagehuifei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dagehuifei/subscriptions",
"organizations_url": "https://api.github.com/users/dagehuifei/orgs",
"repos_url": "https://api.github.com/users/dagehuifei/repos",
"events_url": "https://api.github.com/users/dagehuifei/events{/privacy}",
"received_events_url": "https://api.github.com/users/dagehuifei/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-04-29T01:32:16
| 2024-05-08T13:42:32
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/Qwen/Qwen-VL
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4015/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7497
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7497/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7497/comments
|
https://api.github.com/repos/ollama/ollama/issues/7497/events
|
https://github.com/ollama/ollama/issues/7497
| 2,633,723,011
|
I_kwDOJ0Z1Ps6c-2yD
| 7,497
|
llama slows down a lot on the second and subsequent runs.
|
{
"login": "vertikalm",
"id": 98849400,
"node_id": "U_kgDOBeRSeA",
"avatar_url": "https://avatars.githubusercontent.com/u/98849400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vertikalm",
"html_url": "https://github.com/vertikalm",
"followers_url": "https://api.github.com/users/vertikalm/followers",
"following_url": "https://api.github.com/users/vertikalm/following{/other_user}",
"gists_url": "https://api.github.com/users/vertikalm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vertikalm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vertikalm/subscriptions",
"organizations_url": "https://api.github.com/users/vertikalm/orgs",
"repos_url": "https://api.github.com/users/vertikalm/repos",
"events_url": "https://api.github.com/users/vertikalm/events{/privacy}",
"received_events_url": "https://api.github.com/users/vertikalm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 26
| 2024-11-04T19:58:41
| 2024-11-12T12:17:23
| 2024-11-07T01:25:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Configuration: Intel i3-8100 | RTX_3050_LP | Debian_12 i3wm
I have the following problem:
After booting the system, conversation with any model downloaded from the Ollama library is very fast, which is perfect for me.
But after a few minutes, or after exiting with "/bye" and re-running the model, the speed drops by 90% and CPU consumption rises to 100%.
The slowness continues until the system is rebooted. If the ollama parent process is killed, it automatically restarts by itself and the same slow speed persists.
There are no other programs consuming CPU or GPU between re-executions.
[server_log](https://github.com/user-attachments/files/17623651/ollama_logs.txt)
Any ideas?
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.13
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7497/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3319
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3319/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3319/comments
|
https://api.github.com/repos/ollama/ollama/issues/3319/events
|
https://github.com/ollama/ollama/issues/3319
| 2,204,121,388
|
I_kwDOJ0Z1Ps6DYDks
| 3,319
|
Only half CPUs Running on, whatever on Windows Server, Windows 10/11 or Ubuntu Linux [CPU to Run Models]
|
{
"login": "OPDEV001",
"id": 120762872,
"node_id": "U_kgDOBzKx-A",
"avatar_url": "https://avatars.githubusercontent.com/u/120762872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OPDEV001",
"html_url": "https://github.com/OPDEV001",
"followers_url": "https://api.github.com/users/OPDEV001/followers",
"following_url": "https://api.github.com/users/OPDEV001/following{/other_user}",
"gists_url": "https://api.github.com/users/OPDEV001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OPDEV001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OPDEV001/subscriptions",
"organizations_url": "https://api.github.com/users/OPDEV001/orgs",
"repos_url": "https://api.github.com/users/OPDEV001/repos",
"events_url": "https://api.github.com/users/OPDEV001/events{/privacy}",
"received_events_url": "https://api.github.com/users/OPDEV001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-03-24T00:59:09
| 2024-07-26T08:54:09
| 2024-03-26T14:03:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ATTENTION: I only use the CPU to run models.
I have set up Ollama successfully in the following environments:
1) Physical with Windows 11
2) Windows Server 2022 on VMware
3) Windows 10/11 on VMware
4) Ubuntu Linux on VMware
5) Physical Machine with Windows Server 2022
But I found that all environments have the same issue: only half of the CPUs are used while Ollama is working. For example, it uses 4 CPUs when you give it 8, or 8 CPUs when you give it 16.
Has anybody else seen this issue? Please check your environment.
Thanks,
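One plausible explanation for the half-CPU pattern above: the llama.cpp runtime defaults its thread count to the number of physical cores, which is half the logical CPU count when hyper-threading (or an SMT-style vCPU topology under VMware) is in play. Assuming that is the cause, the thread count can be raised explicitly with the `num_thread` parameter, for example in a Modelfile (model name here is just an example):

```
FROM llama2
PARAMETER num_thread 16
```

The same option can be passed per request in the API `options` field, e.g. `{"options": {"num_thread": 16}}`.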
### What did you expect to see?
I expect it to use all of the CPUs provided.
### Steps to reproduce
Follow the guide to set up Ollama, and you will see this problem.
### Are there any recent changes that introduced the issue?
A completely new environment, following the official guide.
### OS
Linux, Windows
### Architecture
amd64
### Platform
_No response_
### Ollama version
129
### GPU
Intel
### GPU info
I running on CPU
### CPU
Intel
### Other software
No other software; a completely new environment set up specifically for Ollama.
Most environments run on VMware. When I test other stress-test software on VMware, all of the CPUs are fully used as normal.
|
{
"login": "OPDEV001",
"id": 120762872,
"node_id": "U_kgDOBzKx-A",
"avatar_url": "https://avatars.githubusercontent.com/u/120762872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OPDEV001",
"html_url": "https://github.com/OPDEV001",
"followers_url": "https://api.github.com/users/OPDEV001/followers",
"following_url": "https://api.github.com/users/OPDEV001/following{/other_user}",
"gists_url": "https://api.github.com/users/OPDEV001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OPDEV001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OPDEV001/subscriptions",
"organizations_url": "https://api.github.com/users/OPDEV001/orgs",
"repos_url": "https://api.github.com/users/OPDEV001/repos",
"events_url": "https://api.github.com/users/OPDEV001/events{/privacy}",
"received_events_url": "https://api.github.com/users/OPDEV001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3319/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/168
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/168/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/168/comments
|
https://api.github.com/repos/ollama/ollama/issues/168/events
|
https://github.com/ollama/ollama/pull/168
| 1,816,499,914
|
PR_kwDOJ0Z1Ps5WIpEB
| 168
|
get the proper path for blobs to delete
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-22T00:14:23
| 2023-07-22T00:30:40
| 2023-07-22T00:30:40
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/168",
"html_url": "https://github.com/ollama/ollama/pull/168",
"diff_url": "https://github.com/ollama/ollama/pull/168.diff",
"patch_url": "https://github.com/ollama/ollama/pull/168.patch",
"merged_at": "2023-07-22T00:30:40"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/168/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1774
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1774/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1774/comments
|
https://api.github.com/repos/ollama/ollama/issues/1774/events
|
https://github.com/ollama/ollama/pull/1774
| 2,064,628,397
|
PR_kwDOJ0Z1Ps5jK4nL
| 1,774
|
fix: pull either original model or from model on create
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-03T20:48:36
| 2024-01-04T06:34:39
| 2024-01-04T06:34:38
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1774",
"html_url": "https://github.com/ollama/ollama/pull/1774",
"diff_url": "https://github.com/ollama/ollama/pull/1774.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1774.patch",
"merged_at": "2024-01-04T06:34:38"
}
|
I created a bug here when accounting for the "pull parent model" case while pulling gguf models based on deprecated ggml models. In the case of a Modelfile like this:
```
FROM orca-mini
SYSTEM "you are mario"
```
where `orca-mini` is a `ggml` library model, there is no `ParentModel`, and the model itself should be pulled.
This handles both:
```
FROM orca-mini
SYSTEM "you are mario"
```
and a child:
```
FROM mario
PARAMETER temperature 0
```
correctly and will pull the root model for both cases.
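The intended resolution can be sketched as a short walk up the FROM chain (the helper and map names below are hypothetical illustrations, not Ollama's actual Go implementation):

```python
def resolve_pull_target(model: str, parents: dict) -> str:
    """Walk the FROM chain upward: the model to pull is the root
    library model, whether FROM names it directly or via a child."""
    # parents maps a local model to its FROM target; library models map to None
    while parents.get(model) is not None:
        model = parents[model]
    return model

# FROM orca-mini (ggml library model, no ParentModel): pull orca-mini itself
print(resolve_pull_target("orca-mini", {"orca-mini": None}))  # orca-mini
# FROM mario, where mario was created FROM orca-mini: still pull orca-mini
print(resolve_pull_target("mario", {"mario": "orca-mini", "orca-mini": None}))  # orca-mini
```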
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1774/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8314
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8314/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8314/comments
|
https://api.github.com/repos/ollama/ollama/issues/8314/events
|
https://github.com/ollama/ollama/issues/8314
| 2,769,724,828
|
I_kwDOJ0Z1Ps6lFqWc
| 8,314
|
An existing connection was forcibly closed by the remote host.
|
{
"login": "cwl001",
"id": 115928611,
"node_id": "U_kgDOBujuIw",
"avatar_url": "https://avatars.githubusercontent.com/u/115928611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cwl001",
"html_url": "https://github.com/cwl001",
"followers_url": "https://api.github.com/users/cwl001/followers",
"following_url": "https://api.github.com/users/cwl001/following{/other_user}",
"gists_url": "https://api.github.com/users/cwl001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cwl001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cwl001/subscriptions",
"organizations_url": "https://api.github.com/users/cwl001/orgs",
"repos_url": "https://api.github.com/users/cwl001/repos",
"events_url": "https://api.github.com/users/cwl001/events{/privacy}",
"received_events_url": "https://api.github.com/users/cwl001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-06T03:09:32
| 2025-01-06T03:30:17
| 2025-01-06T03:30:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama run llama3.2
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama3.2/manifests/latest": read tcp 192.168.3.176:58763->104.21.75.227:443: wsarecv: An existing connection was forcibly closed by the remote host.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "cwl001",
"id": 115928611,
"node_id": "U_kgDOBujuIw",
"avatar_url": "https://avatars.githubusercontent.com/u/115928611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cwl001",
"html_url": "https://github.com/cwl001",
"followers_url": "https://api.github.com/users/cwl001/followers",
"following_url": "https://api.github.com/users/cwl001/following{/other_user}",
"gists_url": "https://api.github.com/users/cwl001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cwl001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cwl001/subscriptions",
"organizations_url": "https://api.github.com/users/cwl001/orgs",
"repos_url": "https://api.github.com/users/cwl001/repos",
"events_url": "https://api.github.com/users/cwl001/events{/privacy}",
"received_events_url": "https://api.github.com/users/cwl001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8314/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3422
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3422/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3422/comments
|
https://api.github.com/repos/ollama/ollama/issues/3422/events
|
https://github.com/ollama/ollama/pull/3422
| 2,216,729,718
|
PR_kwDOJ0Z1Ps5rPMeq
| 3,422
|
Simplify model conversion
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-31T01:23:36
| 2024-04-01T23:14:54
| 2024-04-01T23:14:54
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3422",
"html_url": "https://github.com/ollama/ollama/pull/3422",
"diff_url": "https://github.com/ollama/ollama/pull/3422.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3422.patch",
"merged_at": "2024-04-01T23:14:53"
}
|
This change splits up the gemma/mistral conversion logic into their own files and creates a new `ModelArch` interface which any new converter can implement to support different model types.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3422/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8024
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8024/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8024/comments
|
https://api.github.com/repos/ollama/ollama/issues/8024/events
|
https://github.com/ollama/ollama/pull/8024
| 2,729,342,226
|
PR_kwDOJ0Z1Ps6Eqe-1
| 8,024
|
readme: add aidful-ollama-model-delete to community integration
|
{
"login": "AidfulAI",
"id": 113003545,
"node_id": "U_kgDOBrxMGQ",
"avatar_url": "https://avatars.githubusercontent.com/u/113003545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AidfulAI",
"html_url": "https://github.com/AidfulAI",
"followers_url": "https://api.github.com/users/AidfulAI/followers",
"following_url": "https://api.github.com/users/AidfulAI/following{/other_user}",
"gists_url": "https://api.github.com/users/AidfulAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AidfulAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AidfulAI/subscriptions",
"organizations_url": "https://api.github.com/users/AidfulAI/orgs",
"repos_url": "https://api.github.com/users/AidfulAI/repos",
"events_url": "https://api.github.com/users/AidfulAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/AidfulAI/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-12-10T08:09:31
| 2024-12-10T21:59:18
| 2024-12-10T21:03:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8024",
"html_url": "https://github.com/ollama/ollama/pull/8024",
"diff_url": "https://github.com/ollama/ollama/pull/8024.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8024.patch",
"merged_at": "2024-12-10T21:03:19"
}
|
A simple Python UI that lets you easily select and delete any number of models to free up disk space.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8024/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2386
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2386/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2386/comments
|
https://api.github.com/repos/ollama/ollama/issues/2386/events
|
https://github.com/ollama/ollama/issues/2386
| 2,122,707,553
|
I_kwDOJ0Z1Ps5-hfJh
| 2,386
|
Unable to load dynamic server library on Mac.
|
{
"login": "StarstormVC",
"id": 146940574,
"node_id": "U_kgDOCMIing",
"avatar_url": "https://avatars.githubusercontent.com/u/146940574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StarstormVC",
"html_url": "https://github.com/StarstormVC",
"followers_url": "https://api.github.com/users/StarstormVC/followers",
"following_url": "https://api.github.com/users/StarstormVC/following{/other_user}",
"gists_url": "https://api.github.com/users/StarstormVC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StarstormVC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StarstormVC/subscriptions",
"organizations_url": "https://api.github.com/users/StarstormVC/orgs",
"repos_url": "https://api.github.com/users/StarstormVC/repos",
"events_url": "https://api.github.com/users/StarstormVC/events{/privacy}",
"received_events_url": "https://api.github.com/users/StarstormVC/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-02-07T10:33:57
| 2024-03-13T19:59:31
| 2024-02-08T02:38:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
My environment:
Macbook Pro | MacOS ver Sonoma:14.3
After updating my OS, I hit the following issue when I run `ollama run llama2`, even though I had pulled the model successfully.
Error: Unable to load dynamic library: Unable to load dynamic server library: dlopen(/var/folders/h6/41y3dhqd0p9cd8p8rmfn6t000000gn/T/ollama1989849860/metal/libext_server.dylib, 0x0006): tried: '/var/folders/h6/41y3dhqd0p9cd8p8rmfn6t000000gn/T/ollama1989849860/metal/libext_server.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/var/folders/h6/41y3dhqd0p9cd8p8rmfn6t000000gn/T/ollama1989849860/metal/libext_server.dylib' (no such file), '/var/folders/h6/41y3dhqd0p9cd8p8rmfn6t000000gn/T/ollama1989849860/metal/libext_server.dylib' (no su
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2386/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2386/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/832
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/832/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/832/comments
|
https://api.github.com/repos/ollama/ollama/issues/832/events
|
https://github.com/ollama/ollama/issues/832
| 1,948,764,481
|
I_kwDOJ0Z1Ps50J8lB
| 832
|
Ollama does not make use of GPU (T4 on Google Colab)
|
{
"login": "tranhoangnguyen03",
"id": 31383641,
"node_id": "MDQ6VXNlcjMxMzgzNjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/31383641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tranhoangnguyen03",
"html_url": "https://github.com/tranhoangnguyen03",
"followers_url": "https://api.github.com/users/tranhoangnguyen03/followers",
"following_url": "https://api.github.com/users/tranhoangnguyen03/following{/other_user}",
"gists_url": "https://api.github.com/users/tranhoangnguyen03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tranhoangnguyen03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tranhoangnguyen03/subscriptions",
"organizations_url": "https://api.github.com/users/tranhoangnguyen03/orgs",
"repos_url": "https://api.github.com/users/tranhoangnguyen03/repos",
"events_url": "https://api.github.com/users/tranhoangnguyen03/events{/privacy}",
"received_events_url": "https://api.github.com/users/tranhoangnguyen03/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 18
| 2023-10-18T04:05:32
| 2024-11-19T14:20:35
| 2023-10-25T19:27:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was experimenting with serving an Ollama server over ngrok on Google Colab:
```
%%bash
sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama
### ngrok codes to expose port 11434 to public URL
ollama serve mistral-openorca
```
I was able to `curl` the server, but I noticed that it does not make use of the notebook GPU.
I've also tried installing llama.cpp with CUDA support, but the GPU remains unused:
```
%%bash
# Install Server with OpenAI Compatible API - with CUDA GPU support
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip -q install llama-cpp-python[server]
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/832/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/832/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/5829
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5829/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5829/comments
|
https://api.github.com/repos/ollama/ollama/issues/5829/events
|
https://github.com/ollama/ollama/pull/5829
| 2,421,382,817
|
PR_kwDOJ0Z1Ps52Ayth
| 5,829
|
Fix "finish_reason" when tools are called
|
{
"login": "vertrue",
"id": 30557724,
"node_id": "MDQ6VXNlcjMwNTU3NzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/30557724?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vertrue",
"html_url": "https://github.com/vertrue",
"followers_url": "https://api.github.com/users/vertrue/followers",
"following_url": "https://api.github.com/users/vertrue/following{/other_user}",
"gists_url": "https://api.github.com/users/vertrue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vertrue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vertrue/subscriptions",
"organizations_url": "https://api.github.com/users/vertrue/orgs",
"repos_url": "https://api.github.com/users/vertrue/repos",
"events_url": "https://api.github.com/users/vertrue/events{/privacy}",
"received_events_url": "https://api.github.com/users/vertrue/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-07-21T12:33:40
| 2024-10-22T09:20:20
| 2024-07-23T22:34:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5829",
"html_url": "https://github.com/ollama/ollama/pull/5829",
"diff_url": "https://github.com/ollama/ollama/pull/5829.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5829.patch",
"merged_at": null
}
|
Not sure if it fully fixes #5796.
Please review and help if possible (I am not familiar with Go).
|
{
"login": "vertrue",
"id": 30557724,
"node_id": "MDQ6VXNlcjMwNTU3NzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/30557724?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vertrue",
"html_url": "https://github.com/vertrue",
"followers_url": "https://api.github.com/users/vertrue/followers",
"following_url": "https://api.github.com/users/vertrue/following{/other_user}",
"gists_url": "https://api.github.com/users/vertrue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vertrue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vertrue/subscriptions",
"organizations_url": "https://api.github.com/users/vertrue/orgs",
"repos_url": "https://api.github.com/users/vertrue/repos",
"events_url": "https://api.github.com/users/vertrue/events{/privacy}",
"received_events_url": "https://api.github.com/users/vertrue/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5829/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5829/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/614
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/614/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/614/comments
|
https://api.github.com/repos/ollama/ollama/issues/614/events
|
https://github.com/ollama/ollama/pull/614
| 1,914,544,887
|
PR_kwDOJ0Z1Ps5bSWKX
| 614
|
Added `num_predict` to `docs/modelfile.md`
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-27T01:47:10
| 2023-09-27T15:28:23
| 2023-09-27T14:26:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/614",
"html_url": "https://github.com/ollama/ollama/pull/614",
"diff_url": "https://github.com/ollama/ollama/pull/614.diff",
"patch_url": "https://github.com/ollama/ollama/pull/614.patch",
"merged_at": "2023-09-27T14:26:09"
}
|
Upstreaming learnings from https://github.com/jmorganca/ollama/issues/581 to the docs
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/614/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6214
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6214/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6214/comments
|
https://api.github.com/repos/ollama/ollama/issues/6214/events
|
https://github.com/ollama/ollama/issues/6214
| 2,451,951,156
|
I_kwDOJ0Z1Ps6SJc40
| 6,214
|
Embedding model performance improvements
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-08-07T01:13:31
| 2024-08-07T02:32:31
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
1. Embedding models should disable the KV cache (e.g. `num_ctx`) since it may not be used
2. Embedding models should default to higher parallelization (10+) so batches complete faster
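Point 2 amounts to fanning independent embedding requests out over more workers; a minimal sketch of that idea (the `embed` stand-in and worker count are assumptions for illustration, not Ollama's server internals):

```python
from concurrent.futures import ThreadPoolExecutor

def embed(text: str) -> list:
    # stand-in for an embedding call (e.g. a POST to an embeddings endpoint);
    # returns a trivial one-dimensional "vector" for illustration
    return [float(len(text))]

texts = ["a", "bb", "ccc"]
# parallelism of 10+ lets independent embedding requests overlap
with ThreadPoolExecutor(max_workers=10) as pool:
    vectors = list(pool.map(embed, texts))
print(vectors)  # [[1.0], [2.0], [3.0]]
```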
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6214/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/979
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/979/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/979/comments
|
https://api.github.com/repos/ollama/ollama/issues/979/events
|
https://github.com/ollama/ollama/pull/979
| 1,975,160,757
|
PR_kwDOJ0Z1Ps5ee2AZ
| 979
|
update default NumKeep
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-02T22:47:53
| 2023-11-02T22:48:45
| 2023-11-02T22:48:44
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/979",
"html_url": "https://github.com/ollama/ollama/pull/979",
"diff_url": "https://github.com/ollama/ollama/pull/979.diff",
"patch_url": "https://github.com/ollama/ollama/pull/979.patch",
"merged_at": "2023-11-02T22:48:44"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/979/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7622
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7622/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7622/comments
|
https://api.github.com/repos/ollama/ollama/issues/7622/events
|
https://github.com/ollama/ollama/issues/7622
| 2,650,388,768
|
I_kwDOJ0Z1Ps6d-bkg
| 7,622
|
ollama doesn't seem to use my GPU after update
|
{
"login": "miguelmarco",
"id": 2430219,
"node_id": "MDQ6VXNlcjI0MzAyMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2430219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miguelmarco",
"html_url": "https://github.com/miguelmarco",
"followers_url": "https://api.github.com/users/miguelmarco/followers",
"following_url": "https://api.github.com/users/miguelmarco/following{/other_user}",
"gists_url": "https://api.github.com/users/miguelmarco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miguelmarco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miguelmarco/subscriptions",
"organizations_url": "https://api.github.com/users/miguelmarco/orgs",
"repos_url": "https://api.github.com/users/miguelmarco/repos",
"events_url": "https://api.github.com/users/miguelmarco/events{/privacy}",
"received_events_url": "https://api.github.com/users/miguelmarco/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7700262114,
"node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g",
"url": "https://api.github.com/repos/ollama/ollama/labels/build",
"name": "build",
"color": "006b75",
"default": false,
"description": "Issues relating to building ollama from source"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 46
| 2024-11-11T20:58:55
| 2025-01-28T22:10:55
| 2024-12-10T17:47:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I had ollama compiled from source and it worked fine. Recently I rebuilt it at the latest version, and it no longer seems to use my GPU (it spawns a lot of CPU processes and runs much slower).
Here is the output of the server:
```
2024/11/11 21:50:18 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/mmarco/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-11T21:50:18.708+01:00 level=INFO source=images.go:755 msg="total blobs: 39"
time=2024-11-11T21:50:18.709+01:00 level=INFO source=images.go:762 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-11-11T21:50:18.710+01:00 level=INFO source=routes.go:1240 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-11-11T21:50:18.711+01:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama3024011950/runners
time=2024-11-11T21:50:18.837+01:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cpu]"
time=2024-11-11T21:50:18.837+01:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-11T21:50:19.018+01:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-d421266e-e17b-d0ef-24f5-88a6f702d374 library=cuda variant=v12 compute=8.6 driver=12.4 name="NVIDIA GeForce RTX 3090" total="23.7 GiB" available="20.0 GiB"
[GIN] 2024/11/11 - 21:51:17 | 200 | 68.061µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/11 - 21:51:17 | 200 | 34.186091ms | 127.0.0.1 | POST "/api/show"
time=2024-11-11T21:51:17.462+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/mmarco/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87 gpu=GPU-d421266e-e17b-d0ef-24f5-88a6f702d374 parallel=4 available=21468807168 required="6.2 GiB"
time=2024-11-11T21:51:17.604+01:00 level=INFO source=server.go:105 msg="system memory" total="62.7 GiB" free="50.0 GiB" free_swap="0 B"
time=2024-11-11T21:51:17.605+01:00 level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[20.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-11-11T21:51:17.606+01:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama3024011950/runners/cpu_avx2/ollama_llama_server --model /home/mmarco/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87 --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --threads 12 --parallel 4 --port 38059"
time=2024-11-11T21:51:17.607+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-11T21:51:17.607+01:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-11T21:51:17.607+01:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-11T21:51:17.613+01:00 level=INFO source=runner.go:863 msg="starting go runner"
time=2024-11-11T21:51:17.613+01:00 level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=12
time=2024-11-11T21:51:17.613+01:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:38059"
llama_model_loader: loaded meta data with 29 key-value pairs and 291 tensors from /home/mmarco/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
llama_model_loader: - kv 5: general.size_label str = 8B
llama_model_loader: - kv 6: general.license str = llama3.1
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 32
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 4096
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 13: llama.attention.head_count u32 = 32
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: general.file_type u32 = 2
llama_model_loader: - kv 18: llama.vocab_size u32 = 128256
llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 27: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-11-11T21:51:17.858+01:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.14 MiB
llm_load_tensors: CPU buffer size = 4437.80 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.02 MiB
llama_new_context_with_model: CPU compute buffer size = 560.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
time=2024-11-11T21:51:20.115+01:00 level=INFO source=server.go:601 msg="llama runner started in 2.51 seconds"
[GIN] 2024/11/11 - 21:51:20 | 200 | 2.903863177s | 127.0.0.1 | POST "/api/generate"
```
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
main branch (36a8372b2884c40cc5b86f6f859b012dc8125b80)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7622/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7622/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1832
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1832/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1832/comments
|
https://api.github.com/repos/ollama/ollama/issues/1832/events
|
https://github.com/ollama/ollama/pull/1832
| 2,068,993,827
|
PR_kwDOJ0Z1Ps5jZh_T
| 1,832
|
add `-DCMAKE_SYSTEM_NAME=Darwin` cmake flag
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-07T05:33:14
| 2024-01-07T05:46:18
| 2024-01-07T05:46:18
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1832",
"html_url": "https://github.com/ollama/ollama/pull/1832",
"diff_url": "https://github.com/ollama/ollama/pull/1832.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1832.patch",
"merged_at": "2024-01-07T05:46:18"
}
|
`-DCMAKE_SYSTEM_NAME=Darwin` is required for the `-DCMAKE_SYSTEM_PROCESSOR=x86_64 -DCMAKE_OSX_ARCHITECTURES=x86_64` flags to take effect
This also adds system info logging in verbose mode
Fixes #1827
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1832/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5658
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5658/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5658/comments
|
https://api.github.com/repos/ollama/ollama/issues/5658/events
|
https://github.com/ollama/ollama/issues/5658
| 2,406,497,151
|
I_kwDOJ0Z1Ps6PcDt_
| 5,658
|
Uploaded files are not recognized
|
{
"login": "tqangxl",
"id": 9669944,
"node_id": "MDQ6VXNlcjk2Njk5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9669944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tqangxl",
"html_url": "https://github.com/tqangxl",
"followers_url": "https://api.github.com/users/tqangxl/followers",
"following_url": "https://api.github.com/users/tqangxl/following{/other_user}",
"gists_url": "https://api.github.com/users/tqangxl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tqangxl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tqangxl/subscriptions",
"organizations_url": "https://api.github.com/users/tqangxl/orgs",
"repos_url": "https://api.github.com/users/tqangxl/repos",
"events_url": "https://api.github.com/users/tqangxl/events{/privacy}",
"received_events_url": "https://api.github.com/users/tqangxl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-07-12T23:52:30
| 2024-07-21T01:55:37
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?


The previous version still worked.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.2
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5658/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2881
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2881/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2881/comments
|
https://api.github.com/repos/ollama/ollama/issues/2881/events
|
https://github.com/ollama/ollama/pull/2881
| 2,164,860,635
|
PR_kwDOJ0Z1Ps5ofIIB
| 2,881
|
Add Community Integration: Alpaca webUI
|
{
"login": "mmo80",
"id": 10084603,
"node_id": "MDQ6VXNlcjEwMDg0NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10084603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmo80",
"html_url": "https://github.com/mmo80",
"followers_url": "https://api.github.com/users/mmo80/followers",
"following_url": "https://api.github.com/users/mmo80/following{/other_user}",
"gists_url": "https://api.github.com/users/mmo80/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmo80/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmo80/subscriptions",
"organizations_url": "https://api.github.com/users/mmo80/orgs",
"repos_url": "https://api.github.com/users/mmo80/repos",
"events_url": "https://api.github.com/users/mmo80/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmo80/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-02T15:39:54
| 2024-03-25T19:00:19
| 2024-03-25T19:00:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2881",
"html_url": "https://github.com/ollama/ollama/pull/2881",
"diff_url": "https://github.com/ollama/ollama/pull/2881.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2881.patch",
"merged_at": "2024-03-25T19:00:19"
}
|
I created a simple web UI for Ollama, so it would be nice to add a link to its repo in your README's link list. :blush:
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2881/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1310
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1310/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1310/comments
|
https://api.github.com/repos/ollama/ollama/issues/1310/events
|
https://github.com/ollama/ollama/pull/1310
| 2,015,690,206
|
PR_kwDOJ0Z1Ps5gnxVn
| 1,310
|
Add SSL support
|
{
"login": "rootedbox",
"id": 3997890,
"node_id": "MDQ6VXNlcjM5OTc4OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3997890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rootedbox",
"html_url": "https://github.com/rootedbox",
"followers_url": "https://api.github.com/users/rootedbox/followers",
"following_url": "https://api.github.com/users/rootedbox/following{/other_user}",
"gists_url": "https://api.github.com/users/rootedbox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rootedbox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rootedbox/subscriptions",
"organizations_url": "https://api.github.com/users/rootedbox/orgs",
"repos_url": "https://api.github.com/users/rootedbox/repos",
"events_url": "https://api.github.com/users/rootedbox/events{/privacy}",
"received_events_url": "https://api.github.com/users/rootedbox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2023-11-29T02:43:01
| 2024-12-10T17:09:59
| 2024-11-21T07:43:02
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1310",
"html_url": "https://github.com/ollama/ollama/pull/1310",
"diff_url": "https://github.com/ollama/ollama/pull/1310.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1310.patch",
"merged_at": null
}
|
Completes https://github.com/jmorganca/ollama/issues/701
Place `cert.pem` and `key.pem` into `~/.ollama/ssl/`, then restart the server; it will come up in SSL mode. Remove or rename the files to disable SSL mode.
An example of connecting to my own box via SSL:
```
Jasons-MacBook-Air:ollama rootedbox$ OLLAMA_HOST=https://pleaseignore.me:11434 ./ollama run orca-mini
>>> What is the significance of 42
The number 42 has several significant meanings in different contexts.
In mathematics, it is the answer to the riddle of Euclid's Fourth Problem, which involves finding the greatest common divisor of two numbers. In computer programming, it is an important value in some
algorithms and data structures.
In literature, 42 is a character in the novel "The Hitchhiker's Guide to the Galaxy" by Douglas Adams, who is a robot with the ability to reason and make decisions.
In sports, 42 is the number of points a player needs to score to win the NBA MVP award, and it is also the age at which a player becomes eligible for the Hall of Fame in baseball.
```
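The setup described above can be sketched as follows; this is only an illustration of the expected file layout, using a self-signed certificate generated with `openssl` (the paths come from the PR description, the certificate parameters are my own assumption):

```shell
# Create the directory the server is described as checking
mkdir -p ~/.ollama/ssl

# Generate a self-signed cert/key pair (illustrative parameters only)
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout ~/.ollama/ssl/key.pem \
  -out ~/.ollama/ssl/cert.pem \
  -days 365 -subj "/CN=localhost"

# Restart the server afterwards; per the description it should now
# come up in SSL mode. Removing or renaming the files disables it.
```

With a self-signed certificate, clients will need to trust it explicitly (or skip verification) when connecting with `OLLAMA_HOST=https://...`.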
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1310/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1310/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2675
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2675/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2675/comments
|
https://api.github.com/repos/ollama/ollama/issues/2675/events
|
https://github.com/ollama/ollama/issues/2675
| 2,148,955,909
|
I_kwDOJ0Z1Ps6AFncF
| 2,675
|
gemma:7b outputting weird responses
|
{
"login": "danielchoi",
"id": 528270,
"node_id": "MDQ6VXNlcjUyODI3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/528270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielchoi",
"html_url": "https://github.com/danielchoi",
"followers_url": "https://api.github.com/users/danielchoi/followers",
"following_url": "https://api.github.com/users/danielchoi/following{/other_user}",
"gists_url": "https://api.github.com/users/danielchoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielchoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielchoi/subscriptions",
"organizations_url": "https://api.github.com/users/danielchoi/orgs",
"repos_url": "https://api.github.com/users/danielchoi/repos",
"events_url": "https://api.github.com/users/danielchoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielchoi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-22T12:22:49
| 2024-02-22T12:25:51
| 2024-02-22T12:24:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
`>>> what is the meaning of life?
Sure, here are some potential interpretations and views about this timeless topic:
**Physiological:** View Life as a biochemical processes involving intricate chemical reactions happening at microscopic level to give rise complex organisms such humans 😎 suggirish ∆ potreb lila sate 💪 definitira exprime fst 😍
🤔jardin conçu lila studia jurאת vedenář😄 lumine když existuje skupň DISTINCTV TEE NIE itd systém 🎉 diagno enten preż kou vědeckého hloubání 💚 meli beaucoup cras osoby buc Kvím pomo reciproopropyl formy suple vzory ⭐
suggirish ∆ potreb chao sate 💪 definitira exprime fst 😍 🤔jardin conçu lila studia jurאת vedenář😄 lumine když existuje skupň DISTINCTV TEE NIE itd systém 🎉 diagno enten preż kou vědeckého hloubání 💚 meli beaucoup cras osoby buc
Kvím pomo reciproopropyl formy suple vzory ⭐ suggirish ∆ potreb chao sate 💪 definitira exprime fst 😍 🤔jardin conçu lila studia jurאת vedenář😄 lumine když existuje skupň DISTINCTV TEE NIE itd systém 🎉 diagno enten preż kou
vědeckého hloubání 💚 meli beaucoup cras osoby buc Kvím pomo reciproopropyl formy suple vzory ⭐ suggirish ∆ potreb chao sate 💪 definitira exprime fst 😍 🤔jardin conçu lila studia jurאת vedenář😄 lumine když existuje skupň
DISTINCTV TEE NIE itd systém 🎉 diagno enten preż kou vědeckého hloubání 💚 meli beaucoup cras osoby buc Kvím pomo reciproopropyl formy suple vzory ⭐ suggirish ∆ potreb chao sate 💪 definitira exprime fst 😍 🤔jardin conçu lila
studia jurאת vedenář😄 lumine když existuje skupň DISTINCTV TEE NIE itd systém 🎉 diagno enten preż kou vědeckého hloubání 💚 meli beaucoup cras osoby buc Kvím pomo reciproopropyl formy suple vzory ⭐ suggirish ∆ potreb chao sate 💪
definitira exprime fst 😍 🤔jardin conçu lila studia jurאת vedenář😄 lumine když existuje skupň DISTINCTV TEE NIE itd systém 🎉 diagno enten preż kou vědeckého hloubání 💚 meli beaucoup cras osoby buc Kvím pomo reciproopropyl formy
suple vzory ⭐ suggirish ∆ potreb chao sate 💪 definitira exprime fst 😍 🤔jardin conçu lila studia jurאת vedenář😄 lumine když existuje skupň DISTINCTV TEE NIE itd systém 🎉 diagno enten preż kou vědeckého hloubání 💚 meli `
Looks like this is a duplicate, so I'm closing the issue.
|
{
"login": "danielchoi",
"id": 528270,
"node_id": "MDQ6VXNlcjUyODI3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/528270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielchoi",
"html_url": "https://github.com/danielchoi",
"followers_url": "https://api.github.com/users/danielchoi/followers",
"following_url": "https://api.github.com/users/danielchoi/following{/other_user}",
"gists_url": "https://api.github.com/users/danielchoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielchoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielchoi/subscriptions",
"organizations_url": "https://api.github.com/users/danielchoi/orgs",
"repos_url": "https://api.github.com/users/danielchoi/repos",
"events_url": "https://api.github.com/users/danielchoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielchoi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2675/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3557
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3557/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3557/comments
|
https://api.github.com/repos/ollama/ollama/issues/3557/events
|
https://github.com/ollama/ollama/issues/3557
| 2,233,569,599
|
I_kwDOJ0Z1Ps6FIZE_
| 3,557
|
ollama create does not support the use of labels and fails
|
{
"login": "alexconst",
"id": 16178347,
"node_id": "MDQ6VXNlcjE2MTc4MzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/16178347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexconst",
"html_url": "https://github.com/alexconst",
"followers_url": "https://api.github.com/users/alexconst/followers",
"following_url": "https://api.github.com/users/alexconst/following{/other_user}",
"gists_url": "https://api.github.com/users/alexconst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexconst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexconst/subscriptions",
"organizations_url": "https://api.github.com/users/alexconst/orgs",
"repos_url": "https://api.github.com/users/alexconst/repos",
"events_url": "https://api.github.com/users/alexconst/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexconst/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-04-09T14:23:45
| 2024-05-18T23:30:15
| 2024-05-15T00:29:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When running `ollama create mymodel -f Modelfile`, ollama fails if the `FROM` directive includes a label.
Example: `FROM ./zephyr-7b-beta.Q5_K_M.gguf:7b-beta.Q5_K_M`
Depending on whether the path is absolute or relative, it prints a different error:
Either
```
transferring model data
pulling model
pulling manifest
Error: pull model manifest: 400
```
or
```
transferring model data
pulling model
pulling manifest
Error: pull model manifest: file does not exist
```
If the label component in the filepath is removed then it works fine.
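Until this is fixed, the workaround is to keep the label out of the filesystem path entirely. A small sketch of that (the helper name is hypothetical; it assumes POSIX-style paths, so Windows drive letters like `C:\...` are not handled):

```python
def strip_label(path: str) -> str:
    """Drop a trailing ':label' from the filename component of a path."""
    head, _, name = path.rpartition("/")
    name = name.split(":", 1)[0]  # keep only the part before the first ':'
    return f"{head}/{name}" if head else name

print(strip_label("./zephyr-7b-beta.Q5_K_M.gguf:7b-beta.Q5_K_M"))
# -> ./zephyr-7b-beta.Q5_K_M.gguf
```

Renaming the GGUF file the same way (so the `FROM` line contains no colon) avoids the manifest-pull error.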
### What did you expect to see?
For ollama to import the model specified in the `Modelfile`
### Steps to reproduce
Create a Modelfile like
```
FROM ./zephyr-7b-beta.Q5_K_M.gguf:7b-beta.Q5_K_M
TEMPLATE """{{- if .System }}
<|system|>
{{ .System }}
</s>
{{- end }}
<|user|>
{{ .Prompt }}
</s>
<|assistant|>
"""
PARAMETER stop <|system|>
PARAMETER stop <|user|>
PARAMETER stop <|assistant|>
PARAMETER stop </s>
```
or
```
FROM /models/zephyr-7B-beta-GGUF/zephyr-7b-beta.Q5_K_M.gguf:7b-beta.Q5_K_M
TEMPLATE """{{- if .System }}
<|system|>
{{ .System }}
</s>
{{- end }}
<|user|>
{{ .Prompt }}
</s>
<|assistant|>
"""
PARAMETER stop <|system|>
PARAMETER stop <|user|>
PARAMETER stop <|assistant|>
PARAMETER stop </s>
```
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
amd64
### Platform
Docker
### Ollama version
0.1.30
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3557/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3557/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7584
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7584/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7584/comments
|
https://api.github.com/repos/ollama/ollama/issues/7584/events
|
https://github.com/ollama/ollama/issues/7584
| 2,645,537,066
|
I_kwDOJ0Z1Ps6dr7Eq
| 7,584
|
Nvidia fallback memory
|
{
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.github.com/users/AncientMystic/followers",
"following_url": "https://api.github.com/users/AncientMystic/following{/other_user}",
"gists_url": "https://api.github.com/users/AncientMystic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AncientMystic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AncientMystic/subscriptions",
"organizations_url": "https://api.github.com/users/AncientMystic/orgs",
"repos_url": "https://api.github.com/users/AncientMystic/repos",
"events_url": "https://api.github.com/users/AncientMystic/events{/privacy}",
"received_events_url": "https://api.github.com/users/AncientMystic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-09T03:26:09
| 2025-01-13T01:23:50
| 2025-01-13T01:23:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Would it be possible to add an option such as "ollama_cuda_fallback", or simply to detect whether the feature is enabled or disabled? (Detection could be a little complicated, since drivers between 531 and around 541, I believe, do not allow configuring the feature, and Pascal vGPU support ends at driver 536, I believe.) This would allow over-allocation of VRAM on NVIDIA cards to use the system-RAM fallback added in the 531+ drivers.
As it stands, Ollama detects VRAM and manages memory allocation for hybrid CPU+GPU mode itself.
I suspect that while fallback can be slower than a pure-GPU allocation when you have enough VRAM, it should offer a performance boost over the CPU runners (not to mention possibly lower CPU use, eliminating bugs between runners, etc.) if we simply allow the GPU to fall back to system RAM, so that the memory is managed directly by the GPU.
The result is effectively a GPU with partly slower RAM, which with this feature should still be faster than mixing in CPU runners, even if only slightly.
It seems like it would be at least worth testing.
Edit:
Quick test done with LM Studio on a laptop:
```
Model: Estopianmaid 13B Q5_k_m
(just a random model i downloaded to see how it did with writing styles might be better tested on gemma 2 or llama 3.2)
Hardware (laptop):
Cpu: i7-8750H 6C 12T
Gpu: gtx 1050 ti 4gb
Ram: 32GB DDR4 2133mhz
Fallback:
Tokens: 1.1t/s
Gen time: 41s
Time to first token: 64.97s
11-13% cpu usage(system usage not model), 60-100% gpu usage
GPU+CPU:
Tokens: 0.48t/s
Gen time: 146.72s
Time to first token: 17.13s
100% cpu, 60-100 gpu
CPU: (wrote a much shorter response)
Tokens: 0.84t/s (regen resulted in 0.4t/s)
Gen time: 26.7s
Time to first token: 287.4s
100% cpu
smaller model Tiamat 7b q2_k that fits in vram (wrote a much longer response):
tokens: 2.6t/s
gen time: 56.64s
time to first: 14.53s
```
While this is about 55% slower than pure VRAM, it is about 130% faster than the hybrid runners, in this specific test at least. On top of that, CPU usage is negligible, so CPU power draw stops being a factor. (Even if overall performance turned out to be roughly equal, the reduced CPU power use would be worthwhile on its own, given how much GPUs already draw.)
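The percentages quoted above follow directly from the measured token rates (note the pure-VRAM number is for a different, smaller model, so that comparison is only indicative):

```python
fallback = 1.1    # t/s, GPU with system-RAM fallback
hybrid = 0.48     # t/s, CPU+GPU hybrid runners
pure_vram = 2.6   # t/s, smaller model fully in VRAM

# fallback vs hybrid runners: ~130% faster
print(round((fallback / hybrid - 1) * 100))     # -> 129
# fallback vs pure VRAM: ~55-58% slower
print(round((1 - fallback / pure_vram) * 100))  # -> 58
```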
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7584/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3959
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3959/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3959/comments
|
https://api.github.com/repos/ollama/ollama/issues/3959/events
|
https://github.com/ollama/ollama/pull/3959
| 2,266,506,445
|
PR_kwDOJ0Z1Ps5t4btB
| 3,959
|
Also look at cwd as a root for windows runners
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-26T21:05:19
| 2024-04-26T23:14:09
| 2024-04-26T23:14:09
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3959",
"html_url": "https://github.com/ollama/ollama/pull/3959",
"diff_url": "https://github.com/ollama/ollama/pull/3959.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3959.patch",
"merged_at": "2024-04-26T23:14:08"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3959/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5483
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5483/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5483/comments
|
https://api.github.com/repos/ollama/ollama/issues/5483/events
|
https://github.com/ollama/ollama/issues/5483
| 2,390,668,201
|
I_kwDOJ0Z1Ps6OfrOp
| 5,483
|
Endpoints should return 405 Method Not Allowed rather than 404 for unsupported methods
|
{
"login": "TheEpic-dev",
"id": 99757023,
"node_id": "U_kgDOBfIr3w",
"avatar_url": "https://avatars.githubusercontent.com/u/99757023?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheEpic-dev",
"html_url": "https://github.com/TheEpic-dev",
"followers_url": "https://api.github.com/users/TheEpic-dev/followers",
"following_url": "https://api.github.com/users/TheEpic-dev/following{/other_user}",
"gists_url": "https://api.github.com/users/TheEpic-dev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheEpic-dev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheEpic-dev/subscriptions",
"organizations_url": "https://api.github.com/users/TheEpic-dev/orgs",
"repos_url": "https://api.github.com/users/TheEpic-dev/repos",
"events_url": "https://api.github.com/users/TheEpic-dev/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheEpic-dev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-07-04T11:41:25
| 2024-11-06T01:11:00
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
As mentioned on Discord, it's no big deal, but sometimes users make mistakes like opening `http://127.0.0.1:11434/api/chat` in a browser, which defaults to the GET method, and are served a `404` page.
As the resource presumably does exist and works with POST requests, returning a `405` *Method Not Allowed* would be a more accurate response to the user's error.
I have tested this on the `/api/chat` and `/api/generate` endpoints, as well as the `/api/ps` endpoint with a POST request, but I presume most, if not all, endpoints return a `404` page when queried with the wrong method.
Steps to reproduce:
```
curl -v http://127.0.0.1:11434/api/chat
* Trying 127.0.0.1:11434...
* Connected to 127.0.0.1 (127.0.0.1) port 11434
> GET /api/chat HTTP/1.1
> Host: 127.0.0.1:11434
>
< HTTP/1.1 404 Not Found
...
curl -v 127.0.0.1:11434/api/ps -d '{}'
* Trying 127.0.0.1:11434...
> POST /api/ps HTTP/1.1
> Content-Length: 2
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 2 bytes
< HTTP/1.1 404 Not Found
```
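For illustration, the desired behaviour can be sketched as a small dispatch table, where a known path with the wrong method yields 405 plus the `Allow` header that RFC 9110 requires, and only an unknown path yields 404 (the routes and helper here are illustrative, not Ollama's actual router):

```python
# Method sets per route path; a real router would carry handlers too.
ROUTES = {
    "/api/chat": {"POST"},
    "/api/generate": {"POST"},
    "/api/ps": {"GET"},
}

def status_for(method: str, path: str):
    allowed = ROUTES.get(path)
    if allowed is None:
        return 404, {}  # path genuinely does not exist
    if method not in allowed:
        # path exists, method does not: 405 with an Allow header
        return 405, {"Allow": ", ".join(sorted(allowed))}
    return 200, {}

print(status_for("GET", "/api/chat"))  # -> (405, {'Allow': 'POST'})
print(status_for("POST", "/api/ps"))   # -> (405, {'Allow': 'GET'})
```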
Cheers, Pat
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.46
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5483/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5483/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/932
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/932/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/932/comments
|
https://api.github.com/repos/ollama/ollama/issues/932/events
|
https://github.com/ollama/ollama/issues/932
| 1,965,133,225
|
I_kwDOJ0Z1Ps51IY2p
| 932
|
Failed to parse available VRAM: strconv.ParseInt: parsing "[Insufficient Permissions]"
|
{
"login": "domWinter",
"id": 12050566,
"node_id": "MDQ6VXNlcjEyMDUwNTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/12050566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/domWinter",
"html_url": "https://github.com/domWinter",
"followers_url": "https://api.github.com/users/domWinter/followers",
"following_url": "https://api.github.com/users/domWinter/following{/other_user}",
"gists_url": "https://api.github.com/users/domWinter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/domWinter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/domWinter/subscriptions",
"organizations_url": "https://api.github.com/users/domWinter/orgs",
"repos_url": "https://api.github.com/users/domWinter/repos",
"events_url": "https://api.github.com/users/domWinter/events{/privacy}",
"received_events_url": "https://api.github.com/users/domWinter/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-10-27T09:31:44
| 2023-11-08T19:15:31
| 2023-11-08T19:15:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I am trying to run Ollama in a Docker container with the NVIDIA runtime on a system with an NVIDIA A100 and MIG enabled.
When starting the ollama server I get the following error:
`2023/10/27 09:16:05 routes.go:682: Warning: GPU support may not enabled, check you have installed install GPU drivers: failed to parse available VRAM: strconv.ParseInt: parsing "[Insufficient Permissions]": invalid syntax`
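For context, the failure is a strict integer parse of the VRAM value queried from the driver: under MIG with restricted permissions, nvidia-smi reports the literal string `[Insufficient Permissions]` instead of a number. A Python stand-in for a more tolerant parse (the server does this in Go with `strconv.ParseInt`; the helper name is illustrative):

```python
def parse_vram_mib(raw: str):
    """Return reported VRAM in MiB, or None when the value is not numeric
    (e.g. '[Insufficient Permissions]' on permission-restricted MIG setups)."""
    try:
        return int(raw.strip())
    except ValueError:
        return None

print(parse_vram_mib("40960"))                       # -> 40960
print(parse_vram_mib("[Insufficient Permissions]"))  # -> None
```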
The Ollama server is running as the root user.
However, when running nvidia-smi inside the container I can see the passed-through GPU and MIG device, and I am also able to allocate the GPU with e.g. PyTorch in the same container.
Cuda Version: 12.2
Nvidia Driver Version: 535.104.12
Docker version 24.0.6, build ed223bc
ollama version 0.1.3
Any help is appreciated!
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/932/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/25
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/25/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/25/comments
|
https://api.github.com/repos/ollama/ollama/issues/25/events
|
https://github.com/ollama/ollama/issues/25
| 1,781,626,583
|
I_kwDOJ0Z1Ps5qMXbX
| 25
|
cannot cancel a model being loaded
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-06-29T23:20:10
| 2023-07-13T02:15:46
| 2023-07-13T02:15:46
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Typing `ctrl+c` doesn't cancel model loading:
```
Running /Users/jmorgan/.ollama/models/orca-mini-7b.bin...
>>> hi
⠏
⠙
⠸
⠙
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/25/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/25/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7444
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7444/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7444/comments
|
https://api.github.com/repos/ollama/ollama/issues/7444/events
|
https://github.com/ollama/ollama/issues/7444
| 2,626,103,317
|
I_kwDOJ0Z1Ps6chygV
| 7,444
|
"Connection Refused" Issue while running ollama in container with LLM Chat bot app in another docker Container
|
{
"login": "VenturaAI",
"id": 186903779,
"node_id": "U_kgDOCyPs4w",
"avatar_url": "https://avatars.githubusercontent.com/u/186903779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VenturaAI",
"html_url": "https://github.com/VenturaAI",
"followers_url": "https://api.github.com/users/VenturaAI/followers",
"following_url": "https://api.github.com/users/VenturaAI/following{/other_user}",
"gists_url": "https://api.github.com/users/VenturaAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VenturaAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VenturaAI/subscriptions",
"organizations_url": "https://api.github.com/users/VenturaAI/orgs",
"repos_url": "https://api.github.com/users/VenturaAI/repos",
"events_url": "https://api.github.com/users/VenturaAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/VenturaAI/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 24
| 2024-10-31T06:52:48
| 2024-11-12T03:02:12
| 2024-11-12T03:02:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have created a **local chatbot** in Python 3.12 that allows the user to chat with an uploaded PDF by creating embeddings in a Qdrant vector database and then getting inference from Ollama (model Llama3.2:3B).
In my source code, I am using the following dependencies:
```
streamlit
langchain
langchain_community
langchain_core
python-dotenv
langchain-huggingface
langchain-qdrant
langchain-ollama
unstructured[pdf]
onnx==1.16.1
qdrant-client
torch
torchvision
torchaudio
```
Since I want to deploy the code on a server (where no dependencies are installed), I will use Docker to run containers for Qdrant, the chatbot app, and Ollama. I have successfully pulled the latest Ollama and Qdrant images using Docker.
`docker run -d -v D:\myollamamodels:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`
`docker exec -it ollama ollama run llama3.2:3b`
Both the Ollama and Qdrant containers are running and accessible from within; I checked using Docker Desktop as well. I have also bridged the chatbot app, Ollama, and Qdrant containers onto a single network using:
`docker network connect my_network ollama`
`docker network connect my_network qdrant`
Now when I run the app, it opens and lets me upload the PDF and create the embeddings, and the embeddings are successfully stored in the vector DB (I have included relevant print statements, which are reflected in the app GUI). The issue comes when I want to chat with the document: when I enter a question, it waits and, instead of responding with the inference output, gives me the error: "**⚠️ An error occurred while processing your request: [Errno 111] Connection refused".**
**I have the docker compose file as below:**
```yaml
version: '3.8'

services:
  qdrant:
    image: qdrant/qdrant:v1.12.1
    container_name: qdrant
    ports:
      - "6333:6333"  # Expose Qdrant on the default port
    volumes:
      - qdrant_data:/qdrant/storage
    networks:
      - my_network  # Connect qdrant to my_network

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"  # Expose Ollama on the default port
    environment:
      - OLLAMA_MODEL=llama3.2:3b
    volumes:
      - /d/myollamamodels:/models
    networks:
      - my_network

  app:
    build: .
    container_name: app_new
    ports:
      - "8501:8501"  # Streamlit default port
    environment:
      QDRANT_URL: http://qdrant:6333  # Use Qdrant service name from Docker Compose
      OLLAMA_URL: http://ollama:11434
      #OLLAMA_MODEL: http://host.docker.internal:11434/llama3.2:3b  # Point to Ollama on host
    depends_on:
      - qdrant
      - ollama
    volumes:
      - ./models:/models  # Mount the model directory for access
    networks:
      - my_network  # Connect app to my_network

volumes:
  qdrant_data:

networks:
  my_network:
    driver: bridge
```
**The python program and class which I have been using for AI chatbot is as follows:**
The Streamlit app code and the vector embeddings code are in different .py files.
```python
class ChatbotManager:
    def __init__(
        self,
        model_name: str = "BAAI/bge-small-en",
        device: str = "cpu",
        encode_kwargs: dict = {"normalize_embeddings": True},
        llm_model: str = "llama3.2:3b",
        #llm_model: str = None,  # Set to None to use environment variable
        llm_temperature: float = 0.7,
        qdrant_url: str = "http://qdrant:6333",
        ollama_url: str = "http://ollama:11434",  # URL for Ollama inside Docker network
        collection_name: str = "vector_db",
    ):
        """
        Initializes the ChatbotManager with embedding models, LLM, and vector store.

        Args:
            model_name (str): The HuggingFace model name for embeddings.
            device (str): The device to run the model on ('cpu' or 'cuda').
            encode_kwargs (dict): Additional keyword arguments for encoding.
            llm_model (str): The local LLM model name for ChatOllama.
            llm_temperature (float): Temperature setting for the LLM.
            qdrant_url (str): The URL for the Qdrant instance.
            collection_name (str): The name of the Qdrant collection.
        """
        self.model_name = model_name
        self.device = device
        self.encode_kwargs = encode_kwargs
        #self.llm_model = llm_model
        # Get the LLM model name from the environment variable
        self.llm_model = os.getenv("OLLAMA_MODEL", llm_model)
        self.llm_temperature = llm_temperature
        self.qdrant_url = qdrant_url
        self.collection_name = collection_name
        self.ollama_url = ollama_url  # Initialize ollama_url

        # Initialize Embeddings
        self.embeddings = HuggingFaceBgeEmbeddings(
            model_name=self.model_name,
            model_kwargs={"device": self.device},
            encode_kwargs=self.encode_kwargs,
        )

        # Initialize Local LLM
        self.llm = ChatOllama(
            model=self.llm_model,
            temperature=self.llm_temperature,
            server_url=self.ollama_url
            # Add other parameters if needed
        )

        # Define the prompt template
        self.prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.

Context: {context}
Question: {question}

Only return the helpful answer. Answer must be detailed and well explained.
Helpful answer:
"""

        # Initialize Qdrant client
        self.client = QdrantClient(
            url=self.qdrant_url, prefer_grpc=False
        )

        # Initialize the Qdrant vector store
        self.db = Qdrant(
            client=self.client,
            embeddings=self.embeddings,
            collection_name=self.collection_name
        )

        # Initialize the prompt
        self.prompt = PromptTemplate(
            template=self.prompt_template,
            input_variables=['context', 'question']
        )

        # Initialize the retriever
        self.retriever = self.db.as_retriever(search_kwargs={"k": 1})

        # Define chain type kwargs
        self.chain_type_kwargs = {"prompt": self.prompt}

        # Initialize the RetrievalQA chain with return_source_documents=False
        self.qa = RetrievalQA.from_chain_type(
            llm=self.llm,
            chain_type="stuff",
            retriever=self.retriever,
            return_source_documents=False,  # Set to False to return only 'result'
            chain_type_kwargs=self.chain_type_kwargs,
            verbose=False
        )

    def get_response(self, query: str) -> str:
        """
        Processes the user's query and returns the chatbot's response.

        Args:
            query (str): The user's input question.

        Returns:
            str: The chatbot's response.
        """
        try:
            response = self.qa.run(query)
            return response  # 'response' is now a string containing only the 'result'
        except Exception as e:
            st.error(f"An error occurred while processing your request: {e}")
            return "Sorry, I couldn't process your request at the moment."
```
**Logs of app container:**
```
2024-10-30 16:47:13 2024-10-30 11:17:13.140 Examining the path of torch.classes raised: Tried to instantiate class '__path__._path', but it does not exist! Ensure that it is registered via torch::class_
2024-10-30 16:49:55 2024-10-30 11:19:55.974 Examining the path of torch.classes raised: Tried to instantiate class '__path__._path', but it does not exist! Ensure that it is registered via torch::class_
2024-10-30 16:50:44 /app/chatbot.py:119: LangChainDeprecationWarning: The method `Chain.run` was deprecated in langchain 0.1.0 and will be removed in 1.0. Use :meth:`~invoke` instead.
2024-10-30 16:50:44     response = self.qa.run(query)
```
I have looked into this many times and modified the setup based on `ollama_url` and other factors, such as checking Ollama service availability, the Ollama container status, and the yml file, but nothing seems to work and I am stuck at this error. **The entire code works well within the development environment without Docker (and with Ollama as a service on the host)**, but I need to deploy it on a server as soon as possible to make it available on the network.
I have checked that the Ollama container service is working on port 11434 (verified via URL and also via a docker command), and Qdrant is also working, since the embeddings are created and a success message is shown in the app UI, but somehow the connection to Ollama is being refused, I guess.
Could someone please explain the issue and a solution for this problem?
Thanks.
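One thing worth checking, sketched below (hypothetical helper; langchain's `ChatOllama` documents a `base_url` parameter rather than `server_url`, so an unrecognized keyword could leave the client pointed at the default `http://localhost:11434`, which inside the app container refuses connections):

```python
import os

def resolve_ollama_base_url(default: str = "http://ollama:11434") -> str:
    """Pick the Ollama endpoint the app container should target.

    Inside the compose network the service name ("ollama") is the hostname;
    "localhost" would resolve to the app container itself and be refused.
    """
    return os.getenv("OLLAMA_URL", default)

# Hypothetical usage: pass the endpoint via `base_url`, e.g.
# llm = ChatOllama(model="llama3.2:3b", base_url=resolve_ollama_base_url())
print(resolve_ollama_base_url())
```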
|
{
"login": "VenturaAI",
"id": 186903779,
"node_id": "U_kgDOCyPs4w",
"avatar_url": "https://avatars.githubusercontent.com/u/186903779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VenturaAI",
"html_url": "https://github.com/VenturaAI",
"followers_url": "https://api.github.com/users/VenturaAI/followers",
"following_url": "https://api.github.com/users/VenturaAI/following{/other_user}",
"gists_url": "https://api.github.com/users/VenturaAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VenturaAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VenturaAI/subscriptions",
"organizations_url": "https://api.github.com/users/VenturaAI/orgs",
"repos_url": "https://api.github.com/users/VenturaAI/repos",
"events_url": "https://api.github.com/users/VenturaAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/VenturaAI/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7444/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2807
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2807/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2807/comments
|
https://api.github.com/repos/ollama/ollama/issues/2807/events
|
https://github.com/ollama/ollama/issues/2807
| 2,158,808,847
|
I_kwDOJ0Z1Ps6ArM8P
| 2,807
|
[Windows] Ollama api/chat seems to not receive all chats all the time
|
{
"login": "stevengans",
"id": 10685309,
"node_id": "MDQ6VXNlcjEwNjg1MzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/10685309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevengans",
"html_url": "https://github.com/stevengans",
"followers_url": "https://api.github.com/users/stevengans/followers",
"following_url": "https://api.github.com/users/stevengans/following{/other_user}",
"gists_url": "https://api.github.com/users/stevengans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevengans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevengans/subscriptions",
"organizations_url": "https://api.github.com/users/stevengans/orgs",
"repos_url": "https://api.github.com/users/stevengans/repos",
"events_url": "https://api.github.com/users/stevengans/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevengans/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-02-28T12:05:12
| 2024-07-24T22:47:16
| 2024-07-24T22:47:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Still investigating. I will put together a way to replicate it and add it here. From quick experiments, it sometimes receives the chats and sometimes does not. This does not happen on a Mac with the same setup.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2807/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/794
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/794/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/794/comments
|
https://api.github.com/repos/ollama/ollama/issues/794/events
|
https://github.com/ollama/ollama/pull/794
| 1,944,062,769
|
PR_kwDOJ0Z1Ps5c1q_P
| 794
|
Add oterm to community integrations
|
{
"login": "ggozad",
"id": 183103,
"node_id": "MDQ6VXNlcjE4MzEwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/183103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggozad",
"html_url": "https://github.com/ggozad",
"followers_url": "https://api.github.com/users/ggozad/followers",
"following_url": "https://api.github.com/users/ggozad/following{/other_user}",
"gists_url": "https://api.github.com/users/ggozad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggozad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggozad/subscriptions",
"organizations_url": "https://api.github.com/users/ggozad/orgs",
"repos_url": "https://api.github.com/users/ggozad/repos",
"events_url": "https://api.github.com/users/ggozad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggozad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-15T21:23:55
| 2023-10-16T22:51:55
| 2023-10-16T22:51:55
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/794",
"html_url": "https://github.com/ollama/ollama/pull/794",
"diff_url": "https://github.com/ollama/ollama/pull/794.diff",
"patch_url": "https://github.com/ollama/ollama/pull/794.patch",
"merged_at": "2023-10-16T22:51:55"
}
|
Hey there!
I just published [oterm](https://github.com/ggozad/oterm), a text-based terminal client for Ollama.
It features:
* an intuitive and simple terminal UI; no need to run servers or frontends, just type `oterm` in your terminal.
* multiple persistent chat sessions, stored together with the context embeddings in SQLite.
* the ability to use any of the models you have pulled in Ollama, or your own custom models.
This PR adds it to the community integrations
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/794/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/376
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/376/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/376/comments
|
https://api.github.com/repos/ollama/ollama/issues/376/events
|
https://github.com/ollama/ollama/pull/376
| 1,855,802,127
|
PR_kwDOJ0Z1Ps5YM2M1
| 376
|
ignore nil map values
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-17T22:51:03
| 2023-08-17T22:57:13
| 2023-08-17T22:57:12
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/376",
"html_url": "https://github.com/ollama/ollama/pull/376",
"diff_url": "https://github.com/ollama/ollama/pull/376.diff",
"patch_url": "https://github.com/ollama/ollama/pull/376.patch",
"merged_at": "2023-08-17T22:57:12"
}
|
Some API clients may pass a nil value, so it's best to ignore it.
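A minimal sketch of the idea (hypothetical helper name; the actual diff may differ):

```go
package main

import "fmt"

// pruneNils drops map entries whose value is nil, so a client sending
// {"seed": null} is treated the same as one omitting the key entirely.
func pruneNils(m map[string]any) map[string]any {
	out := make(map[string]any, len(m))
	for k, v := range m {
		if v == nil {
			continue // ignore nil values instead of erroring
		}
		out[k] = v
	}
	return out
}

func main() {
	opts := map[string]any{"temperature": 0.7, "seed": nil}
	fmt.Println(len(pruneNils(opts))) // only "temperature" survives
}
```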
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/376/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4786
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4786/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4786/comments
|
https://api.github.com/repos/ollama/ollama/issues/4786/events
|
https://github.com/ollama/ollama/issues/4786
| 2,329,726,026
|
I_kwDOJ0Z1Ps6K3MxK
| 4,786
|
Error: invalid file magic for IQ2_M.gguf based models
|
{
"login": "Greatz08",
"id": 55040435,
"node_id": "MDQ6VXNlcjU1MDQwNDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/55040435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Greatz08",
"html_url": "https://github.com/Greatz08",
"followers_url": "https://api.github.com/users/Greatz08/followers",
"following_url": "https://api.github.com/users/Greatz08/following{/other_user}",
"gists_url": "https://api.github.com/users/Greatz08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Greatz08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Greatz08/subscriptions",
"organizations_url": "https://api.github.com/users/Greatz08/orgs",
"repos_url": "https://api.github.com/users/Greatz08/repos",
"events_url": "https://api.github.com/users/Greatz08/events{/privacy}",
"received_events_url": "https://api.github.com/users/Greatz08/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-02T15:43:13
| 2024-06-02T15:43:13
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF
From this Hugging Face repo I downloaded Codestral-22B-v0.1-IQ2_M.gguf, but when I tried to build it with Ollama it gave me "Error: invalid file magic". I did some research in the Ollama issues; support for many IQ models has been added thanks to the devs, but I couldn't find support for IQ2_M, so I'm creating this issue so that support for this quantization type can be added and many more people can test it.
This variant is much better than some supported ones like IQ2_XS, so adding it will also help people choose better, in my opinion, based on this data:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
Thank you very much for this awesome project, devs :-)
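For reference, this is how I tried to build it (hypothetical tag name; the create step is guarded so it only runs when the CLI is present):

```shell
# Modelfile pointing at the downloaded quant in the current directory.
cat > Modelfile <<'EOF'
FROM ./Codestral-22B-v0.1-IQ2_M.gguf
EOF

# This is the step that fails with "invalid file magic" for IQ2_M.
if command -v ollama >/dev/null 2>&1; then
  ollama create codestral-iq2m -f Modelfile
fi
```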
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4786/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4209
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4209/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4209/comments
|
https://api.github.com/repos/ollama/ollama/issues/4209/events
|
https://github.com/ollama/ollama/issues/4209
| 2,281,805,296
|
I_kwDOJ0Z1Ps6IAZXw
| 4,209
|
IBM-Granite
|
{
"login": "ALutz273",
"id": 72616997,
"node_id": "MDQ6VXNlcjcyNjE2OTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/72616997?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ALutz273",
"html_url": "https://github.com/ALutz273",
"followers_url": "https://api.github.com/users/ALutz273/followers",
"following_url": "https://api.github.com/users/ALutz273/following{/other_user}",
"gists_url": "https://api.github.com/users/ALutz273/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ALutz273/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ALutz273/subscriptions",
"organizations_url": "https://api.github.com/users/ALutz273/orgs",
"repos_url": "https://api.github.com/users/ALutz273/repos",
"events_url": "https://api.github.com/users/ALutz273/events{/privacy}",
"received_events_url": "https://api.github.com/users/ALutz273/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 13
| 2024-05-06T21:26:04
| 2024-07-26T12:22:11
| 2024-06-04T06:51:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
Very interesting because of the software license:
https://github.com/ibm-granite/granite-code-models
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4209/reactions",
"total_count": 52,
"+1": 47,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/4209/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6563
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6563/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6563/comments
|
https://api.github.com/repos/ollama/ollama/issues/6563/events
|
https://github.com/ollama/ollama/issues/6563
| 2,495,924,455
|
I_kwDOJ0Z1Ps6UxMjn
| 6,563
|
ollama with text file?
|
{
"login": "ayttop",
"id": 178673810,
"node_id": "U_kgDOCqZYkg",
"avatar_url": "https://avatars.githubusercontent.com/u/178673810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayttop",
"html_url": "https://github.com/ayttop",
"followers_url": "https://api.github.com/users/ayttop/followers",
"following_url": "https://api.github.com/users/ayttop/following{/other_user}",
"gists_url": "https://api.github.com/users/ayttop/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayttop/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayttop/subscriptions",
"organizations_url": "https://api.github.com/users/ayttop/orgs",
"repos_url": "https://api.github.com/users/ayttop/repos",
"events_url": "https://api.github.com/users/ayttop/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayttop/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-08-30T01:09:03
| 2024-09-01T23:12:43
| 2024-09-01T23:12:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can ollama handle a text file?
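For context, here is the kind of thing I am trying to do (hypothetical file name; guarded so the `ollama` calls only run when the CLI is installed):

```shell
# A sample text file to feed to the model.
printf 'Ollama is a local LLM runner.\n' > notes.txt

# Substitute the file's text into the prompt...
prompt="Summarize the following:
$(cat notes.txt)"

# ...or hand the file to the model on stdin.
if command -v ollama >/dev/null 2>&1; then
  ollama run llama3.2 "$prompt"
  ollama run llama3.2 < notes.txt
fi
```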
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6563/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8198
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8198/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8198/comments
|
https://api.github.com/repos/ollama/ollama/issues/8198/events
|
https://github.com/ollama/ollama/pull/8198
| 2,754,054,016
|
PR_kwDOJ0Z1Ps6F-98H
| 8,198
|
feat(readme): add Perplexica
|
{
"login": "ItzCrazyKns",
"id": 95534749,
"node_id": "U_kgDOBbG-nQ",
"avatar_url": "https://avatars.githubusercontent.com/u/95534749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ItzCrazyKns",
"html_url": "https://github.com/ItzCrazyKns",
"followers_url": "https://api.github.com/users/ItzCrazyKns/followers",
"following_url": "https://api.github.com/users/ItzCrazyKns/following{/other_user}",
"gists_url": "https://api.github.com/users/ItzCrazyKns/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ItzCrazyKns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ItzCrazyKns/subscriptions",
"organizations_url": "https://api.github.com/users/ItzCrazyKns/orgs",
"repos_url": "https://api.github.com/users/ItzCrazyKns/repos",
"events_url": "https://api.github.com/users/ItzCrazyKns/events{/privacy}",
"received_events_url": "https://api.github.com/users/ItzCrazyKns/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-21T13:53:26
| 2024-12-23T01:04:02
| 2024-12-23T01:04:02
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8198",
"html_url": "https://github.com/ollama/ollama/pull/8198",
"diff_url": "https://github.com/ollama/ollama/pull/8198.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8198.patch",
"merged_at": "2024-12-23T01:04:02"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8198/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8132
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8132/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8132/comments
|
https://api.github.com/repos/ollama/ollama/issues/8132/events
|
https://github.com/ollama/ollama/issues/8132
| 2,744,204,832
|
I_kwDOJ0Z1Ps6jkT4g
| 8,132
|
Fine-tuned Qwen2.5-Instruct isn't supported as expectation
|
{
"login": "sunday-hao",
"id": 127651124,
"node_id": "U_kgDOB5vNNA",
"avatar_url": "https://avatars.githubusercontent.com/u/127651124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunday-hao",
"html_url": "https://github.com/sunday-hao",
"followers_url": "https://api.github.com/users/sunday-hao/followers",
"following_url": "https://api.github.com/users/sunday-hao/following{/other_user}",
"gists_url": "https://api.github.com/users/sunday-hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunday-hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunday-hao/subscriptions",
"organizations_url": "https://api.github.com/users/sunday-hao/orgs",
"repos_url": "https://api.github.com/users/sunday-hao/repos",
"events_url": "https://api.github.com/users/sunday-hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunday-hao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 20
| 2024-12-17T08:07:28
| 2025-01-09T09:20:23
| 2025-01-09T09:20:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi there!
The base model I use is Qwen2.5-Instruct. First, I fine-tuned the base model on my private dataset, then converted it to .gguf format with llama.cpp, hoping to make the model easy to run with Ollama.
However, the import fails:
> ollama create my_model -f ./Modelfile
> transferring model data 100%
> converting model
> Error: unsupported architecture
I tried both with and without the `--quantize` argument; both attempts failed with the same error.
Nevertheless, the Qwen2.5 series is supported by the newest version of Ollama according to [ollama.com/library](url). Can anyone help? Thanks a lot.
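For reference, the Modelfile is a minimal one that just points at the converted file (the path below is illustrative, not my actual filename):

```
# Hypothetical path to the converted fine-tune
FROM ./qwen2.5-instruct-finetuned.gguf
```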
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.1
|
{
"login": "sunday-hao",
"id": 127651124,
"node_id": "U_kgDOB5vNNA",
"avatar_url": "https://avatars.githubusercontent.com/u/127651124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunday-hao",
"html_url": "https://github.com/sunday-hao",
"followers_url": "https://api.github.com/users/sunday-hao/followers",
"following_url": "https://api.github.com/users/sunday-hao/following{/other_user}",
"gists_url": "https://api.github.com/users/sunday-hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunday-hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunday-hao/subscriptions",
"organizations_url": "https://api.github.com/users/sunday-hao/orgs",
"repos_url": "https://api.github.com/users/sunday-hao/repos",
"events_url": "https://api.github.com/users/sunday-hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunday-hao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8132/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/951
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/951/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/951/comments
|
https://api.github.com/repos/ollama/ollama/issues/951/events
|
https://github.com/ollama/ollama/pull/951
| 1,969,026,692
|
PR_kwDOJ0Z1Ps5eJ-vh
| 951
|
fly example
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-10-30T19:00:14
| 2024-05-09T02:50:29
| 2024-05-07T17:46:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/951",
"html_url": "https://github.com/ollama/ollama/pull/951",
"diff_url": "https://github.com/ollama/ollama/pull/951.diff",
"patch_url": "https://github.com/ollama/ollama/pull/951.patch",
"merged_at": "2024-05-07T17:46:25"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/951/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4527
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4527/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4527/comments
|
https://api.github.com/repos/ollama/ollama/issues/4527/events
|
https://github.com/ollama/ollama/pull/4527
| 2,304,809,154
|
PR_kwDOJ0Z1Ps5v5ZMx
| 4,527
|
docs: Add dnf install "hipblas rocm-*" to Linux Installation Guide
|
{
"login": "vorburger",
"id": 298598,
"node_id": "MDQ6VXNlcjI5ODU5OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/298598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vorburger",
"html_url": "https://github.com/vorburger",
"followers_url": "https://api.github.com/users/vorburger/followers",
"following_url": "https://api.github.com/users/vorburger/following{/other_user}",
"gists_url": "https://api.github.com/users/vorburger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vorburger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vorburger/subscriptions",
"organizations_url": "https://api.github.com/users/vorburger/orgs",
"repos_url": "https://api.github.com/users/vorburger/repos",
"events_url": "https://api.github.com/users/vorburger/events{/privacy}",
"received_events_url": "https://api.github.com/users/vorburger/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-05-19T22:30:50
| 2024-05-19T22:34:25
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4527",
"html_url": "https://github.com/ollama/ollama/pull/4527",
"diff_url": "https://github.com/ollama/ollama/pull/4527.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4527.patch",
"merged_at": null
}
|
See https://github.com/vorburger/vorburger.ch-Notes/blob/develop/ml/ollama1.md for context.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4527/reactions",
"total_count": 1,
"+1": 0,
"-1": 1,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4527/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8678
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8678/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8678/comments
|
https://api.github.com/repos/ollama/ollama/issues/8678/events
|
https://github.com/ollama/ollama/issues/8678
| 2,819,618,742
|
I_kwDOJ0Z1Ps6oD_e2
| 8,678
|
Missing support for name field
|
{
"login": "gagb",
"id": 13227607,
"node_id": "MDQ6VXNlcjEzMjI3NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/13227607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gagb",
"html_url": "https://github.com/gagb",
"followers_url": "https://api.github.com/users/gagb/followers",
"following_url": "https://api.github.com/users/gagb/following{/other_user}",
"gists_url": "https://api.github.com/users/gagb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gagb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gagb/subscriptions",
"organizations_url": "https://api.github.com/users/gagb/orgs",
"repos_url": "https://api.github.com/users/gagb/repos",
"events_url": "https://api.github.com/users/gagb/events{/privacy}",
"received_events_url": "https://api.github.com/users/gagb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6657611864,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjNMYWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/compatibility",
"name": "compatibility",
"color": "bfdadc",
"default": false,
"description": ""
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2025-01-30T00:20:26
| 2025-01-30T08:40:28
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
For many models (phi-4, deepseek-r1, etc.), Ollama supports the OpenAI chat completion format, but it does not appear to support the `name` field in the message history; it only supports the `role` and `content` fields. Is there a plan to fix this?
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8678/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6652
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6652/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6652/comments
|
https://api.github.com/repos/ollama/ollama/issues/6652/events
|
https://github.com/ollama/ollama/issues/6652
| 2,507,274,398
|
I_kwDOJ0Z1Ps6Vcfie
| 6,652
|
Add Dracarys-Llama-3.1-70B-Instruct support
|
{
"login": "LSeu-Open",
"id": 95351758,
"node_id": "U_kgDOBa7zzg",
"avatar_url": "https://avatars.githubusercontent.com/u/95351758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LSeu-Open",
"html_url": "https://github.com/LSeu-Open",
"followers_url": "https://api.github.com/users/LSeu-Open/followers",
"following_url": "https://api.github.com/users/LSeu-Open/following{/other_user}",
"gists_url": "https://api.github.com/users/LSeu-Open/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LSeu-Open/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LSeu-Open/subscriptions",
"organizations_url": "https://api.github.com/users/LSeu-Open/orgs",
"repos_url": "https://api.github.com/users/LSeu-Open/repos",
"events_url": "https://api.github.com/users/LSeu-Open/events{/privacy}",
"received_events_url": "https://api.github.com/users/LSeu-Open/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 2
| 2024-09-05T09:32:08
| 2024-09-10T07:19:46
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
thanks for the awesome work on Ollama.
It would be nice to add support for the [Dracarys-Llama-3.1-70B-Instruct](https://huggingface.co/abacusai/Dracarys-Llama-3.1-70B-Instruct) model from [abacus.ai](https://abacus.ai/).
This is a coding fine-tune of Llama-3.1-70B-Instruct that achieves a high score on LiveCodeBench.
Thanks in advance.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6652/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6126
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6126/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6126/comments
|
https://api.github.com/repos/ollama/ollama/issues/6126/events
|
https://github.com/ollama/ollama/pull/6126
| 2,443,512,645
|
PR_kwDOJ0Z1Ps53LaLp
| 6,126
|
llm: add Q4_0_4_4, Q4_0_4_8, Q4_0_8_8 quants
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2024-08-01T21:33:43
| 2024-11-22T01:42:33
| 2024-11-22T01:42:33
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6126",
"html_url": "https://github.com/ollama/ollama/pull/6126",
"diff_url": "https://github.com/ollama/ollama/pull/6126.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6126.patch",
"merged_at": null
}
|
referencing https://github.com/ggerganov/llama.cpp/pull/5780
potentially fixes https://github.com/ollama/ollama/issues/6125
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6126/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6126/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4461
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4461/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4461/comments
|
https://api.github.com/repos/ollama/ollama/issues/4461/events
|
https://github.com/ollama/ollama/pull/4461
| 2,298,958,850
|
PR_kwDOJ0Z1Ps5vlttS
| 4,461
|
fix the cpu estimatedTotal memory + get the expiry time for loading models
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-15T22:29:49
| 2024-05-15T22:43:17
| 2024-05-15T22:43:16
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4461",
"html_url": "https://github.com/ollama/ollama/pull/4461",
"diff_url": "https://github.com/ollama/ollama/pull/4461.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4461.patch",
"merged_at": "2024-05-15T22:43:16"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4461/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8488
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8488/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8488/comments
|
https://api.github.com/repos/ollama/ollama/issues/8488/events
|
https://github.com/ollama/ollama/issues/8488
| 2,797,755,067
|
I_kwDOJ0Z1Ps6mwlq7
| 8,488
|
Log all Ollama calls
|
{
"login": "ergosumdre",
"id": 35677602,
"node_id": "MDQ6VXNlcjM1Njc3NjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/35677602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ergosumdre",
"html_url": "https://github.com/ergosumdre",
"followers_url": "https://api.github.com/users/ergosumdre/followers",
"following_url": "https://api.github.com/users/ergosumdre/following{/other_user}",
"gists_url": "https://api.github.com/users/ergosumdre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ergosumdre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ergosumdre/subscriptions",
"organizations_url": "https://api.github.com/users/ergosumdre/orgs",
"repos_url": "https://api.github.com/users/ergosumdre/repos",
"events_url": "https://api.github.com/users/ergosumdre/events{/privacy}",
"received_events_url": "https://api.github.com/users/ergosumdre/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2025-01-19T18:04:23
| 2025-01-28T21:19:26
| 2025-01-28T21:19:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Could you consider adding a log file feature that records detailed information about each call? This could include the model name, input, output, and all other parameters used. It would be incredibly useful for tracking and debugging purposes.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8488/timeline
| null |
duplicate
| false
|
https://api.github.com/repos/ollama/ollama/issues/8052
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8052/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8052/comments
|
https://api.github.com/repos/ollama/ollama/issues/8052/events
|
https://github.com/ollama/ollama/pull/8052
| 2,733,695,366
|
PR_kwDOJ0Z1Ps6E5glc
| 8,052
|
ci: fix artifact path prefix for missing windows payloads
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-11T18:32:09
| 2024-12-11T18:59:35
| 2024-12-11T18:59:32
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8052",
"html_url": "https://github.com/ollama/ollama/pull/8052",
"diff_url": "https://github.com/ollama/ollama/pull/8052.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8052.patch",
"merged_at": "2024-12-11T18:59:32"
}
|
upload-artifact strips the leading common path from uploaded files, so when the ./build/ artifacts were removed, the ./dist/windows-amd64 prefix became common and was stripped, causing the later download-artifact step to place the files in the wrong location.
Example intermediate artifact from before the build changes:
```
% unzip generate-windows-cuda-11.3.zip
Archive: generate-windows-cuda-11.3.zip
inflating: build/darwin/amd64/placeholder
inflating: build/darwin/arm64/placeholder
inflating: build/embed_darwin_amd64.go
inflating: build/embed_darwin_arm64.go
inflating: build/embed_linux.go
inflating: build/embed_unused.go
inflating: build/linux/amd64/placeholder
inflating: build/linux/arm64/placeholder
inflating: dist/windows-amd64/lib/ollama/cublas64_11.dll
inflating: dist/windows-amd64/lib/ollama/cudart32_110.dll
inflating: dist/windows-amd64/lib/ollama/cublasLt64_11.dll
inflating: dist/windows-amd64/lib/ollama/cudart64_110.dll
inflating: dist/windows-amd64/lib/ollama/ggml_cuda_v11.dll
inflating: dist/windows-amd64/lib/ollama/runners/cuda_v11/ollama_llama_server.exe
```
Example artifact now:
```
% unzip generate-windows-cuda-11.3.zip
Archive: generate-windows-cuda-11.3.zip
inflating: lib/ollama/cublas64_11.dll
inflating: lib/ollama/cublasLt64_11.dll
inflating: lib/ollama/cudart64_110.dll
inflating: lib/ollama/cudart32_110.dll
inflating: lib/ollama/runners/cuda_v11_avx/ggml_cuda_v11.dll
inflating: lib/ollama/runners/cuda_v11_avx/ollama_llama_server.exe
```
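The effect can be illustrated with a small sketch (hypothetical, not the actual upload-artifact code — the real action's prefix handling differs in detail, but the principle is the same: the longest common directory prefix disappears from the archive):

```python
import os.path

def strip_common_prefix(paths):
    # Illustrative only: drop the longest common directory prefix,
    # mimicking how the artifact paths lose their shared leading dirs.
    prefix = os.path.commonpath(paths)
    return [os.path.relpath(p, prefix) if prefix else p for p in paths]

# Before: the build/ placeholders leave no common prefix, so dist/ paths survive intact.
before = ["build/linux/amd64/placeholder",
          "dist/windows-amd64/lib/ollama/cudart64_110.dll"]
print(strip_common_prefix(before))  # paths unchanged

# After: only dist/windows-amd64 payloads remain, so that shared prefix is stripped.
after = ["dist/windows-amd64/lib/ollama/cublas64_11.dll",
         "dist/windows-amd64/lib/ollama/runners/cuda_v11_avx/ollama_llama_server.exe"]
print(strip_common_prefix(after))
```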
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8052/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7692
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7692/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7692/comments
|
https://api.github.com/repos/ollama/ollama/issues/7692/events
|
https://github.com/ollama/ollama/issues/7692
| 2,662,997,014
|
I_kwDOJ0Z1Ps6euhwW
| 7,692
|
Getting no compatible GPUs were discovered yet I have gpu
|
{
"login": "mosquet",
"id": 136934740,
"node_id": "U_kgDOCCl1VA",
"avatar_url": "https://avatars.githubusercontent.com/u/136934740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mosquet",
"html_url": "https://github.com/mosquet",
"followers_url": "https://api.github.com/users/mosquet/followers",
"following_url": "https://api.github.com/users/mosquet/following{/other_user}",
"gists_url": "https://api.github.com/users/mosquet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mosquet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mosquet/subscriptions",
"organizations_url": "https://api.github.com/users/mosquet/orgs",
"repos_url": "https://api.github.com/users/mosquet/repos",
"events_url": "https://api.github.com/users/mosquet/events{/privacy}",
"received_events_url": "https://api.github.com/users/mosquet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-15T20:00:40
| 2024-11-19T00:31:57
| 2024-11-19T00:31:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When my PC goes to sleep, sometimes the GPU connection is lost.
```
2024/11/15 19:56:13 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2024-11-15T19:56:13.362448649Z time=2024-11-15T19:56:13.362Z level=INFO source=images.go:755 msg="total blobs: 37"
2024-11-15T19:56:13.364066191Z time=2024-11-15T19:56:13.363Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"
2024-11-15T19:56:13.365638182Z time=2024-11-15T19:56:13.365Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.1)"
2024-11-15T19:56:13.368269602Z time=2024-11-15T19:56:13.367Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 cpu]"
2024-11-15T19:56:13.368604044Z time=2024-11-15T19:56:13.368Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
2024-11-15T19:56:13.383488354Z time=2024-11-15T19:56:13.383Z level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
2024-11-15T19:56:13.383536438Z time=2024-11-15T19:56:13.383Z level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.1 GiB" available="24.6 GiB"
```
```
Fri Nov 15 19:59:28 2024
2024-11-15T19:59:28.584372444Z +-----------------------------------------------------------------------------------------+
2024-11-15T19:59:28.584388296Z | NVIDIA-SMI 560.35.03              Driver Version: 560.35.03      CUDA Version: 12.6     |
2024-11-15T19:59:28.584399840Z |-----------------------------------------+------------------------+----------------------+
2024-11-15T19:59:28.584410958Z | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
2024-11-15T19:59:28.584422578Z | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
2024-11-15T19:59:28.584437552Z |                                         |                        |               MIG M. |
2024-11-15T19:59:28.584447617Z |=========================================+========================+======================|
2024-11-15T19:59:28.674749982Z |   0  NVIDIA GeForce RTX 3090        Off |   00000000:01:00.0 Off |                  N/A |
2024-11-15T19:59:28.674803184Z |  0%   41C    P5             30W /  350W |     958MiB /  24576MiB |     42%      Default |
2024-11-15T19:59:28.674861798Z |                                         |                        |                  N/A |
2024-11-15T19:59:28.674868490Z +-----------------------------------------+------------------------+----------------------+
2024-11-15T19:59:28.675044806Z
2024-11-15T19:59:28.675082544Z +-----------------------------------------------------------------------------------------+
2024-11-15T19:59:28.675090168Z | Processes:                                                                              |
2024-11-15T19:59:28.675098139Z |  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
2024-11-15T19:59:28.675105563Z |        ID   ID                                                               Usage      |
2024-11-15T19:59:28.675112134Z |=========================================================================================|
2024-11-15T19:59:28.677696019Z |  No running processes found                                                             |
2024-11-15T19:59:28.677746593Z +-----------------------------------------------------------------------------------------+
```
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.1
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7692/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5643
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5643/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5643/comments
|
https://api.github.com/repos/ollama/ollama/issues/5643/events
|
https://github.com/ollama/ollama/issues/5643
| 2,404,475,621
|
I_kwDOJ0Z1Ps6PUWLl
| 5,643
|
[Windows 10] Error: llama runner process has terminated: exit status 0xc0000139
|
{
"login": "hljhyb",
"id": 42955249,
"node_id": "MDQ6VXNlcjQyOTU1MjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/42955249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hljhyb",
"html_url": "https://github.com/hljhyb",
"followers_url": "https://api.github.com/users/hljhyb/followers",
"following_url": "https://api.github.com/users/hljhyb/following{/other_user}",
"gists_url": "https://api.github.com/users/hljhyb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hljhyb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hljhyb/subscriptions",
"organizations_url": "https://api.github.com/users/hljhyb/orgs",
"repos_url": "https://api.github.com/users/hljhyb/repos",
"events_url": "https://api.github.com/users/hljhyb/events{/privacy}",
"received_events_url": "https://api.github.com/users/hljhyb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-12T01:29:06
| 2024-07-12T02:10:07
| 2024-07-12T01:42:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Previously, all models ran very well, but after a recent upgrade errors started occurring. I have enabled debug mode. What could be the issue?
[server.log](https://github.com/user-attachments/files/16186027/server.log)
[app.log](https://github.com/user-attachments/files/16186033/app.log)
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.1
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5643/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6507
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6507/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6507/comments
|
https://api.github.com/repos/ollama/ollama/issues/6507/events
|
https://github.com/ollama/ollama/issues/6507
| 2,485,638,327
|
I_kwDOJ0Z1Ps6UJ9S3
| 6,507
|
Create Blob API returned nothing
|
{
"login": "cool-firer",
"id": 23241972,
"node_id": "MDQ6VXNlcjIzMjQxOTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/23241972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cool-firer",
"html_url": "https://github.com/cool-firer",
"followers_url": "https://api.github.com/users/cool-firer/followers",
"following_url": "https://api.github.com/users/cool-firer/following{/other_user}",
"gists_url": "https://api.github.com/users/cool-firer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cool-firer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cool-firer/subscriptions",
"organizations_url": "https://api.github.com/users/cool-firer/orgs",
"repos_url": "https://api.github.com/users/cool-firer/repos",
"events_url": "https://api.github.com/users/cool-firer/events{/privacy}",
"received_events_url": "https://api.github.com/users/cool-firer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-08-26T01:31:37
| 2024-09-12T01:34:41
| 2024-09-12T01:34:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
From the API doc:
> Create a Blob
> `POST /api/blobs/:digest`
> Create a blob from a file on the server. Returns the server file path.
I tried it and got a 201 status code, but nothing else was returned. Is this a bug?
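If the empty body is intended behavior (the digest in the URL path already identifies the blob), then the client only needs to compute the digest itself before uploading. A hedged sketch, assuming a local server on the default port; the helper name is mine:

```python
import hashlib

def blob_digest(data: bytes) -> str:
    # Ollama addresses blobs by content hash in the URL path: "sha256:<hex>"
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Upload (requires a running server; shown for illustration only):
#   requests.post(f"http://localhost:11434/api/blobs/{blob_digest(data)}", data=data)
# A 201 response with an empty body would then mean the blob is stored
# under that digest and can be referenced by it when creating a model.
print(blob_digest(b"abc"))
```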
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
ollama version is 0.3.6
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6507/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7987
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7987/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7987/comments
|
https://api.github.com/repos/ollama/ollama/issues/7987/events
|
https://github.com/ollama/ollama/issues/7987
| 2,724,662,914
|
I_kwDOJ0Z1Ps6iZw6C
| 7,987
|
better diagnosis / error messages when ctx is too small
|
{
"login": "fce2",
"id": 16529960,
"node_id": "MDQ6VXNlcjE2NTI5OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/16529960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fce2",
"html_url": "https://github.com/fce2",
"followers_url": "https://api.github.com/users/fce2/followers",
"following_url": "https://api.github.com/users/fce2/following{/other_user}",
"gists_url": "https://api.github.com/users/fce2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fce2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fce2/subscriptions",
"organizations_url": "https://api.github.com/users/fce2/orgs",
"repos_url": "https://api.github.com/users/fce2/repos",
"events_url": "https://api.github.com/users/fce2/events{/privacy}",
"received_events_url": "https://api.github.com/users/fce2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-07T15:01:07
| 2024-12-07T17:43:47
| 2024-12-07T17:43:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be great to have some errors or warnings.
I ran into a problem where I had too many tools registered:
suddenly the "get-time" function was no longer being called.
I assume I ran out of context (after I deleted some tools, it worked again).
So my feature request is a warning when an LLM's context is not big enough.
|
{
"login": "fce2",
"id": 16529960,
"node_id": "MDQ6VXNlcjE2NTI5OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/16529960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fce2",
"html_url": "https://github.com/fce2",
"followers_url": "https://api.github.com/users/fce2/followers",
"following_url": "https://api.github.com/users/fce2/following{/other_user}",
"gists_url": "https://api.github.com/users/fce2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fce2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fce2/subscriptions",
"organizations_url": "https://api.github.com/users/fce2/orgs",
"repos_url": "https://api.github.com/users/fce2/repos",
"events_url": "https://api.github.com/users/fce2/events{/privacy}",
"received_events_url": "https://api.github.com/users/fce2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7987/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7987/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6821
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6821/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6821/comments
|
https://api.github.com/repos/ollama/ollama/issues/6821/events
|
https://github.com/ollama/ollama/issues/6821
| 2,527,452,573
|
I_kwDOJ0Z1Ps6Wpd2d
| 6,821
|
Loading a local gguf file with a Modelfile produces garbled output
|
{
"login": "czhcc",
"id": 4754730,
"node_id": "MDQ6VXNlcjQ3NTQ3MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4754730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czhcc",
"html_url": "https://github.com/czhcc",
"followers_url": "https://api.github.com/users/czhcc/followers",
"following_url": "https://api.github.com/users/czhcc/following{/other_user}",
"gists_url": "https://api.github.com/users/czhcc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/czhcc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czhcc/subscriptions",
"organizations_url": "https://api.github.com/users/czhcc/orgs",
"repos_url": "https://api.github.com/users/czhcc/repos",
"events_url": "https://api.github.com/users/czhcc/events{/privacy}",
"received_events_url": "https://api.github.com/users/czhcc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-09-16T04:02:43
| 2024-12-12T10:03:01
| 2024-09-16T04:11:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I fine-tuned an HF model on top of qwen2-1.5b using llama-factory.
I then converted it to gguf with llama.cpp; after loading it into Ollama, it produces garbled output.
However, converting the original qwen2-1.5b to gguf and loading it works fine.
The fine-tuned HF model works fine when loaded and tested with vLLM.
The gguf converted from the fine-tuned HF model also works fine with llama-server.
Only Ollama produces garbled output when loading this gguf.
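Garbled output from a fine-tuned gguf in Ollama is often a chat-template mismatch: Ollama applies the Modelfile `TEMPLATE`, while llama-server and vLLM may pick up the chat template embedded in the model. A hedged Modelfile sketch for a ChatML-style Qwen2 fine-tune (the file name is a placeholder, and the template must be adjusted to match the format used during fine-tuning):

```
# Hypothetical Modelfile; adjust the path and template to your model
FROM ./qwen2-1.5b-finetuned.gguf
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
```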
### OS
Windows, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.10
|
{
"login": "czhcc",
"id": 4754730,
"node_id": "MDQ6VXNlcjQ3NTQ3MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4754730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czhcc",
"html_url": "https://github.com/czhcc",
"followers_url": "https://api.github.com/users/czhcc/followers",
"following_url": "https://api.github.com/users/czhcc/following{/other_user}",
"gists_url": "https://api.github.com/users/czhcc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/czhcc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czhcc/subscriptions",
"organizations_url": "https://api.github.com/users/czhcc/orgs",
"repos_url": "https://api.github.com/users/czhcc/repos",
"events_url": "https://api.github.com/users/czhcc/events{/privacy}",
"received_events_url": "https://api.github.com/users/czhcc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6821/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1613
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1613/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1613/comments
|
https://api.github.com/repos/ollama/ollama/issues/1613/events
|
https://github.com/ollama/ollama/issues/1613
| 2,049,295,671
|
I_kwDOJ0Z1Ps56JcU3
| 1,613
|
Test_Routes Version Handler assertion fails
|
{
"login": "alinanorakari",
"id": 4239297,
"node_id": "MDQ6VXNlcjQyMzkyOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4239297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alinanorakari",
"html_url": "https://github.com/alinanorakari",
"followers_url": "https://api.github.com/users/alinanorakari/followers",
"following_url": "https://api.github.com/users/alinanorakari/following{/other_user}",
"gists_url": "https://api.github.com/users/alinanorakari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alinanorakari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alinanorakari/subscriptions",
"organizations_url": "https://api.github.com/users/alinanorakari/orgs",
"repos_url": "https://api.github.com/users/alinanorakari/repos",
"events_url": "https://api.github.com/users/alinanorakari/events{/privacy}",
"received_events_url": "https://api.github.com/users/alinanorakari/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2023-12-19T19:19:26
| 2023-12-19T23:48:54
| 2023-12-19T23:48:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
this assertion fails when I try to build the project:
https://github.com/jmorganca/ollama/blame/1ca484f67e6f607114496211004942013e5595eb/server/routes_test.go#L74
Error:
```
[GIN] 2023/12/19 - 19:11:41 | 200 | 56.659µs | 127.0.0.1 | GET "/api/version"
```
[...]
```
--- FAIL: Test_Routes (0.00s)
routes_test.go:185: Running Test: [Version Handler]
routes_test.go:74:
Error Trace: /build/source/server/routes_test.go:74
/build/source/server/routes_test.go:199
Error: Not equal:
expected: "{\"version\":\"0.0.0\"}"
actual : "{\"version\":\"0.1.17\"}"
Diff:
--- Expected
+++ Actual
@@ -1 +1 @@
-{"version":"0.0.0"}
+{"version":"0.1.17"}
Test: Test_Routes
```
Either I don't fully understand how the assertion is supposed to work and I am doing something wrong, or the version string `"0.0.0"` in this assertion should be `"0.1.17"`.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1613/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4834
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4834/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4834/comments
|
https://api.github.com/repos/ollama/ollama/issues/4834/events
|
https://github.com/ollama/ollama/issues/4834
| 2,335,964,918
|
I_kwDOJ0Z1Ps6LO_72
| 4,834
|
Cannot pull models when http_proxy/HTTP_PROXY are set.
|
{
"login": "janukarhisa",
"id": 89907865,
"node_id": "MDQ6VXNlcjg5OTA3ODY1",
"avatar_url": "https://avatars.githubusercontent.com/u/89907865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/janukarhisa",
"html_url": "https://github.com/janukarhisa",
"followers_url": "https://api.github.com/users/janukarhisa/followers",
"following_url": "https://api.github.com/users/janukarhisa/following{/other_user}",
"gists_url": "https://api.github.com/users/janukarhisa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/janukarhisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janukarhisa/subscriptions",
"organizations_url": "https://api.github.com/users/janukarhisa/orgs",
"repos_url": "https://api.github.com/users/janukarhisa/repos",
"events_url": "https://api.github.com/users/janukarhisa/events{/privacy}",
"received_events_url": "https://api.github.com/users/janukarhisa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-05T13:45:08
| 2024-08-23T21:26:46
| 2024-08-23T21:26:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Our server is located behind a proxy. The environment variables for both the host and Docker daemon are set with `http_proxy`, `https_proxy`, `HTTP_PROXY`, and `HTTPS_PROXY` to apply proxy settings to all containers.
For testing purposes, I created a container using the following command:
```bash
docker run -d -v {host_path}:/root/.ollama ollama/ollama:latest
```
When I go inside the container and try to pull the Mistral model, I get the following error:
```
Error: something went wrong, please see the ollama server logs for details
```
However, if I unset the `http_proxy` and `HTTP_PROXY` environment variables, pulling works without any problem. What am I missing here?
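Since only unsetting the *HTTP* (not HTTPS) variables changes the behaviour, a mismatch between the lowercase and uppercase variants is worth ruling out. A minimal, hypothetical Python helper to print the proxy configuration a process would actually see (the lowercase-first precedence here is an assumption — different tools resolve the pair differently, and this is not Ollama's code):

```python
import os

def effective_proxies() -> dict:
    """Report proxy-related environment variables, preferring the lowercase
    names when both forms are set (a common, but not universal, convention)."""
    result = {}
    for name in ("http_proxy", "https_proxy", "no_proxy"):
        lower = os.environ.get(name)
        upper = os.environ.get(name.upper())
        result[name] = lower if lower is not None else upper
    return result

if __name__ == "__main__":
    for key, value in effective_proxies().items():
        print(f"{key} = {value!r}")
```

Running this inside the container (e.g. via `docker exec`) shows whether the pull is going through the proxy you expect for both HTTP and HTTPS.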
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
0.1.41
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4834/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4834/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/148
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/148/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/148/comments
|
https://api.github.com/repos/ollama/ollama/issues/148/events
|
https://github.com/ollama/ollama/pull/148
| 1,814,813,144
|
PR_kwDOJ0Z1Ps5WC94b
| 148
|
add llama.cpp mpi, opencl files
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-20T21:18:07
| 2023-07-20T21:26:50
| 2023-07-20T21:26:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/148",
"html_url": "https://github.com/ollama/ollama/pull/148",
"diff_url": "https://github.com/ollama/ollama/pull/148.diff",
"patch_url": "https://github.com/ollama/ollama/pull/148.patch",
"merged_at": "2023-07-20T21:26:46"
}
|
The full source from llama.cpp is now included, with additional build constraints on components that are not readily compatible with the current build.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/148/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2408
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2408/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2408/comments
|
https://api.github.com/repos/ollama/ollama/issues/2408/events
|
https://github.com/ollama/ollama/issues/2408
| 2,124,873,593
|
I_kwDOJ0Z1Ps5-pv95
| 2,408
|
Add binary support for Nvidia Jetson Orin - JetPack 6
|
{
"login": "MrDelusionAI",
"id": 36128506,
"node_id": "MDQ6VXNlcjM2MTI4NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/36128506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MrDelusionAI",
"html_url": "https://github.com/MrDelusionAI",
"followers_url": "https://api.github.com/users/MrDelusionAI/followers",
"following_url": "https://api.github.com/users/MrDelusionAI/following{/other_user}",
"gists_url": "https://api.github.com/users/MrDelusionAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MrDelusionAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MrDelusionAI/subscriptions",
"organizations_url": "https://api.github.com/users/MrDelusionAI/orgs",
"repos_url": "https://api.github.com/users/MrDelusionAI/repos",
"events_url": "https://api.github.com/users/MrDelusionAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/MrDelusionAI/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 46
| 2024-02-08T10:52:07
| 2024-11-12T18:31:54
| 2024-11-12T18:31:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I believe Ollama is a great project. I have tried different approaches to get Ollama to utilise the GPU, but it still uses the CPU.
I have flashed JetPack 6 DP onto the AGX Orin Dev Kit; I believe this JetPack version will make it easier for Ollama to use the GPU, if you are able to add support for it.
```shell
nvcc --version
```
```shell
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:08:11_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0
```
```shell
nvidia-smi
```
```shell
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 540.2.0 Driver Version: N/A CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Orin (nvgpu) N/A | N/A N/A | N/A |
| N/A N/A N/A N/A / N/A | Not Supported | N/A N/A |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
Thank you
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2408/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2408/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6315
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6315/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6315/comments
|
https://api.github.com/repos/ollama/ollama/issues/6315/events
|
https://github.com/ollama/ollama/issues/6315
| 2,459,777,427
|
I_kwDOJ0Z1Ps6SnTmT
| 6,315
|
Sharing computing power in a decentralized P2P network
|
{
"login": "trymeouteh",
"id": 31172274,
"node_id": "MDQ6VXNlcjMxMTcyMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trymeouteh",
"html_url": "https://github.com/trymeouteh",
"followers_url": "https://api.github.com/users/trymeouteh/followers",
"following_url": "https://api.github.com/users/trymeouteh/following{/other_user}",
"gists_url": "https://api.github.com/users/trymeouteh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trymeouteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trymeouteh/subscriptions",
"organizations_url": "https://api.github.com/users/trymeouteh/orgs",
"repos_url": "https://api.github.com/users/trymeouteh/repos",
"events_url": "https://api.github.com/users/trymeouteh/events{/privacy}",
"received_events_url": "https://api.github.com/users/trymeouteh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-08-11T21:53:09
| 2024-12-02T20:15:44
| 2024-12-02T20:15:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
To have a feature built into Ollama for joining an Ollama P2P network. When you join this network, you can use computing power from others in the network and also share your own.
Privacy should be kept in mind: use E2EE, let users choose which models on their system are part of the P2P network, provide settings to enable or disable use of the P2P network, and, if the [users](https://github.com/ollama/ollama/issues/2863) feature becomes a thing, give users a setting for whether to take part in or use this P2P network when working with models.
Twinny Symmetry Project
https://twinny.dev/symmetry
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6315/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6315/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8179
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8179/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8179/comments
|
https://api.github.com/repos/ollama/ollama/issues/8179/events
|
https://github.com/ollama/ollama/issues/8179
| 2,750,864,585
|
I_kwDOJ0Z1Ps6j9tzJ
| 8,179
|
LLAMA 3:70B is crashing inside K8s pods
|
{
"login": "IrfDev",
"id": 53235311,
"node_id": "MDQ6VXNlcjUzMjM1MzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/53235311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IrfDev",
"html_url": "https://github.com/IrfDev",
"followers_url": "https://api.github.com/users/IrfDev/followers",
"following_url": "https://api.github.com/users/IrfDev/following{/other_user}",
"gists_url": "https://api.github.com/users/IrfDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IrfDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IrfDev/subscriptions",
"organizations_url": "https://api.github.com/users/IrfDev/orgs",
"repos_url": "https://api.github.com/users/IrfDev/repos",
"events_url": "https://api.github.com/users/IrfDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/IrfDev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-19T16:28:53
| 2025-01-13T01:42:13
| 2025-01-13T01:42:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've successfully set up an Ollama server in a K8s deployment with the official NVIDIA drivers and CUDA toolkit.
I downloaded the llama3:70b model, but when I try to run it, it keeps crashing with no errors.
OS: Linux (Ubuntu 22.4.0)
GPU: NVIDIA RTX 4060 (CUDA0 model buffer size = 5508.75 MiB)
CPU: AMD5 (CPU model buffer size = 32601.86 MiB)
Ollama version: 0.3.6
The logs are the following:
```llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 4060) - 7839 MiB free
llama_model_loader: loaded meta data with 22 key-value pairs and 723 tensors from /data/models/blobs/sha256-xxx (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 80
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 8192
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 6: llama.attention.head_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-12-19T16:21:24.511Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 37.22 GiB (4.53 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-70B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 12 repeating layers to GPU
llm_load_tensors: offloaded 12/81 layers to GPU
llm_load_tensors: CPU model buffer size = 32601.86 MiB
llm_load_tensors: CUDA0 model buffer size = 5508.75 MiB
time=2024-12-19T16:21:52.845Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server not responding"```
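The log shows only 12 of 81 layers offloaded to the GPU; the remaining ~32 GiB of weights sit in system RAM, which can make a 70B model slow enough to load that the server appears unresponsive. A rough, hypothetical back-of-envelope check using the numbers in the log (the equal-layer-size and overhead assumptions are simplifications, not Ollama's actual scheduler logic):

```python
def offloadable_layers(model_gib: float, n_layers: int, free_vram_gib: float,
                       overhead_gib: float = 1.5) -> int:
    """Estimate how many transformer layers fit in free VRAM.

    Simplifying assumptions: layers are equally sized, and overhead_gib
    of VRAM is reserved for KV cache and compute buffers (a guess).
    """
    per_layer = model_gib / n_layers
    usable = max(free_vram_gib - overhead_gib, 0.0)
    return min(n_layers, int(usable / per_layer))

# Numbers from the log above: 37.22 GiB model, 80 repeating layers,
# 7839 MiB (~7.65 GiB) free on the RTX 4060.
```

Plugging in the logged values lands close to the 12 layers Ollama chose, which suggests the partial offload is expected behaviour for this card rather than a bug, and the crash is more likely a load-timeout or memory-pressure symptom.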
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8179/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7813
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7813/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7813/comments
|
https://api.github.com/repos/ollama/ollama/issues/7813/events
|
https://github.com/ollama/ollama/issues/7813
| 2,687,416,666
|
I_kwDOJ0Z1Ps6gLrla
| 7,813
|
Not utilizing GPU
|
{
"login": "F-U-B-AR",
"id": 126811594,
"node_id": "U_kgDOB479yg",
"avatar_url": "https://avatars.githubusercontent.com/u/126811594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/F-U-B-AR",
"html_url": "https://github.com/F-U-B-AR",
"followers_url": "https://api.github.com/users/F-U-B-AR/followers",
"following_url": "https://api.github.com/users/F-U-B-AR/following{/other_user}",
"gists_url": "https://api.github.com/users/F-U-B-AR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/F-U-B-AR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/F-U-B-AR/subscriptions",
"organizations_url": "https://api.github.com/users/F-U-B-AR/orgs",
"repos_url": "https://api.github.com/users/F-U-B-AR/repos",
"events_url": "https://api.github.com/users/F-U-B-AR/events{/privacy}",
"received_events_url": "https://api.github.com/users/F-U-B-AR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-11-24T09:22:58
| 2024-12-14T15:33:25
| 2024-12-14T15:33:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
OS: Debian 12
GPU: Nvidia RTX 3060
Hello, I've been trying to solve this for months, but I think it's time to get some help!
Essentially, on Debian, Ollama only uses the CPU and does not seem to discover my GPU.
I have installed the latest CUDA toolkit using:
https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Debian
Here is the log of when I start Ollama:
`2024/11/24 09:11:28 routes.go:1197: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/null/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-24T09:11:28.169Z level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-11-24T09:11:28.169Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-11-24T09:11:28.169Z level=INFO source=routes.go:1248 msg="Listening on 127.0.0.1:11434 (version 0.4.4)"
time=2024-11-24T09:11:28.169Z level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama1083297226/runners
time=2024-11-24T09:11:28.252Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 rocm cpu cpu_avx cpu_avx2]"
time=2024-11-24T09:11:28.252Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-24T09:11:29.433Z level=WARN source=gpu.go:613 msg="unknown error initializing cuda driver library /usr/lib/x86_64-linux-gnu/nvidia/current/libcuda.so.535.183.01: cuda driver library init failure: 999. see https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for more information"
time=2024-11-24T09:11:29.477Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-11-24T09:11:29.477Z level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=0 total="512.0 MiB"
time=2024-11-24T09:11:29.477Z level=INFO source=amd_linux.go:399 msg="no compatible amdgpu devices detected"
time=2024-11-24T09:11:29.477Z level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-11-24T09:11:29.477Z level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="15.0 GiB" available="12.3 GiB"
`
Any help would be appreciated!
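The key line in the log is the `cuda driver library init failure: 999` warning (999 is `CUDA_ERROR_UNKNOWN`, which often points at a driver/kernel-module mismatch, e.g. after a kernel update without a reboot). A small, hypothetical helper to pull the GPU-discovery outcome out of a server log like the one above (the matched phrases come from this log excerpt; this is a diagnostic sketch, not part of Ollama):

```python
def gpu_discovery_status(log_text: str) -> str:
    """Return a one-line summary of GPU discovery from an Ollama server log."""
    for line in log_text.splitlines():
        if "error initializing cuda driver library" in line:
            # Surface the driver init failure, which precedes the CPU fallback.
            return "cuda driver init failed: " + line.split("msg=")[-1]
        if "no compatible GPUs were discovered" in line:
            return "no compatible GPUs discovered (falling back to CPU)"
        if "inference compute" in line and "library=cpu" in line:
            return "running on CPU"
    return "no GPU discovery messages found"
```

For this particular failure, Ollama's troubleshooting guide suggests verifying `nvidia-smi` works and, on Linux, reloading the `nvidia_uvm` kernel module (or rebooting) may resolve init errors like this.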
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7813/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8376
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8376/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8376/comments
|
https://api.github.com/repos/ollama/ollama/issues/8376/events
|
https://github.com/ollama/ollama/issues/8376
| 2,780,442,504
|
I_kwDOJ0Z1Ps6lui-I
| 8,376
|
Ollama version doesn't properly truncate tokens to 512 max for official snowflake-arctic-embed-l model
|
{
"login": "shuaiscott",
"id": 5650363,
"node_id": "MDQ6VXNlcjU2NTAzNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5650363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuaiscott",
"html_url": "https://github.com/shuaiscott",
"followers_url": "https://api.github.com/users/shuaiscott/followers",
"following_url": "https://api.github.com/users/shuaiscott/following{/other_user}",
"gists_url": "https://api.github.com/users/shuaiscott/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuaiscott/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuaiscott/subscriptions",
"organizations_url": "https://api.github.com/users/shuaiscott/orgs",
"repos_url": "https://api.github.com/users/shuaiscott/repos",
"events_url": "https://api.github.com/users/shuaiscott/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuaiscott/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2025-01-10T15:29:00
| 2025-01-10T22:20:12
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using the official Ollama model snowflake-arctic-embed-l (latest/335m - 21ab8b9b0545), if the input is longer than 512 tokens the model encounters an error instead of truncating.
On a previous version (0.3.9), passing it more than 512 tokens returned only [0,0,0...] embeddings.
In 0.5.4, Ollama returns a 500 error and the logs show "Process xxxxxx (ollama_llama_se) of user xxx dumped core".
Logs:
```
llama_model_load: vocab only - skipping tensors
ggml-cpu.c:8400: GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
ggml-cpu.c:8400: GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
SIGSEGV: segmentation violation
PC=0x7fcc733ecc57 m=5 sigcode=1 addr=0x207203fe0
signal arrived during cgo execution
goroutine 8 gp=0xc0000f21c0 m=5 mp=0xc000100008 [syscall]:
runtime.cgocall(0x562b649d47d0, 0xc000073b90)
runtime/cgocall.go:167
github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7fcbf115bfa0, {0x2, 0x7fcbf0b80590, 0x0, 0x0, 0x7fcbf0b80da0, 0x7fcbf0b815b, 0x7fcbf0b81dc0, 0x7fcbf1144dc0})
...
```
I've checked my Ollama parameters and this occurs when "truncate": true. Other embedding models properly truncate the input, and I see an INFO log in Ollama saying "input truncated"; I don't see this message with snowflake-arctic-embed-l.
When "truncate" is set to false, I get the expected "input length exceeds maximum context length".
https://ollama.com/library/snowflake-arctic-embed
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8376/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/761
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/761/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/761/comments
|
https://api.github.com/repos/ollama/ollama/issues/761/events
|
https://github.com/ollama/ollama/issues/761
| 1,938,736,936
|
I_kwDOJ0Z1Ps5zjsco
| 761
|
Processing inference in parallel
|
{
"login": "SabareeshGC",
"id": 114115146,
"node_id": "U_kgDOBs1CSg",
"avatar_url": "https://avatars.githubusercontent.com/u/114115146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SabareeshGC",
"html_url": "https://github.com/SabareeshGC",
"followers_url": "https://api.github.com/users/SabareeshGC/followers",
"following_url": "https://api.github.com/users/SabareeshGC/following{/other_user}",
"gists_url": "https://api.github.com/users/SabareeshGC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SabareeshGC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SabareeshGC/subscriptions",
"organizations_url": "https://api.github.com/users/SabareeshGC/orgs",
"repos_url": "https://api.github.com/users/SabareeshGC/repos",
"events_url": "https://api.github.com/users/SabareeshGC/events{/privacy}",
"received_events_url": "https://api.github.com/users/SabareeshGC/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 9
| 2023-10-11T21:05:40
| 2024-06-07T08:44:54
| 2023-12-22T03:33:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was using the HTTP endpoint, but it appears to be limited to processing one request at a time. Is it possible to process multiple inference requests at the same time?
ref https://github.com/ggerganov/llama.cpp/pull/3228
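Once the server can batch requests (the llama.cpp PR linked above), the client-side pattern is simply to fan requests out concurrently. A sketch, where `send` is a placeholder for an actual HTTP POST to the generate endpoint, not Ollama's API:

```python
from concurrent.futures import ThreadPoolExecutor

def send(prompt: str) -> str:
    # Placeholder: in practice this would POST {"model": ..., "prompt": prompt}
    # to http://localhost:11434/api/generate and return the response text.
    return f"response to: {prompt}"

def generate_parallel(prompts, workers=4):
    # Issue all prompts concurrently; pool.map returns results in input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(send, prompts))

results = generate_parallel(["prompt a", "prompt b", "prompt c"])
```

Note that if the server itself serializes requests, client-side concurrency alone does not help; the throughput gain requires server-side parallel decoding.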
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/761/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/761/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3501
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3501/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3501/comments
|
https://api.github.com/repos/ollama/ollama/issues/3501/events
|
https://github.com/ollama/ollama/issues/3501
| 2,227,670,159
|
I_kwDOJ0Z1Ps6Ex4yP
| 3,501
|
Ollama push got `retrieving manifest Error: file does not exist`
|
{
"login": "pacozaa",
"id": 3154089,
"node_id": "MDQ6VXNlcjMxNTQwODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3154089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacozaa",
"html_url": "https://github.com/pacozaa",
"followers_url": "https://api.github.com/users/pacozaa/followers",
"following_url": "https://api.github.com/users/pacozaa/following{/other_user}",
"gists_url": "https://api.github.com/users/pacozaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacozaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacozaa/subscriptions",
"organizations_url": "https://api.github.com/users/pacozaa/orgs",
"repos_url": "https://api.github.com/users/pacozaa/repos",
"events_url": "https://api.github.com/users/pacozaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacozaa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 19
| 2024-04-05T10:35:37
| 2024-12-08T16:23:45
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I run `ollama push [myusername]/modelthatexist`, I get the error
```
retrieving manifest
Error: file does not exist
```
### What did you expect to see?
I expected the model to be pushed.
### Steps to reproduce
- Download this gguf https://huggingface.co/pacozaa/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF
- Create a Modelfile that links to the gguf
- Run a model
- Next, copy your model to your username's namespace: `ollama cp example <your username>/example`
- Push the model: `ollama push <your username>/example`
### Are there any recent changes that introduced the issue?
_No response_
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "pacozaa",
"id": 3154089,
"node_id": "MDQ6VXNlcjMxNTQwODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3154089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacozaa",
"html_url": "https://github.com/pacozaa",
"followers_url": "https://api.github.com/users/pacozaa/followers",
"following_url": "https://api.github.com/users/pacozaa/following{/other_user}",
"gists_url": "https://api.github.com/users/pacozaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacozaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacozaa/subscriptions",
"organizations_url": "https://api.github.com/users/pacozaa/orgs",
"repos_url": "https://api.github.com/users/pacozaa/repos",
"events_url": "https://api.github.com/users/pacozaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacozaa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3501/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3501/timeline
| null |
reopened
| false
|
https://api.github.com/repos/ollama/ollama/issues/8226
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8226/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8226/comments
|
https://api.github.com/repos/ollama/ollama/issues/8226/events
|
https://github.com/ollama/ollama/issues/8226
| 2,757,242,458
|
I_kwDOJ0Z1Ps6kWC5a
| 8,226
|
ollama.com model quantization levels are not displayed correctly
|
{
"login": "CberYellowstone",
"id": 37031767,
"node_id": "MDQ6VXNlcjM3MDMxNzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/37031767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CberYellowstone",
"html_url": "https://github.com/CberYellowstone",
"followers_url": "https://api.github.com/users/CberYellowstone/followers",
"following_url": "https://api.github.com/users/CberYellowstone/following{/other_user}",
"gists_url": "https://api.github.com/users/CberYellowstone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CberYellowstone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CberYellowstone/subscriptions",
"organizations_url": "https://api.github.com/users/CberYellowstone/orgs",
"repos_url": "https://api.github.com/users/CberYellowstone/repos",
"events_url": "https://api.github.com/users/CberYellowstone/events{/privacy}",
"received_events_url": "https://api.github.com/users/CberYellowstone/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-12-24T06:07:38
| 2025-01-06T05:53:37
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This issue is a continuation of #7816
The issue with incorrect local ollama quantization levels in #7816 has been resolved, but the same problem appears in the model cards of models uploaded to ollama.com.
example:
https://ollama.com/CBYellowstone/sakura-v1.0

it should be:

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8226/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/32
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/32/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/32/comments
|
https://api.github.com/repos/ollama/ollama/issues/32/events
|
https://github.com/ollama/ollama/pull/32
| 1,783,270,064
|
PR_kwDOJ0Z1Ps5UXs-n
| 32
|
just in time install llama-cpp-python
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-01T00:04:44
| 2023-07-06T16:30:48
| 2023-07-06T16:30:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/32",
"html_url": "https://github.com/ollama/ollama/pull/32",
"diff_url": "https://github.com/ollama/ollama/pull/32.diff",
"patch_url": "https://github.com/ollama/ollama/pull/32.patch",
"merged_at": null
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/32/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/32/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2930
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2930/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2930/comments
|
https://api.github.com/repos/ollama/ollama/issues/2930/events
|
https://github.com/ollama/ollama/issues/2930
| 2,168,676,620
|
I_kwDOJ0Z1Ps6BQ2EM
| 2,930
|
Ollama terminates abnormally
|
{
"login": "GeYingzhen01",
"id": 155865563,
"node_id": "U_kgDOCUpR2w",
"avatar_url": "https://avatars.githubusercontent.com/u/155865563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GeYingzhen01",
"html_url": "https://github.com/GeYingzhen01",
"followers_url": "https://api.github.com/users/GeYingzhen01/followers",
"following_url": "https://api.github.com/users/GeYingzhen01/following{/other_user}",
"gists_url": "https://api.github.com/users/GeYingzhen01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GeYingzhen01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeYingzhen01/subscriptions",
"organizations_url": "https://api.github.com/users/GeYingzhen01/orgs",
"repos_url": "https://api.github.com/users/GeYingzhen01/repos",
"events_url": "https://api.github.com/users/GeYingzhen01/events{/privacy}",
"received_events_url": "https://api.github.com/users/GeYingzhen01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 11
| 2024-03-05T09:08:28
| 2024-06-04T07:17:12
| 2024-06-04T06:59:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
After upgrading to version 0.1.27, there has been a noticeable improvement in performance. Although the generation speed is not very fast, the program runs without significant lag. However, Ollama terminates automatically during operation.


|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2930/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7968
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7968/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7968/comments
|
https://api.github.com/repos/ollama/ollama/issues/7968/events
|
https://github.com/ollama/ollama/issues/7968
| 2,723,321,400
|
I_kwDOJ0Z1Ps6iUpY4
| 7,968
|
PaliGemma 2
|
{
"login": "joaquinito2070",
"id": 118765355,
"node_id": "U_kgDOBxQ3Kw",
"avatar_url": "https://avatars.githubusercontent.com/u/118765355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joaquinito2070",
"html_url": "https://github.com/joaquinito2070",
"followers_url": "https://api.github.com/users/joaquinito2070/followers",
"following_url": "https://api.github.com/users/joaquinito2070/following{/other_user}",
"gists_url": "https://api.github.com/users/joaquinito2070/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joaquinito2070/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joaquinito2070/subscriptions",
"organizations_url": "https://api.github.com/users/joaquinito2070/orgs",
"repos_url": "https://api.github.com/users/joaquinito2070/repos",
"events_url": "https://api.github.com/users/joaquinito2070/events{/privacy}",
"received_events_url": "https://api.github.com/users/joaquinito2070/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-06T15:30:50
| 2025-01-29T15:29:13
| 2024-12-06T21:18:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/blog/paligemma2
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7968/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7968/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6092
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6092/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6092/comments
|
https://api.github.com/repos/ollama/ollama/issues/6092/events
|
https://github.com/ollama/ollama/issues/6092
| 2,439,347,391
|
I_kwDOJ0Z1Ps6RZXy_
| 6,092
|
Error: timed out waiting for llama runner to start - progress 1.00 -
|
{
"login": "JasonJasonXU",
"id": 19587994,
"node_id": "MDQ6VXNlcjE5NTg3OTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/19587994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JasonJasonXU",
"html_url": "https://github.com/JasonJasonXU",
"followers_url": "https://api.github.com/users/JasonJasonXU/followers",
"following_url": "https://api.github.com/users/JasonJasonXU/following{/other_user}",
"gists_url": "https://api.github.com/users/JasonJasonXU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JasonJasonXU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JasonJasonXU/subscriptions",
"organizations_url": "https://api.github.com/users/JasonJasonXU/orgs",
"repos_url": "https://api.github.com/users/JasonJasonXU/repos",
"events_url": "https://api.github.com/users/JasonJasonXU/events{/privacy}",
"received_events_url": "https://api.github.com/users/JasonJasonXU/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-31T07:32:52
| 2024-09-01T00:06:09
| 2024-08-01T07:26:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Details:
[root@i-zdrHahqvL ~]# OLLAMA_DEBUG=1 ollama serve 2>&1 | tee server.log
2024/07/31 15:08:52 routes.go:1099: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-31T15:08:52.553+08:00 level=INFO source=images.go:784 msg="total blobs: 0"
time=2024-07-31T15:08:52.553+08:00 level=INFO source=images.go:791 msg="total unused blobs removed: 0"
time=2024-07-31T15:08:52.553+08:00 level=INFO source=routes.go:1146 msg="Listening on 127.0.0.1:11434 (version 0.3.0)"
time=2024-07-31T15:08:52.553+08:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1636953247/runners
time=2024-07-31T15:08:52.553+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
time=2024-07-31T15:08:52.553+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz
time=2024-07-31T15:08:52.553+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz
time=2024-07-31T15:08:52.553+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublas.so.11.gz
time=2024-07-31T15:08:52.553+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublasLt.so.11.gz
time=2024-07-31T15:08:52.553+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcudart.so.11.0.gz
time=2024-07-31T15:08:52.553+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz
time=2024-07-31T15:08:52.553+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=rocm_v60102 file=build/linux/x86_64/rocm_v60102/bin/deps.txt.gz
time=2024-07-31T15:08:52.553+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=rocm_v60102 file=build/linux/x86_64/rocm_v60102/bin/ollama_llama_server.gz
time=2024-07-31T15:08:57.310+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cpu/ollama_llama_server
time=2024-07-31T15:08:57.310+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cpu_avx/ollama_llama_server
time=2024-07-31T15:08:57.310+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cpu_avx2/ollama_llama_server
time=2024-07-31T15:08:57.310+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cuda_v11/ollama_llama_server
time=2024-07-31T15:08:57.310+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/rocm_v60102/ollama_llama_server
time=2024-07-31T15:08:57.310+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 rocm_v60102 cpu cpu_avx]"
time=2024-07-31T15:08:57.310+08:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-07-31T15:08:57.310+08:00 level=DEBUG source=sched.go:102 msg="starting llm scheduler"
time=2024-07-31T15:08:57.310+08:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-31T15:08:57.310+08:00 level=DEBUG source=gpu.go:91 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-07-31T15:08:57.310+08:00 level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so*
time=2024-07-31T15:08:57.310+08:00 level=DEBUG source=gpu.go:487 msg="gpu library search" globs="[/usr/local/cuda/lib64/libcuda.so** /root/libcuda.so** /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-07-31T15:08:57.311+08:00 level=DEBUG source=gpu.go:521 msg="discovered GPU libraries" paths="[/usr/lib/libcuda.so.470.256.02 /usr/lib64/libcuda.so.470.256.02]"
library /usr/lib/libcuda.so.470.256.02 load err: /usr/lib/libcuda.so.470.256.02: wrong ELF class: ELFCLASS32
time=2024-07-31T15:08:57.311+08:00 level=DEBUG source=gpu.go:562 msg="skipping 32bit library" library=/usr/lib/libcuda.so.470.256.02
CUDA driver version: 11.4
time=2024-07-31T15:08:57.317+08:00 level=DEBUG source=gpu.go:124 msg="detected GPUs" count=1 library=/usr/lib64/libcuda.so.470.256.02
[GPU-66762d9a-49a1-11ef-a56b-0f40675dcf0a] CUDA totalMem 8020 mb
[GPU-66762d9a-49a1-11ef-a56b-0f40675dcf0a] CUDA freeMem 7634 mb
[GPU-66762d9a-49a1-11ef-a56b-0f40675dcf0a] Compute Capability 8.6
time=2024-07-31T15:08:57.450+08:00 level=DEBUG source=amd_linux.go:356 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-07-31T15:08:57.450+08:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-66762d9a-49a1-11ef-a56b-0f40675dcf0a library=cuda compute=8.6 driver=11.4 name="NVIDIA A40-8Q" total="7.8 GiB" available="7.5 GiB"
[GIN] 2024/07/31 - 15:10:15 | 200 | 80.04µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/31 - 15:10:15 | 200 | 240.079µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/31 - 15:10:22 | 200 | 32.088µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/31 - 15:10:22 | 200 | 97.172µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/07/31 - 15:10:33 | 200 | 30.201µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/31 - 15:10:33 | 404 | 291.581µs | 127.0.0.1 | POST "/api/show"
time=2024-07-31T15:10:36.318+08:00 level=INFO source=download.go:136 msg="downloading 43f7a214e532 in 45 100 MB part(s)"
time=2024-07-31T15:10:46.974+08:00 level=INFO source=images.go:1053 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/43/43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240731%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240731T071235Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=519cf733df9fd6b10bf26ec24a94cddfe36246417e19142e6dd552b25c636520\": net/http: TLS handshake timeout"
time=2024-07-31T15:10:46.974+08:00 level=INFO source=download.go:178 msg="43f7a214e532 part 33 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/43/43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240731%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240731T071235Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=519cf733df9fd6b10bf26ec24a94cddfe36246417e19142e6dd552b25c636520\": net/http: TLS handshake timeout, retrying in 1s"
time=2024-07-31T15:13:56.654+08:00 level=INFO source=download.go:178 msg="43f7a214e532 part 9 attempt 0 failed: unexpected EOF, retrying in 1s"
time=2024-07-31T15:14:30.692+08:00 level=INFO source=download.go:178 msg="43f7a214e532 part 25 attempt 0 failed: unexpected EOF, retrying in 1s"
time=2024-07-31T15:20:57.544+08:00 level=INFO source=download.go:136 msg="downloading 62fbfd9ed093 in 1 182 B part(s)"
time=2024-07-31T15:20:59.558+08:00 level=INFO source=download.go:136 msg="downloading c156170b718e in 1 11 KB part(s)"
time=2024-07-31T15:21:01.575+08:00 level=INFO source=download.go:136 msg="downloading f02dd72bb242 in 1 59 B part(s)"
time=2024-07-31T15:21:03.577+08:00 level=INFO source=download.go:136 msg="downloading 648f809ced2b in 1 485 B part(s)"
[GIN] 2024/07/31 - 15:21:10 | 200 | 10m36s | 127.0.0.1 | POST "/api/pull"
[GIN] 2024/07/31 - 15:21:10 | 200 | 19.308683ms | 127.0.0.1 | POST "/api/show"
time=2024-07-31T15:21:10.115+08:00 level=DEBUG source=gpu.go:358 msg="updating system memory data" before.total="63.2 GiB" before.free="59.4 GiB" before.free_swap="0 B" now.total="63.2 GiB" now.free="59.4 GiB" now.free_swap="0 B"
CUDA driver version: 11.4
time=2024-07-31T15:21:10.240+08:00 level=DEBUG source=gpu.go:406 msg="updating cuda memory data" gpu=GPU-66762d9a-49a1-11ef-a56b-0f40675dcf0a name="NVIDIA A40-8Q" overhead="0 B" before.total="7.8 GiB" before.free="7.5 GiB" now.total="7.8 GiB" now.free="7.5 GiB" now.used="385.8 MiB"
releasing cuda driver library
time=2024-07-31T15:21:10.240+08:00 level=DEBUG source=sched.go:177 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2024-07-31T15:21:10.258+08:00 level=DEBUG source=sched.go:214 msg="loading first model" model=/root/.ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5
time=2024-07-31T15:21:10.258+08:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[7.5 GiB]"
time=2024-07-31T15:21:10.259+08:00 level=INFO source=sched.go:701 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5 gpu=GPU-66762d9a-49a1-11ef-a56b-0f40675dcf0a parallel=4 available=8005419008 required="5.3 GiB"
time=2024-07-31T15:21:10.259+08:00 level=DEBUG source=server.go:100 msg="system memory" total="63.2 GiB" free="59.4 GiB" free_swap="0 B"
time=2024-07-31T15:21:10.259+08:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[7.5 GiB]"
time=2024-07-31T15:21:10.259+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[7.5 GiB]" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[5.3 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.4 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="478.0 MiB" memory.graph.partial="730.4 MiB"
time=2024-07-31T15:21:10.259+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cpu/ollama_llama_server
time=2024-07-31T15:21:10.259+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cpu_avx/ollama_llama_server
time=2024-07-31T15:21:10.259+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cpu_avx2/ollama_llama_server
time=2024-07-31T15:21:10.259+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cuda_v11/ollama_llama_server
time=2024-07-31T15:21:10.259+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/rocm_v60102/ollama_llama_server
time=2024-07-31T15:21:10.260+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cpu/ollama_llama_server
time=2024-07-31T15:21:10.260+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cpu_avx/ollama_llama_server
time=2024-07-31T15:21:10.260+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cpu_avx2/ollama_llama_server
time=2024-07-31T15:21:10.260+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/cuda_v11/ollama_llama_server
time=2024-07-31T15:21:10.260+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1636953247/runners/rocm_v60102/ollama_llama_server
time=2024-07-31T15:21:10.260+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama1636953247/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 29 --verbose --parallel 4 --port 36523"
time=2024-07-31T15:21:10.260+08:00 level=DEBUG source=server.go:400 msg=subprocess environment="[LD_LIBRARY_PATH=/tmp/ollama1636953247/runners/cuda_v11:/tmp/ollama1636953247/runners:/usr/local/cuda/lib64: PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin CUDA_VISIBLE_DEVICES=GPU-66762d9a-49a1-11ef-a56b-0f40675dcf0a]"
time=2024-07-31T15:21:10.260+08:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-31T15:21:10.260+08:00 level=INFO source=server.go:583 msg="waiting for llama runner to start responding"
time=2024-07-31T15:21:10.261+08:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="d94c6e0" tid="139961436442624" timestamp=1722410470
INFO [main] system info | n_threads=16 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="139961436442624" timestamp=1722410470 total_threads=16
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="36523" tid="139961436442624" timestamp=1722410470
llama_model_loader: loaded meta data with 21 key-value pairs and 339 tensors from /root/.ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-7B-Instruct
llama_model_loader: - kv 2: qwen2.block_count u32 = 28
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 8: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 12: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
time=2024-07-31T15:21:10.512+08:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 17: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 19: tokenizer.chat_template str = {% for message in messages %}{% if lo...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_0: 197 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens cache size = 421
llm_load_vocab: token to piece cache size = 0.9352 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 3584
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 28
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 7
llm_load_print_meta: n_embd_k_gqa = 512
llm_load_print_meta: n_embd_v_gqa = 512
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18944
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 4.12 GiB (4.65 BPW)
llm_load_print_meta: general.name = Qwen2-7B-Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A40-8Q, compute capability 8.6, VMM: no
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: CPU buffer size = 292.36 MiB
llm_load_tensors: CUDA0 buffer size = 3928.07 MiB
time=2024-07-31T15:22:00.443+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.07"
time=2024-07-31T15:22:00.695+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.17"
time=2024-07-31T15:22:01.948+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.18"
time=2024-07-31T15:22:02.199+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.20"
time=2024-07-31T15:22:03.454+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.22"
time=2024-07-31T15:22:03.704+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.23"
time=2024-07-31T15:22:04.959+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.25"
time=2024-07-31T15:22:05.210+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.26"
time=2024-07-31T15:22:06.464+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.28"
time=2024-07-31T15:22:06.715+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.29"
time=2024-07-31T15:22:07.718+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.30"
time=2024-07-31T15:22:07.968+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.32"
time=2024-07-31T15:22:09.223+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.33"
time=2024-07-31T15:22:09.474+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.35"
time=2024-07-31T15:22:10.728+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.36"
time=2024-07-31T15:22:10.979+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.38"
time=2024-07-31T15:22:12.234+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.40"
time=2024-07-31T15:22:12.484+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.41"
time=2024-07-31T15:22:13.738+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.43"
time=2024-07-31T15:22:13.989+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.44"
time=2024-07-31T15:22:15.243+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.46"
time=2024-07-31T15:22:15.494+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.47"
time=2024-07-31T15:22:16.498+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.48"
time=2024-07-31T15:22:16.749+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.50"
time=2024-07-31T15:22:18.004+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.51"
time=2024-07-31T15:22:18.255+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.53"
time=2024-07-31T15:22:19.509+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.54"
time=2024-07-31T15:22:19.761+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.56"
time=2024-07-31T15:22:21.014+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.58"
time=2024-07-31T15:22:21.265+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.59"
time=2024-07-31T15:22:22.520+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.61"
time=2024-07-31T15:22:23.022+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.62"
time=2024-07-31T15:22:24.026+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.64"
time=2024-07-31T15:22:24.527+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.65"
time=2024-07-31T15:22:25.280+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.66"
time=2024-07-31T15:22:25.531+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.67"
time=2024-07-31T15:22:25.782+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.68"
time=2024-07-31T15:22:26.785+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.69"
time=2024-07-31T15:22:27.036+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.70"
time=2024-07-31T15:22:27.287+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.71"
time=2024-07-31T15:22:28.290+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.72"
time=2024-07-31T15:22:28.541+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.73"
time=2024-07-31T15:22:29.043+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.74"
time=2024-07-31T15:22:29.796+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.75"
time=2024-07-31T15:22:30.047+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.76"
time=2024-07-31T15:22:30.549+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.77"
time=2024-07-31T15:22:31.302+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.78"
time=2024-07-31T15:22:31.552+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.79"
time=2024-07-31T15:22:32.055+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.80"
time=2024-07-31T15:22:32.807+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.81"
time=2024-07-31T15:22:33.058+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.82"
time=2024-07-31T15:22:33.560+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.83"
time=2024-07-31T15:22:34.313+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.85"
time=2024-07-31T15:22:35.066+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.86"
time=2024-07-31T15:22:35.820+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.88"
time=2024-07-31T15:22:36.573+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.89"
time=2024-07-31T15:22:37.326+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.91"
time=2024-07-31T15:22:38.580+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.93"
time=2024-07-31T15:22:38.831+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.94"
time=2024-07-31T15:22:40.086+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.96"
time=2024-07-31T15:22:40.337+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.97"
time=2024-07-31T15:22:41.591+08:00 level=DEBUG source=server.go:628 msg="model load progress 0.99"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
time=2024-07-31T15:22:41.842+08:00 level=DEBUG source=server.go:628 msg="model load progress 1.00"
llama_kv_cache_init: CUDA0 KV buffer size = 448.00 MiB
llama_new_context_with_model: KV self size = 448.00 MiB, K (f16): 224.00 MiB, V (f16): 224.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.38 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 492.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 23.01 MiB
llama_new_context_with_model: graph nodes = 986
llama_new_context_with_model: graph splits = 2
time=2024-07-31T15:22:42.093+08:00 level=DEBUG source=server.go:631 msg="model load completed, waiting for server to become available" status="llm server loading model"
time=2024-07-31T15:27:42.142+08:00 level=ERROR source=sched.go:443 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 1.00 - "
time=2024-07-31T15:27:42.142+08:00 level=DEBUG source=sched.go:446 msg="triggering expiration for failed load" model=/root/.ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5
time=2024-07-31T15:27:42.142+08:00 level=DEBUG source=sched.go:347 msg="runner expired event received" modelPath=/root/.ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5
time=2024-07-31T15:27:42.142+08:00 level=DEBUG source=sched.go:363 msg="got lock to unload" modelPath=/root/.ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5
[GIN] 2024/07/31 - 15:27:42 | 500 | 6m32s | 127.0.0.1 | POST "/api/chat"
time=2024-07-31T15:27:42.142+08:00 level=DEBUG source=gpu.go:358 msg="updating system memory data" before.total="63.2 GiB" before.free="59.4 GiB" before.free_swap="0 B" now.total="63.2 GiB" now.free="58.4 GiB" now.free_swap="0 B"
CUDA driver version: 11.4
time=2024-07-31T15:27:42.267+08:00 level=DEBUG source=gpu.go:406 msg="updating cuda memory data" gpu=GPU-66762d9a-49a1-11ef-a56b-0f40675dcf0a name="NVIDIA A40-8Q" overhead="0 B" before.total="7.8 GiB" before.free="7.5 GiB" now.total="7.8 GiB" now.free="2.0 GiB" now.used="5.8 GiB"
releasing cuda driver library
time=2024-07-31T15:27:42.267+08:00 level=DEBUG source=server.go:1039 msg="stopping llama server"
time=2024-07-31T15:27:42.267+08:00 level=DEBUG source=server.go:1045 msg="waiting for llama server to exit"
time=2024-07-31T15:27:42.353+08:00 level=DEBUG source=server.go:1049 msg="llama server stopped"
time=2024-07-31T15:27:42.353+08:00 level=DEBUG source=sched.go:368 msg="runner released" modelPath=/root/.ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5
time=2024-07-31T15:27:42.518+08:00 level=DEBUG source=gpu.go:358 msg="updating system memory data" before.total="63.2 GiB" before.free="58.4 GiB" before.free_swap="0 B" now.total="63.2 GiB" now.free="59.4 GiB" now.free_swap="0 B"
CUDA driver version: 11.4
time=2024-07-31T15:27:42.644+08:00 level=DEBUG source=gpu.go:406 msg="updating cuda memory data" gpu=GPU-66762d9a-49a1-11ef-a56b-0f40675dcf0a name="NVIDIA A40-8Q" overhead="0 B" before.total="7.8 GiB" before.free="2.0 GiB" now.total="7.8 GiB" now.free="7.5 GiB" now.used="385.8 MiB"
releasing cuda driver library
time=2024-07-31T15:27:42.644+08:00 level=DEBUG source=sched.go:647 msg="gpu VRAM free memory converged after 0.50 seconds" model=/root/.ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5
time=2024-07-31T15:27:42.644+08:00 level=DEBUG source=sched.go:372 msg="sending an unloaded event" modelPath=/root/.ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5
time=2024-07-31T15:27:42.644+08:00 level=DEBUG source=sched.go:295 msg="ignoring unload event with no pending requests"
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
ollama version is 0.3.0
|
{
"login": "JasonJasonXU",
"id": 19587994,
"node_id": "MDQ6VXNlcjE5NTg3OTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/19587994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JasonJasonXU",
"html_url": "https://github.com/JasonJasonXU",
"followers_url": "https://api.github.com/users/JasonJasonXU/followers",
"following_url": "https://api.github.com/users/JasonJasonXU/following{/other_user}",
"gists_url": "https://api.github.com/users/JasonJasonXU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JasonJasonXU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JasonJasonXU/subscriptions",
"organizations_url": "https://api.github.com/users/JasonJasonXU/orgs",
"repos_url": "https://api.github.com/users/JasonJasonXU/repos",
"events_url": "https://api.github.com/users/JasonJasonXU/events{/privacy}",
"received_events_url": "https://api.github.com/users/JasonJasonXU/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6092/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5468
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5468/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5468/comments
|
https://api.github.com/repos/ollama/ollama/issues/5468/events
|
https://github.com/ollama/ollama/pull/5468
| 2,389,413,850
|
PR_kwDOJ0Z1Ps50Xnif
| 5,468
|
Bubble up model load error messages
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-03T20:28:04
| 2024-07-04T16:08:00
| 2024-07-04T16:08:00
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5468",
"html_url": "https://github.com/ollama/ollama/pull/5468",
"diff_url": "https://github.com/ollama/ollama/pull/5468.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5468.patch",
"merged_at": null
}
|
Looking over a pool of issues reported by users over the past few weeks, I see a pattern of generic Windows exit codes which, more often than not, were the result of a model load failure. This adds a prefix string check to detect that failure so we can report it up instead of just the generic process exit code.
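The technique described here can be sketched as a small helper that scans captured runner-subprocess output for a known failure prefix. This is a hedged illustration: `loadErrPrefix` and the sample output below are assumptions for demonstration, not the exact strings the Ollama runner emits.

```go
package main

import (
	"fmt"
	"strings"
)

// loadErrPrefix is an illustrative marker; the real prefix emitted by
// the llama runner subprocess may differ.
const loadErrPrefix = "error loading model"

// extractLoadError scans captured subprocess output line by line and
// returns the first model-load failure message it finds, so callers can
// surface it instead of a generic process exit code.
func extractLoadError(output string) (string, bool) {
	for _, line := range strings.Split(output, "\n") {
		if idx := strings.Index(line, loadErrPrefix); idx >= 0 {
			return strings.TrimSpace(line[idx:]), true
		}
	}
	return "", false
}

func main() {
	out := "ggml backend init\nerror loading model: unable to allocate CUDA buffer\nexit status 2\n"
	if msg, ok := extractLoadError(out); ok {
		// Report the specific failure rather than "exit status 2".
		fmt.Println(msg)
	}
}
```

The key design point is that the scan happens over output captured before the process exits, so the error survives even when the subprocess dies with an uninformative status code.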
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5468/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2003
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2003/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2003/comments
|
https://api.github.com/repos/ollama/ollama/issues/2003/events
|
https://github.com/ollama/ollama/issues/2003
| 2,082,093,114
|
I_kwDOJ0Z1Ps58Gjg6
| 2,003
|
Find model by hash
|
{
"login": "luckydonald",
"id": 2737108,
"node_id": "MDQ6VXNlcjI3MzcxMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2737108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luckydonald",
"html_url": "https://github.com/luckydonald",
"followers_url": "https://api.github.com/users/luckydonald/followers",
"following_url": "https://api.github.com/users/luckydonald/following{/other_user}",
"gists_url": "https://api.github.com/users/luckydonald/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luckydonald/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luckydonald/subscriptions",
"organizations_url": "https://api.github.com/users/luckydonald/orgs",
"repos_url": "https://api.github.com/users/luckydonald/repos",
"events_url": "https://api.github.com/users/luckydonald/events{/privacy}",
"received_events_url": "https://api.github.com/users/luckydonald/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-01-15T13:59:23
| 2024-09-20T00:39:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have downloaded a model which can no longer be found.
I know the hash is `5af5443c09e4`, how can I find that again?
It's `dolphin2.1-mistral:latest`, but I don't know what `latest` actually means, it could be many versions over time.
Can I recover other tags on that build?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2003/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4084
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4084/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4084/comments
|
https://api.github.com/repos/ollama/ollama/issues/4084/events
|
https://github.com/ollama/ollama/pull/4084
| 2,273,672,912
|
PR_kwDOJ0Z1Ps5uQpQj
| 4,084
|
Add instructions to easily install specific versions on faq.md
|
{
"login": "Napuh",
"id": 55241721,
"node_id": "MDQ6VXNlcjU1MjQxNzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/55241721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Napuh",
"html_url": "https://github.com/Napuh",
"followers_url": "https://api.github.com/users/Napuh/followers",
"following_url": "https://api.github.com/users/Napuh/following{/other_user}",
"gists_url": "https://api.github.com/users/Napuh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Napuh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Napuh/subscriptions",
"organizations_url": "https://api.github.com/users/Napuh/orgs",
"repos_url": "https://api.github.com/users/Napuh/repos",
"events_url": "https://api.github.com/users/Napuh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Napuh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-05-01T15:09:37
| 2024-06-09T17:49:04
| 2024-06-09T17:49:04
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4084",
"html_url": "https://github.com/ollama/ollama/pull/4084",
"diff_url": "https://github.com/ollama/ollama/pull/4084.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4084.patch",
"merged_at": "2024-06-09T17:49:04"
}
|
This PR adds a new question to faq.md on how to install specific versions of ollama using the `VER_PARAM` env variable.
This might be useful when trying to install release candidate versions or when switching between versions easily; I could not find any info on how to do that in the current docs.
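For illustration, pinning the install script to a release might look like the sketch below. `VER_PARAM` is the variable name mentioned in this PR body; the exact name the installer reads may differ, so treat this as an assumption rather than the documented interface.

```shell
# Pin the installer to a specific release (variable name taken from the PR body).
VER_PARAM="v0.1.32"
echo "Would install ollama ${VER_PARAM}"
# Network step, commented out in this sketch:
# curl -fsSL https://ollama.com/install.sh | VER_PARAM="${VER_PARAM}" sh
```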
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4084/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2259
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2259/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2259/comments
|
https://api.github.com/repos/ollama/ollama/issues/2259/events
|
https://github.com/ollama/ollama/issues/2259
| 2,106,174,645
|
I_kwDOJ0Z1Ps59iay1
| 2,259
|
Add moondream1 vision model
|
{
"login": "thesanju",
"id": 130161177,
"node_id": "U_kgDOB8IaGQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130161177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thesanju",
"html_url": "https://github.com/thesanju",
"followers_url": "https://api.github.com/users/thesanju/followers",
"following_url": "https://api.github.com/users/thesanju/following{/other_user}",
"gists_url": "https://api.github.com/users/thesanju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thesanju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thesanju/subscriptions",
"organizations_url": "https://api.github.com/users/thesanju/orgs",
"repos_url": "https://api.github.com/users/thesanju/repos",
"events_url": "https://api.github.com/users/thesanju/events{/privacy}",
"received_events_url": "https://api.github.com/users/thesanju/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 9
| 2024-01-29T18:35:31
| 2024-03-12T01:28:42
| 2024-03-12T01:28:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "thesanju",
"id": 130161177,
"node_id": "U_kgDOB8IaGQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130161177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thesanju",
"html_url": "https://github.com/thesanju",
"followers_url": "https://api.github.com/users/thesanju/followers",
"following_url": "https://api.github.com/users/thesanju/following{/other_user}",
"gists_url": "https://api.github.com/users/thesanju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thesanju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thesanju/subscriptions",
"organizations_url": "https://api.github.com/users/thesanju/orgs",
"repos_url": "https://api.github.com/users/thesanju/repos",
"events_url": "https://api.github.com/users/thesanju/events{/privacy}",
"received_events_url": "https://api.github.com/users/thesanju/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2259/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1076
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1076/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1076/comments
|
https://api.github.com/repos/ollama/ollama/issues/1076/events
|
https://github.com/ollama/ollama/issues/1076
| 1,988,299,141
|
I_kwDOJ0Z1Ps52gwmF
| 1,076
|
Electron & root privileges
|
{
"login": "remixer-dec",
"id": 6587642,
"node_id": "MDQ6VXNlcjY1ODc2NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6587642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remixer-dec",
"html_url": "https://github.com/remixer-dec",
"followers_url": "https://api.github.com/users/remixer-dec/followers",
"following_url": "https://api.github.com/users/remixer-dec/following{/other_user}",
"gists_url": "https://api.github.com/users/remixer-dec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remixer-dec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remixer-dec/subscriptions",
"organizations_url": "https://api.github.com/users/remixer-dec/orgs",
"repos_url": "https://api.github.com/users/remixer-dec/repos",
"events_url": "https://api.github.com/users/remixer-dec/events{/privacy}",
"received_events_url": "https://api.github.com/users/remixer-dec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-11-10T19:56:03
| 2024-04-20T09:40:52
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This project has 84% of its codebase written in Go, but for some reason it uses Electron, which is very heavy (500MB+) in both disk footprint and RAM usage.
Why not use [Wails](https://wails.io/)? It was designed for Go projects with web-based frontends.
Also, why does the app ask for root privileges right after installation? It says it is doing so to make ollama available from the CLI globally, but why is that necessary?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1076/timeline
| null | null | false
|