**Dataset schema** (one record per ollama/ollama GitHub issue; ranges are min-max):

- `url`: string, 51-54 chars
- `repository_url`: string, 1 distinct value
- `labels_url`: string, 65-68 chars
- `comments_url`: string, 60-63 chars
- `events_url`: string, 58-61 chars
- `html_url`: string, 39-44 chars
- `id`: int64, 1.78B-2.82B
- `node_id`: string, 18-19 chars
- `number`: int64, 1-8.69k
- `title`: string, 1-382 chars
- `user`: dict
- `labels`: list, 0-5 items
- `state`: string, 2 distinct values
- `locked`: bool, 1 distinct value
- `assignee`: dict
- `assignees`: list, 0-2 items
- `milestone`: null
- `comments`: int64, 0-323
- `created_at`: timestamp[s]
- `updated_at`: timestamp[s]
- `closed_at`: timestamp[s]
- `author_association`: string, 4 distinct values
- `sub_issues_summary`: dict
- `active_lock_reason`: null
- `draft`: bool, 2 distinct values
- `pull_request`: dict
- `body`: string, 2-118k chars, nullable
- `closed_by`: dict
- `reactions`: dict
- `timeline_url`: string, 60-63 chars
- `performed_via_github_app`: null
- `state_reason`: string, 4 distinct values
- `is_pull_request`: bool, 2 distinct values
https://api.github.com/repos/ollama/ollama/issues/1254
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1254/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1254/comments
|
https://api.github.com/repos/ollama/ollama/issues/1254/events
|
https://github.com/ollama/ollama/issues/1254
| 2,008,016,076
|
I_kwDOJ0Z1Ps53r-TM
| 1,254
|
"Model" not found, try pulling it first
|
{
"login": "rehberim360",
"id": 144798027,
"node_id": "U_kgDOCKFxSw",
"avatar_url": "https://avatars.githubusercontent.com/u/144798027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rehberim360",
"html_url": "https://github.com/rehberim360",
"followers_url": "https://api.github.com/users/rehberim360/followers",
"following_url": "https://api.github.com/users/rehberim360/following{/other_user}",
"gists_url": "https://api.github.com/users/rehberim360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rehberim360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rehberim360/subscriptions",
"organizations_url": "https://api.github.com/users/rehberim360/orgs",
"repos_url": "https://api.github.com/users/rehberim360/repos",
"events_url": "https://api.github.com/users/rehberim360/events{/privacy}",
"received_events_url": "https://api.github.com/users/rehberim360/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2023-11-23T11:17:39
| 2024-05-02T07:05:34
| 2024-01-03T17:39:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello everyone. I host Ollama on a Google Cloud VM. All firewall settings, etc., have been configured, and I am connecting remotely via the API.

I pulled my models while the Ollama service was running.

But no matter which model I pulled,
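When a remote server reports a model as missing even though it was pulled, it helps to confirm which models that server instance actually sees (for example, models pulled under a different user end up in a different `OLLAMA_MODELS` directory). A minimal check against the documented `/api/tags` endpoint, sketched in Python; the host URL is a placeholder:

```python
import json
from urllib.request import urlopen

def model_names(tags_body: str) -> list:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in json.loads(tags_body).get("models", [])]

def list_remote_models(host: str) -> list:
    """List the models visible to a remote Ollama server.

    Example: list_remote_models("http://VM_IP:11434")
    """
    with urlopen(host + "/api/tags") as resp:
        return model_names(resp.read().decode())
```

If the model you pull does not appear in this list, the server answering the API calls is not the instance that holds the pulled models.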

|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1254/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6062
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6062/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6062/comments
|
https://api.github.com/repos/ollama/ollama/issues/6062/events
|
https://github.com/ollama/ollama/pull/6062
| 2,436,420,429
|
PR_kwDOJ0Z1Ps52zPwX
| 6,062
|
server: OLLAMA in modelfile and manifests
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-29T21:34:00
| 2024-08-07T22:26:25
| 2024-08-07T22:26:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6062",
"html_url": "https://github.com/ollama/ollama/pull/6062",
"diff_url": "https://github.com/ollama/ollama/pull/6062.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6062.patch",
"merged_at": null
}
|
Adds a new optional `OLLAMA` parameter to the Modelfile that specifies the minimum version of Ollama required to run the model:
`ollama create newmodel`
```
FROM mymodel.gguf
OLLAMA 0.2.3
```
Basing another model on this one with the `FROM` command inherits the version requirement if one was specified; otherwise, you can set it yourself:
```
FROM newmodel
```
This will inherit the `OLLAMA 0.2.3` requirement.
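Enforcing such a minimum amounts to a dotted-version comparison at load time; a minimal sketch in Python (a hypothetical helper, not this PR's actual Go implementation):

```python
def meets_minimum(server_version: str, required: str) -> bool:
    """True if server_version satisfies the OLLAMA minimum, comparing
    dotted components numerically (so 0.10.0 > 0.2.3)."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(server_version) >= parse(required)
```

A server below the requirement would refuse to load the model rather than fail in some model-specific way.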
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6062/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8131
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8131/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8131/comments
|
https://api.github.com/repos/ollama/ollama/issues/8131/events
|
https://github.com/ollama/ollama/pull/8131
| 2,744,160,590
|
PR_kwDOJ0Z1Ps6Fc7_F
| 8,131
|
scripts: sign renamed macOS binary
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-17T07:41:17
| 2024-12-18T02:03:51
| 2024-12-18T02:03:49
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8131",
"html_url": "https://github.com/ollama/ollama/pull/8131",
"diff_url": "https://github.com/ollama/ollama/pull/8131.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8131.patch",
"merged_at": "2024-12-18T02:03:49"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8131/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7904
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7904/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7904/comments
|
https://api.github.com/repos/ollama/ollama/issues/7904/events
|
https://github.com/ollama/ollama/issues/7904
| 2,710,401,505
|
I_kwDOJ0Z1Ps6hjXHh
| 7,904
|
fatal error: index out of range
|
{
"login": "prubinst",
"id": 136655984,
"node_id": "U_kgDOCCU0cA",
"avatar_url": "https://avatars.githubusercontent.com/u/136655984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prubinst",
"html_url": "https://github.com/prubinst",
"followers_url": "https://api.github.com/users/prubinst/followers",
"following_url": "https://api.github.com/users/prubinst/following{/other_user}",
"gists_url": "https://api.github.com/users/prubinst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prubinst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prubinst/subscriptions",
"organizations_url": "https://api.github.com/users/prubinst/orgs",
"repos_url": "https://api.github.com/users/prubinst/repos",
"events_url": "https://api.github.com/users/prubinst/events{/privacy}",
"received_events_url": "https://api.github.com/users/prubinst/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-12-02T04:14:35
| 2024-12-02T04:14:35
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm running a dspy script that uses the model `llama3-instruct:latest`. The script starts properly, but after roughly 20 minutes I get an exception like this:
```
...
[GIN] 2024/12/02 - 01:05:51 | 404 | 476.079µs | 127.0.0.1 | POST "/api/show"
fatal error: index out of range
runtime stack:
runtime.throw({0x12ed114?, 0x4026e8c0?})
runtime/panic.go:1023 +0x5c fp=0x7f3e47ffea30 sp=0x7f3e47ffea00 pc=0x46f13c
runtime.panicCheck1(0xc000742c80?, {0x12ed114, 0x12})
runtime/panic.go:58 +0x94 fp=0x7f3e47ffea50 sp=0x7f3e47ffea30 pc=0x46d094
runtime.goPanicIndex(0x832334, 0x60c7)
runtime/panic.go:113 +0x2e fp=0x7f3e47ffea90 sp=0x7f3e47ffea50 pc=0x46d14e
runtime.findfunc(0x7f3e47ffeb60?)
runtime/symtab.go:791 +0x119 fp=0x7f3e47ffeab0 sp=0x7f3e47ffea90 pc=0x48ebb9
runtime.(*unwinder).next(0x7f3e47ffeb60)
runtime/traceback.go:449 +0x4c fp=0x7f3e47ffeb28 sp=0x7f3e47ffeab0 pc=0x495d8c
runtime.copystack(0xc000547880, 0x800000002?)
runtime/stack.go:930 +0x2f4 fp=0x7f3e47ffec20 sp=0x7f3e47ffeb28 pc=0x48a174
runtime.newstack()
runtime/stack.go:1112 +0x489 fp=0x7f3e47ffedd0 sp=0x7f3e47ffec20 pc=0x48a749
runtime.morestack()
runtime/asm_amd64.s:616 +0x7a fp=0x7f3e47ffedd8 sp=0x7f3e47ffedd0 pc=0x4a30ba
goroutine 10797 gp=0xc000547880 m=8 mp=0xc000100808 [copystack]:
github.com/ollama/ollama/server.(*Server).GenerateHandler(0xc00071ea20, 0xc00061c100)
github.com/ollama/ollama/server/routes.go:111 +0x22aa fp=0xc0004ed660 sp=0xc0004ed658 pc=0xf5500a
github.com/ollama/ollama/server.(*Server).GenerateHandler-fm(0x0?)
<autogenerated>:1 +0x26 fp=0xc0004ed680 sp=0xc0004ed660 pc=0xf745a6
github.com/gin-gonic/gin.(*Context).Next(0xc00061c100)
github.com/gin-gonic/gin@v1.10.0/context.go:185 +0x2b fp=0xc0004ed6a0 sp=0xc0004ed680 pc=0xf1736b
github.com/ollama/ollama/server.(*Server).GenerateRoutes.allowedHostsMiddleware.func3(0xc00061c100)
github.com/ollama/ollama/server/routes.go:1087 +0x170 fp=0xc0004ed6f8 sp=0xc0004ed6a0 pc=0xf601b0
github.com/gin-gonic/gin.(*Context).Next(...)
github.com/gin-gonic/gin@v1.10.0/context.go:185
github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc00061c100)
github.com/gin-gonic/gin@v1.10.0/recovery.go:102 +0x7a fp=0xc0004ed748 sp=0xc0004ed6f8 pc=0xf2533a
github.com/gin-gonic/gin.(*Context).Next(...)
github.com/gin-gonic/gin@v1.10.0/context.go:185
github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc00061c100)
github.com/gin-gonic/gin@v1.10.0/logger.go:249 +0xe5 fp=0xc0004ed900 sp=0xc0004ed748 pc=0xf24465
github.com/gin-gonic/gin.(*Context).Next(...)
github.com/gin-gonic/gin@v1.10.0/context.go:185
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc000742b60, 0xc00061c100)
github.com/gin-gonic/gin@v1.10.0/gin.go:633 +0x892 fp=0xc0004edad8 sp=0xc0004ed900 pc=0xf23852
github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc000742b60, {0x3fb0b428, 0xc0001920e0}, 0xc00059e000)
github.com/gin-gonic/gin@v1.10.0/gin.go:589 +0x1b2 fp=0xc0004edb10 sp=0xc0004edad8 pc=0xf22df2
net/http.(*ServeMux).ServeHTTP(0x445305?, {0x3fb0b428, 0xc0001920e0}, 0xc00059e000)
net/http/server.go:2688 +0x1ad fp=0xc0004edb60 sp=0xc0004edb10 pc=0x78f74d
net/http.serverHandler.ServeHTTP({0x3fb08890?}, {0x3fb0b428?, 0xc0001920e0?}, 0x6?)
net/http/server.go:3142 +0x8e fp=0xc0004edb90 sp=0xc0004edb60 pc=0x790f4e
fatal error: index out of range
panic during panic
runtime stack:
runtime.throw({0x12ed114?, 0x7f3e47ffe2c0?})
runtime/panic.go:1023 +0x5c fp=0x7f3e47ffe280 sp=0x7f3e47ffe250 pc=0x46f13c
runtime.panicCheck1(0x2?, {0x12ed114, 0x12})
runtime/panic.go:58 +0x94 fp=0x7f3e47ffe2a0 sp=0x7f3e47ffe280 pc=0x46d094
runtime.goPanicIndex(0x832334, 0x60c7)
runtime/panic.go:113 +0x2e fp=0x7f3e47ffe2e0 sp=0x7f3e47ffe2a0 pc=0x46d14e
runtime.findfunc(0x471674?)
runtime/symtab.go:791 +0x119 fp=0x7f3e47ffe300 sp=0x7f3e47ffe2e0 pc=0x48ebb9
runtime.(*unwinder).next(0x7f3e47ffe6c8)
runtime/traceback.go:449 +0x4c fp=0x7f3e47ffe378 sp=0x7f3e47ffe300 pc=0x495d8c
runtime.traceback2(0x7f3e47ffe6c8, 0x0, 0x0, 0x25)
runtime/traceback.go:981 +0x125 fp=0x7f3e47ffe5d8 sp=0x7f3e47ffe378 pc=0x497745
runtime.traceback1.func1(0x0)
runtime/traceback.go:917 +0x66 fp=0x7f3e47ffe6a0 sp=0x7f3e47ffe5d8 pc=0x4974e6
runtime.traceback1(0xc000547880?, 0x7f3e47ffe900?, 0x471674?, 0xc000547880, 0x80?)
runtime/traceback.go:940 +0x20f fp=0x7f3e47ffe8a8 sp=0x7f3e47ffe6a0 pc=0x49734f
runtime.traceback(...)
runtime/traceback.go:817
runtime.tracebackothers(0xc000102a80)
runtime/traceback.go:1235 +0x92 fp=0x7f3e47ffe910 sp=0x7f3e47ffe8a8 pc=0x498c72
runtime.dopanic_m(0xc000102a80, 0x46f13c, 0x7f3e47ffea00)
runtime/panic.go:1345 +0x29e fp=0x7f3e47ffe980 sp=0x7f3e47ffe910 pc=0x46fbfe
runtime.fatalthrow.func1()
runtime/panic.go:1199 +0x6b fp=0x7f3e47ffe9c0 sp=0x7f3e47ffe980 pc=0x46f62b
runtime.fatalthrow(0x47ffea08?)
runtime/panic.go:1192 +0x65 fp=0x7f3e47ffea00 sp=0x7f3e47ffe9c0 pc=0x46f585
runtime.throw({0x12ed114?, 0x4026e8c0?})
runtime/panic.go:1023 +0x5c fp=0x7f3e47ffea30 sp=0x7f3e47ffea00 pc=0x46f13c
runtime.panicCheck1(0xc000742c80?, {0x12ed114, 0x12})
runtime/panic.go:58 +0x94 fp=0x7f3e47ffea50 sp=0x7f3e47ffea30 pc=0x46d094
runtime.goPanicIndex(0x832334, 0x60c7)
runtime/panic.go:113 +0x2e fp=0x7f3e47ffea90 sp=0x7f3e47ffea50 pc=0x46d14e
runtime.findfunc(0x7f3e47ffeb60?)
runtime/symtab.go:791 +0x119 fp=0x7f3e47ffeab0 sp=0x7f3e47ffea90 pc=0x48ebb9
runtime.(*unwinder).next(0x7f3e47ffeb60)
runtime/traceback.go:449 +0x4c fp=0x7f3e47ffeb28 sp=0x7f3e47ffeab0 pc=0x495d8c
runtime.copystack(0xc000547880, 0x800000002?)
runtime/stack.go:930 +0x2f4 fp=0x7f3e47ffec20 sp=0x7f3e47ffeb28 pc=0x48a174
runtime.newstack()
runtime/stack.go:1112 +0x489 fp=0x7f3e47ffedd0 sp=0x7f3e47ffec20 pc=0x48a749
runtime.morestack()
runtime/asm_amd64.s:616 +0x7a fp=0x7f3e47ffedd8 sp=0x7f3e47ffedd0 pc=0x4a30ba
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7904/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5687
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5687/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5687/comments
|
https://api.github.com/repos/ollama/ollama/issues/5687/events
|
https://github.com/ollama/ollama/issues/5687
| 2,407,312,180
|
I_kwDOJ0Z1Ps6PfKs0
| 5,687
|
/api/chat role Enum became case sensitive.
|
{
"login": "wkr1337",
"id": 28607631,
"node_id": "MDQ6VXNlcjI4NjA3NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/28607631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wkr1337",
"html_url": "https://github.com/wkr1337",
"followers_url": "https://api.github.com/users/wkr1337/followers",
"following_url": "https://api.github.com/users/wkr1337/following{/other_user}",
"gists_url": "https://api.github.com/users/wkr1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wkr1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkr1337/subscriptions",
"organizations_url": "https://api.github.com/users/wkr1337/orgs",
"repos_url": "https://api.github.com/users/wkr1337/repos",
"events_url": "https://api.github.com/users/wkr1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/wkr1337/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-07-14T07:46:55
| 2024-07-15T20:55:58
| 2024-07-15T20:55:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I updated from version 0.1.48 to version 0.2.5.
After the update, the /api/chat endpoint changed: the `role` field inside the `messages` object became **case sensitive**.
Here is an example request that used to work before the update:
```json
{
  "model": "llama3",
  "messages": [
    {
      "role": "USER",
      "content": "Why is the sky blue?"
    }
  ],
  "options": {},
  "stream": false
}
```
But when I run the request now, I get what looks like a random response:
```json
{
  "model": "llama3",
  "created_at": "2024-07-14T07:41:45.0417467Z",
  "message": {
    "role": "assistant",
    "content": "I'd be happy to help you with your question. Can you please provide more context or clarify what you mean by \"What is the most effective way to learn English\"? Are you looking for tips on how to improve your grammar, vocabulary, listening, speaking, reading, or writing skills? Or are you wondering about language learning apps, courses, or resources that can help you achieve your goals?\n\nHere are some general tips that might be helpful:\n\n1. **Set specific goals**: Identify what you want to achieve in English, whether it's for personal or professional reasons.\n2. **Practice consistently**: Make time each day to practice speaking, listening, reading, and writing in English.\n3. **Focus on grammar and vocabulary**: Understanding the rules of grammar and building your vocabulary are essential for effective communication.\n4. **Immerse yourself in the language**: Listen to podcasts, watch TV shows or movies, read books or articles, and engage with native speakers to improve your listening and speaking skills.\n5. **Use language learning apps and resources**: There are many great apps, websites, and courses that can help you learn English, such as Duolingo, Coursera, or edX.\n\nLet me know if you have any specific questions or areas of concern, and I'll do my best to assist you!"
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 2718834100,
  "load_duration": 10797100,
  "prompt_eval_count": 5,
  "prompt_eval_duration": 95640000,
  "eval_count": 266,
  "eval_duration": 2611441000
}
```
When I change the `role` from "`USER`" to "`user`", it works again.
I was unable to find this change in the release notes.
I'm using langChain4j to communicate with Ollama, and the langChain4j library sends the role in all uppercase letters.
If this is not an issue in Ollama, I will create an issue for the langChain4j library.
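Until the client library adapts, a caller can lowercase the roles before sending the request; a minimal workaround sketch in Python, following the payload shape shown above:

```python
def normalize_roles(payload: dict) -> dict:
    """Lowercase the role of every chat message so that a
    case-sensitive /api/chat endpoint recognizes it."""
    for message in payload.get("messages", []):
        role = message.get("role")
        if isinstance(role, str):
            message["role"] = role.lower()
    return payload
```

This mutates the payload in place and returns it, so it can be dropped into an existing request pipeline just before serialization.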
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.5
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5687/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5687/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4320
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4320/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4320/comments
|
https://api.github.com/repos/ollama/ollama/issues/4320/events
|
https://github.com/ollama/ollama/pull/4320
| 2,290,273,418
|
PR_kwDOJ0Z1Ps5vILX6
| 4,320
|
add phi2 mem
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-10T19:13:55
| 2024-05-10T19:35:09
| 2024-05-10T19:35:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4320",
"html_url": "https://github.com/ollama/ollama/pull/4320",
"diff_url": "https://github.com/ollama/ollama/pull/4320.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4320.patch",
"merged_at": "2024-05-10T19:35:08"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4320/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1999
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1999/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1999/comments
|
https://api.github.com/repos/ollama/ollama/issues/1999/events
|
https://github.com/ollama/ollama/pull/1999
| 2,081,695,499
|
PR_kwDOJ0Z1Ps5kEtz8
| 1,999
|
Fix CPU-only build under Android Termux environment.
|
{
"login": "lainedfles",
"id": 126992880,
"node_id": "U_kgDOB5HB8A",
"avatar_url": "https://avatars.githubusercontent.com/u/126992880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lainedfles",
"html_url": "https://github.com/lainedfles",
"followers_url": "https://api.github.com/users/lainedfles/followers",
"following_url": "https://api.github.com/users/lainedfles/following{/other_user}",
"gists_url": "https://api.github.com/users/lainedfles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lainedfles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lainedfles/subscriptions",
"organizations_url": "https://api.github.com/users/lainedfles/orgs",
"repos_url": "https://api.github.com/users/lainedfles/repos",
"events_url": "https://api.github.com/users/lainedfles/events{/privacy}",
"received_events_url": "https://api.github.com/users/lainedfles/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-15T10:13:07
| 2024-01-28T23:04:20
| 2024-01-19T01:16:54
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1999",
"html_url": "https://github.com/ollama/ollama/pull/1999",
"diff_url": "https://github.com/ollama/ollama/pull/1999.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1999.patch",
"merged_at": "2024-01-19T01:16:54"
}
|
Updates gpu.go `initGPUHandles()` to declare the `gpuHandles` variable before reading it. This resolves an "invalid memory address or nil pointer dereference" error.
Updates dyn_ext_server.c to avoid setting the `RTLD_DEEPBIND` flag under `__TERMUX__` (Android).
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1999/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8587
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8587/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8587/comments
|
https://api.github.com/repos/ollama/ollama/issues/8587/events
|
https://github.com/ollama/ollama/pull/8587
| 2,811,285,359
|
PR_kwDOJ0Z1Ps6I_qRI
| 8,587
|
llm: update library lookup logic now that there is one runner
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2025-01-26T02:49:27
| 2025-01-29T05:07:50
| 2025-01-29T05:07:49
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8587",
"html_url": "https://github.com/ollama/ollama/pull/8587",
"diff_url": "https://github.com/ollama/ollama/pull/8587.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8587.patch",
"merged_at": "2025-01-29T05:07:49"
}
|
This removes the `runners` package now that there is only a single runner executable built (the `ollama` binary itself!). It tries to minimize changes to `discover` and `gpu` where possible.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8587/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/487
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/487/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/487/comments
|
https://api.github.com/repos/ollama/ollama/issues/487/events
|
https://github.com/ollama/ollama/pull/487
| 1,886,532,209
|
PR_kwDOJ0Z1Ps5Z0QgK
| 487
|
update dockerignore
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-07T20:36:33
| 2023-09-07T21:16:18
| 2023-09-07T21:16:17
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/487",
"html_url": "https://github.com/ollama/ollama/pull/487",
"diff_url": "https://github.com/ollama/ollama/pull/487.diff",
"patch_url": "https://github.com/ollama/ollama/pull/487.patch",
"merged_at": "2023-09-07T21:16:17"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/487/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7467
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7467/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7467/comments
|
https://api.github.com/repos/ollama/ollama/issues/7467/events
|
https://github.com/ollama/ollama/pull/7467
| 2,629,985,838
|
PR_kwDOJ0Z1Ps6ArXdL
| 7,467
|
Align rocm compiler flags
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-01T22:51:17
| 2024-11-07T18:20:53
| 2024-11-07T18:20:51
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7467",
"html_url": "https://github.com/ollama/ollama/pull/7467",
"diff_url": "https://github.com/ollama/ollama/pull/7467.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7467.patch",
"merged_at": "2024-11-07T18:20:51"
}
|
Bring consistency with the old generate script behavior
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7467/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7467/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4086
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4086/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4086/comments
|
https://api.github.com/repos/ollama/ollama/issues/4086/events
|
https://github.com/ollama/ollama/pull/4086
| 2,274,041,399
|
PR_kwDOJ0Z1Ps5uR5_v
| 4,086
|
Add preflight OPTIONS handling and update CORS config
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-01T19:14:45
| 2024-05-08T20:14:01
| 2024-05-08T20:14:00
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4086",
"html_url": "https://github.com/ollama/ollama/pull/4086",
"diff_url": "https://github.com/ollama/ollama/pull/4086.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4086.patch",
"merged_at": "2024-05-08T20:14:00"
}
|
Couple of tweaks to our CORS configuration and how we handle `OPTIONS` requests. This update is geared towards making our service more compatible with clients originally designed to work with OpenAI, where sending an `Authorization` header is common.
#### Details of Changes
1. **Handling OPTIONS Requests**: I added a quick return for `OPTIONS` requests in our `allowedHostsMiddleware`. This means we're now ending these preflight requests with a 204 (No Content) status right off the bat.
2. **Updating CORS for Authorization Headers**: Since some of the Ollama clients automatically send an `Authorization` header (because they're set up for OpenAI), I've updated our CORS config to accept this header. This ensures these clients can interact with our service without hitting CORS errors.
#### Security
Since we're not currently using the `Authorization` header for our own authentication, allowing this header doesn't open us up to new security risks as long as we don't have auth.
Enabling the `OPTIONS` method is mainly about letting browsers do their preflight check when they see that `Authorization` header. It's pretty standard and doesn't pose a direct risk by itself as far as I am aware.
resolves #4001
resolves #3983
resolves https://github.com/ollama/ollama-js/issues/80
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4086/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4086/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6183
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6183/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6183/comments
|
https://api.github.com/repos/ollama/ollama/issues/6183/events
|
https://github.com/ollama/ollama/issues/6183
| 2,449,023,581
|
I_kwDOJ0Z1Ps6R-SJd
| 6,183
|
LINE FEED problems in recent commit
|
{
"login": "FellowTraveler",
"id": 339191,
"node_id": "MDQ6VXNlcjMzOTE5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/339191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FellowTraveler",
"html_url": "https://github.com/FellowTraveler",
"followers_url": "https://api.github.com/users/FellowTraveler/followers",
"following_url": "https://api.github.com/users/FellowTraveler/following{/other_user}",
"gists_url": "https://api.github.com/users/FellowTraveler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FellowTraveler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FellowTraveler/subscriptions",
"organizations_url": "https://api.github.com/users/FellowTraveler/orgs",
"repos_url": "https://api.github.com/users/FellowTraveler/repos",
"events_url": "https://api.github.com/users/FellowTraveler/events{/privacy}",
"received_events_url": "https://api.github.com/users/FellowTraveler/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-08-05T16:55:13
| 2024-08-11T06:24:34
| 2024-08-11T06:24:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Since I grabbed the latest code, git IMMEDIATELY tells me that I have unstaged changes and won't let me check out other branches. It also makes it impossible to rebase, etc. `git stash` doesn't fix it. There is some kind of line-feed issue, probably introduced in a very recent merge.
**I can't be the only one who has noticed this problem.**
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
LATEST main branch
|
{
"login": "FellowTraveler",
"id": 339191,
"node_id": "MDQ6VXNlcjMzOTE5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/339191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FellowTraveler",
"html_url": "https://github.com/FellowTraveler",
"followers_url": "https://api.github.com/users/FellowTraveler/followers",
"following_url": "https://api.github.com/users/FellowTraveler/following{/other_user}",
"gists_url": "https://api.github.com/users/FellowTraveler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FellowTraveler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FellowTraveler/subscriptions",
"organizations_url": "https://api.github.com/users/FellowTraveler/orgs",
"repos_url": "https://api.github.com/users/FellowTraveler/repos",
"events_url": "https://api.github.com/users/FellowTraveler/events{/privacy}",
"received_events_url": "https://api.github.com/users/FellowTraveler/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6183/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7452
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7452/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7452/comments
|
https://api.github.com/repos/ollama/ollama/issues/7452/events
|
https://github.com/ollama/ollama/issues/7452
| 2,627,583,362
|
I_kwDOJ0Z1Ps6cnb2C
| 7,452
|
makefiles should verify compiler before trying to build GPU target
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7700262114,
"node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g",
"url": "https://api.github.com/repos/ollama/ollama/labels/build",
"name": "build",
"color": "006b75",
"default": false,
"description": "Issues relating to building ollama from source"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-10-31T18:42:01
| 2024-12-10T17:47:21
| 2024-12-10T17:47:21
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
If you have the GPU libraries present, but not the compiler, we'll try to build and fail with strange errors from ccache since no compiler command was passed in.
```
/usr/bin/ccache -c -fPIC -D_GNU_SOURCE -fPIC -Wno-unused-function -std=gnu++11 -mavx -parallel-jobs=2 -c -O3 -DGGML_USE_CUDA -DGGML_BUILD=1 -DGGML_SHARED=1 -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_HIPBLAS -DGGML_USE_LLAMAFILE -DHIP_FAST_MATH -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -DNDEBUG -DK_QUANTS_PER_ITERATION=2 -D_CRT_SECURE_NO_WARNINGS -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -mllvm=-amdgpu-early-inline-all=true -mllvm=-amdgpu-function-calls=false -Wno-expansion-to-defined -Wno-invalid-noreturn -Wno-ignored-attributes -Wno-pass-failed -Wno-deprecated-declarations -Wno-unused-result -I. --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -o /home/mike/ollama/llama/build/linux-amd64/ggml-cuda.rocm.o ggml-cuda.cu
/usr/bin/ccache: invalid option -- 'f'
make[2]: *** [make/gpu.make:75: /home/mike/ollama/llama/build/linux-amd64/ggml-cuda.rocm.o] Error 1
```
### OS
Linux, Windows
### GPU
Nvidia, AMD
### CPU
_No response_
### Ollama version
0.4.0
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7452/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7452/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1454
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1454/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1454/comments
|
https://api.github.com/repos/ollama/ollama/issues/1454/events
|
https://github.com/ollama/ollama/issues/1454
| 2,034,331,723
|
I_kwDOJ0Z1Ps55QXBL
| 1,454
|
Repeated output during use
|
{
"login": "duyaofei",
"id": 6417789,
"node_id": "MDQ6VXNlcjY0MTc3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6417789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duyaofei",
"html_url": "https://github.com/duyaofei",
"followers_url": "https://api.github.com/users/duyaofei/followers",
"following_url": "https://api.github.com/users/duyaofei/following{/other_user}",
"gists_url": "https://api.github.com/users/duyaofei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duyaofei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duyaofei/subscriptions",
"organizations_url": "https://api.github.com/users/duyaofei/orgs",
"repos_url": "https://api.github.com/users/duyaofei/repos",
"events_url": "https://api.github.com/users/duyaofei/events{/privacy}",
"received_events_url": "https://api.github.com/users/duyaofei/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-12-10T11:31:10
| 2024-03-12T21:18:55
| 2024-03-12T21:18:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Today, while running yi:34b-chat q4_K_M with Ollama, I ran into repetitive, looping output. I entered the same prompt on the model's official web page and the output was normal there, so I suspect the problem comes from invisible control characters in the output.
Thank you for your hard work. I hope you find time to solve this problem.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1454/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5813
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5813/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5813/comments
|
https://api.github.com/repos/ollama/ollama/issues/5813/events
|
https://github.com/ollama/ollama/issues/5813
| 2,420,953,395
|
I_kwDOJ0Z1Ps6QTNEz
| 5,813
|
Bug: ToolCall issue
|
{
"login": "KSemenenko",
"id": 4385716,
"node_id": "MDQ6VXNlcjQzODU3MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4385716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KSemenenko",
"html_url": "https://github.com/KSemenenko",
"followers_url": "https://api.github.com/users/KSemenenko/followers",
"following_url": "https://api.github.com/users/KSemenenko/following{/other_user}",
"gists_url": "https://api.github.com/users/KSemenenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KSemenenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KSemenenko/subscriptions",
"organizations_url": "https://api.github.com/users/KSemenenko/orgs",
"repos_url": "https://api.github.com/users/KSemenenko/repos",
"events_url": "https://api.github.com/users/KSemenenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/KSemenenko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-20T15:54:00
| 2024-07-20T16:09:04
| 2024-07-20T16:09:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**Describe the bug**
Tool Calling parameters
**To Reproduce**
I use Ollama with the OpenAI API.
I used this model:
https://ollama.com/library/llama3-groq-tool-use
and I'm doing function calling (with gpt-4o this code works perfectly).
But I get this as a text answer from the model:
```
<tool_call> {“id”: 0, “name”: “createReservation-Answer”, “arguments”: {“amount”: 1, “roomName”: “Single Room”, “startDate”: “2024-07-21”, “endDate”: “2024-07-23”}} </tool_call>
```
and the function isn't called.
**Expected behavior**
I expected my function to be called.
maybe related https://github.com/microsoft/semantic-kernel/issues/7376
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.2.7
|
{
"login": "KSemenenko",
"id": 4385716,
"node_id": "MDQ6VXNlcjQzODU3MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4385716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KSemenenko",
"html_url": "https://github.com/KSemenenko",
"followers_url": "https://api.github.com/users/KSemenenko/followers",
"following_url": "https://api.github.com/users/KSemenenko/following{/other_user}",
"gists_url": "https://api.github.com/users/KSemenenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KSemenenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KSemenenko/subscriptions",
"organizations_url": "https://api.github.com/users/KSemenenko/orgs",
"repos_url": "https://api.github.com/users/KSemenenko/repos",
"events_url": "https://api.github.com/users/KSemenenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/KSemenenko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5813/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/463
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/463/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/463/comments
|
https://api.github.com/repos/ollama/ollama/issues/463/events
|
https://github.com/ollama/ollama/pull/463
| 1,879,245,066
|
PR_kwDOJ0Z1Ps5ZbhuY
| 463
|
fix not forwarding last token
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-03T21:48:36
| 2023-09-05T16:01:33
| 2023-09-05T16:01:32
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/463",
"html_url": "https://github.com/ollama/ollama/pull/463",
"diff_url": "https://github.com/ollama/ollama/pull/463.diff",
"patch_url": "https://github.com/ollama/ollama/pull/463.patch",
"merged_at": "2023-09-05T16:01:32"
}
|
llama.cpp server serves the last token along with `stop: true`
also remove unused fields
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/463/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4234
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4234/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4234/comments
|
https://api.github.com/repos/ollama/ollama/issues/4234/events
|
https://github.com/ollama/ollama/issues/4234
| 2,283,996,971
|
I_kwDOJ0Z1Ps6IIwcr
| 4,234
|
Customized LLaVA Setup
|
{
"login": "zhangry868",
"id": 6694822,
"node_id": "MDQ6VXNlcjY2OTQ4MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6694822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangry868",
"html_url": "https://github.com/zhangry868",
"followers_url": "https://api.github.com/users/zhangry868/followers",
"following_url": "https://api.github.com/users/zhangry868/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangry868/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangry868/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangry868/subscriptions",
"organizations_url": "https://api.github.com/users/zhangry868/orgs",
"repos_url": "https://api.github.com/users/zhangry868/repos",
"events_url": "https://api.github.com/users/zhangry868/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangry868/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-07T18:42:46
| 2024-05-07T23:56:05
| 2024-05-07T23:56:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I wonder whether there is a guideline on hosting a customized LLaVA model. I have gguf files for both the mm-projector and the base model. Feel free to point me to any related materials/links.
Many thanks,
Rui
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4234/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2146
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2146/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2146/comments
|
https://api.github.com/repos/ollama/ollama/issues/2146/events
|
https://github.com/ollama/ollama/pull/2146
| 2,094,810,743
|
PR_kwDOJ0Z1Ps5kxPTJ
| 2,146
|
add keep_alive to generate/chat/embedding api endpoints
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 14
| 2024-01-22T21:47:04
| 2024-08-11T21:04:54
| 2024-01-26T22:28:02
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2146",
"html_url": "https://github.com/ollama/ollama/pull/2146",
"diff_url": "https://github.com/ollama/ollama/pull/2146.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2146.patch",
"merged_at": "2024-01-26T22:28:02"
}
|
This change adds a new `keep_alive` parameter to `/api/generate` which controls how long a model stays loaded in memory. There are three cases:
1. if `keep_alive` is not set, the model will stay loaded for the default value (5 minutes);
2. if `keep_alive` is set to a positive duration (e.g. "20m"), it will stay loaded for that duration;
3. if `keep_alive` is set to a negative duration (e.g. "-1m"), it will stay loaded indefinitely.
If you wish the model to be unloaded immediately after generation, you can set it to "0m", or even just `0`. Also, perhaps *most importantly*, subsequent calls to `/api/generate` will change the load duration, so even if you called it once with a negative value and the next caller omits it, the model will still only stay in memory for 5 minutes after the second call.
Note that this change only applies to `/api/generate`. We can either layer the changes for `/api/chat` on top of this change, or push them as a separate PR.
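For illustration, request bodies for the three cases might look like this (the model name and prompt are just placeholders):

```
# stay loaded for 20 minutes after this request
{"model": "llama2", "prompt": "hi", "keep_alive": "20m"}
# stay loaded indefinitely
{"model": "llama2", "prompt": "hi", "keep_alive": "-1m"}
# unload immediately after generation
{"model": "llama2", "prompt": "hi", "keep_alive": 0}
```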
resolves #1339
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2146/reactions",
"total_count": 19,
"+1": 19,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2146/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1149
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1149/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1149/comments
|
https://api.github.com/repos/ollama/ollama/issues/1149/events
|
https://github.com/ollama/ollama/issues/1149
| 1,996,069,334
|
I_kwDOJ0Z1Ps52-ZnW
| 1,149
|
No such host no matter what model I pull
|
{
"login": "chnsh",
"id": 7926657,
"node_id": "MDQ6VXNlcjc5MjY2NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7926657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chnsh",
"html_url": "https://github.com/chnsh",
"followers_url": "https://api.github.com/users/chnsh/followers",
"following_url": "https://api.github.com/users/chnsh/following{/other_user}",
"gists_url": "https://api.github.com/users/chnsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chnsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chnsh/subscriptions",
"organizations_url": "https://api.github.com/users/chnsh/orgs",
"repos_url": "https://api.github.com/users/chnsh/repos",
"events_url": "https://api.github.com/users/chnsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/chnsh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2023-11-16T05:06:12
| 2023-11-27T07:07:52
| 2023-11-27T07:07:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello 👋
Thank you so much for developing this project. I am excited to use it in my day-to-day work, but when I pull any model (for example, `ollama pull codellama:7b-instruct`) I get the error below. This is true for all models. I am wondering if I am missing any steps.
I installed this app from https://ollama.ai/download
```
pulling manifest
Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/3a/3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20231116%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20231116T045920Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=173709c13b6930d42a44fb994a03ce05da90590a40150253ae5240158c43ce84": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host
```
|
{
"login": "chnsh",
"id": 7926657,
"node_id": "MDQ6VXNlcjc5MjY2NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7926657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chnsh",
"html_url": "https://github.com/chnsh",
"followers_url": "https://api.github.com/users/chnsh/followers",
"following_url": "https://api.github.com/users/chnsh/following{/other_user}",
"gists_url": "https://api.github.com/users/chnsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chnsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chnsh/subscriptions",
"organizations_url": "https://api.github.com/users/chnsh/orgs",
"repos_url": "https://api.github.com/users/chnsh/repos",
"events_url": "https://api.github.com/users/chnsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/chnsh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1149/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2966
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2966/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2966/comments
|
https://api.github.com/repos/ollama/ollama/issues/2966/events
|
https://github.com/ollama/ollama/pull/2966
| 2,172,742,801
|
PR_kwDOJ0Z1Ps5o6ABG
| 2,966
|
Add ROCm support to linux install script
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-07T01:10:38
| 2024-03-15T01:00:17
| 2024-03-15T01:00:16
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2966",
"html_url": "https://github.com/ollama/ollama/pull/2966",
"diff_url": "https://github.com/ollama/ollama/pull/2966.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2966.patch",
"merged_at": "2024-03-15T01:00:16"
}
|
Merge after #2885 lands and the release is out, so that ROCm users don't fail to install because the dependency file is not yet available.
This depends on corresponding path changes in PR #3008
Prior to merging this, folks who want to install the pre-release on Radeon systems can use the following:
```
curl -fsSL https://raw.githubusercontent.com/dhiltgen/ollama/rocm_install/scripts/install.sh | OLLAMA_VERSION="0.1.29" sh
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2966/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4601
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4601/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4601/comments
|
https://api.github.com/repos/ollama/ollama/issues/4601/events
|
https://github.com/ollama/ollama/issues/4601
| 2,314,208,616
|
I_kwDOJ0Z1Ps6J8AVo
| 4,601
|
Error: llama runner process has terminated: signal: segmentation fault
|
{
"login": "guiniao",
"id": 44078253,
"node_id": "MDQ6VXNlcjQ0MDc4MjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/44078253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guiniao",
"html_url": "https://github.com/guiniao",
"followers_url": "https://api.github.com/users/guiniao/followers",
"following_url": "https://api.github.com/users/guiniao/following{/other_user}",
"gists_url": "https://api.github.com/users/guiniao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guiniao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guiniao/subscriptions",
"organizations_url": "https://api.github.com/users/guiniao/orgs",
"repos_url": "https://api.github.com/users/guiniao/repos",
"events_url": "https://api.github.com/users/guiniao/events{/privacy}",
"received_events_url": "https://api.github.com/users/guiniao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-05-24T02:33:24
| 2024-08-27T14:03:30
| 2024-05-24T23:05:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Running `ollama run codellama:34b` fails with:
```
pulling manifest
pulling f36b668ebcd3... 100% ▕████████████████████████████████████████▏  19 GB
pulling 2e0493f67d0c... 100% ▕████████████████████████████████████████▏   59 B
pulling c60122cb2728... 100% ▕████████████████████████████████████████▏  132 B
pulling d5981b4f8e77... 100% ▕████████████████████████████████████████▏  382 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: llama runner process has terminated: signal: segmentation fault
```
codellama:70b and codellama:13b run successfully.
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.38
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4601/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3391
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3391/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3391/comments
|
https://api.github.com/repos/ollama/ollama/issues/3391/events
|
https://github.com/ollama/ollama/pull/3391
| 2,213,932,616
|
PR_kwDOJ0Z1Ps5rF4p9
| 3,391
|
Update troubleshooting link
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-28T19:05:29
| 2024-03-28T20:15:57
| 2024-03-28T20:15:57
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3391",
"html_url": "https://github.com/ollama/ollama/pull/3391",
"diff_url": "https://github.com/ollama/ollama/pull/3391.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3391.patch",
"merged_at": "2024-03-28T20:15:57"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3391/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3077
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3077/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3077/comments
|
https://api.github.com/repos/ollama/ollama/issues/3077/events
|
https://github.com/ollama/ollama/pull/3077
| 2,181,546,460
|
PR_kwDOJ0Z1Ps5pX5tX
| 3,077
|
fix gpu_info_cuda.c compile warning
|
{
"login": "mofanke",
"id": 54242816,
"node_id": "MDQ6VXNlcjU0MjQyODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/54242816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mofanke",
"html_url": "https://github.com/mofanke",
"followers_url": "https://api.github.com/users/mofanke/followers",
"following_url": "https://api.github.com/users/mofanke/following{/other_user}",
"gists_url": "https://api.github.com/users/mofanke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mofanke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mofanke/subscriptions",
"organizations_url": "https://api.github.com/users/mofanke/orgs",
"repos_url": "https://api.github.com/users/mofanke/repos",
"events_url": "https://api.github.com/users/mofanke/events{/privacy}",
"received_events_url": "https://api.github.com/users/mofanke/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-12T12:50:59
| 2024-03-12T18:08:41
| 2024-03-12T18:08:40
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3077",
"html_url": "https://github.com/ollama/ollama/pull/3077",
"diff_url": "https://github.com/ollama/ollama/pull/3077.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3077.patch",
"merged_at": "2024-03-12T18:08:40"
}
|
Fix a compile warning:
```
gpu_info_cuda.c: In function ‘cuda_check_vram’:
gpu_info_cuda.c:158:20: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 4 has type ‘long long unsigned int’
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3077/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5104
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5104/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5104/comments
|
https://api.github.com/repos/ollama/ollama/issues/5104/events
|
https://github.com/ollama/ollama/issues/5104
| 2,358,269,569
|
I_kwDOJ0Z1Ps6MkFaB
| 5,104
|
Model requests Tiamat 7B & chronomaid 13B
|
{
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.github.com/users/AncientMystic/followers",
"following_url": "https://api.github.com/users/AncientMystic/following{/other_user}",
"gists_url": "https://api.github.com/users/AncientMystic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AncientMystic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AncientMystic/subscriptions",
"organizations_url": "https://api.github.com/users/AncientMystic/orgs",
"repos_url": "https://api.github.com/users/AncientMystic/repos",
"events_url": "https://api.github.com/users/AncientMystic/events{/privacy}",
"received_events_url": "https://api.github.com/users/AncientMystic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-17T21:13:16
| 2024-06-17T21:13:16
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Tiamat 7B and Chronomaid 13B are two of the best models I have found: both are mostly uncensored and quite good at fairly articulate responses on a wide range of topics.
Of all the models I have tried, these two are the best for their size and have the widest range and balance.
They will do general discussion, roleplaying/ERP, chat, general instructions, storytelling, etc.
Originals:
https://huggingface.co/Gryphe/Tiamat-7b
https://huggingface.co/NyxKrage/Chronomaid-Storytelling-13b
Gguf:
https://huggingface.co/TheBloke/Tiamat-7B-GGUF
https://huggingface.co/TheBloke/Chronomaid-Storytelling-13B-GGUF
Good alternate versions of tiamat:
https://huggingface.co/TheBloke/Tiamat-7B-1.1-DPO-GGUF
https://huggingface.co/bartowski/Tiamat-8b-1.2-Llama-3-DPO-GGUF
Many of these GGUF versions can be imported into Ollama fairly easily through open-webui, etc.,
but I thought I would share and recommend adding them, since I have tested many models, was happiest with these two specifically, and always find myself using them above all the others.
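For reference, the GGUF import the author describes can be sketched with a minimal Modelfile pointing at a downloaded file (the file name below is a hypothetical placeholder):

```
# Modelfile: the .gguf path is an example placeholder
FROM ./Tiamat-7B.Q4_K_M.gguf
```

Then `ollama create tiamat -f Modelfile` registers the model, and `ollama run tiamat` runs it.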
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5104/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2420
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2420/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2420/comments
|
https://api.github.com/repos/ollama/ollama/issues/2420/events
|
https://github.com/ollama/ollama/issues/2420
| 2,126,539,503
|
I_kwDOJ0Z1Ps5-wGrv
| 2,420
|
Will you add the "Smaug-72B" model?
|
{
"login": "konstantin1722",
"id": 55327489,
"node_id": "MDQ6VXNlcjU1MzI3NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/55327489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/konstantin1722",
"html_url": "https://github.com/konstantin1722",
"followers_url": "https://api.github.com/users/konstantin1722/followers",
"following_url": "https://api.github.com/users/konstantin1722/following{/other_user}",
"gists_url": "https://api.github.com/users/konstantin1722/gists{/gist_id}",
"starred_url": "https://api.github.com/users/konstantin1722/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/konstantin1722/subscriptions",
"organizations_url": "https://api.github.com/users/konstantin1722/orgs",
"repos_url": "https://api.github.com/users/konstantin1722/repos",
"events_url": "https://api.github.com/users/konstantin1722/events{/privacy}",
"received_events_url": "https://api.github.com/users/konstantin1722/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 23
| 2024-02-09T06:13:58
| 2024-03-12T17:21:12
| 2024-03-11T19:14:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It reportedly outperforms GPT-3.5, Mistral Medium, and Qwen-72B in many respects.
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2420/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2420/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7585
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7585/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7585/comments
|
https://api.github.com/repos/ollama/ollama/issues/7585/events
|
https://github.com/ollama/ollama/issues/7585
| 2,645,626,792
|
I_kwDOJ0Z1Ps6dsQ-o
| 7,585
|
why Ollama runs on CPU by default
|
{
"login": "yhz114514",
"id": 119857104,
"node_id": "U_kgDOByTf0A",
"avatar_url": "https://avatars.githubusercontent.com/u/119857104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yhz114514",
"html_url": "https://github.com/yhz114514",
"followers_url": "https://api.github.com/users/yhz114514/followers",
"following_url": "https://api.github.com/users/yhz114514/following{/other_user}",
"gists_url": "https://api.github.com/users/yhz114514/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yhz114514/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yhz114514/subscriptions",
"organizations_url": "https://api.github.com/users/yhz114514/orgs",
"repos_url": "https://api.github.com/users/yhz114514/repos",
"events_url": "https://api.github.com/users/yhz114514/events{/privacy}",
"received_events_url": "https://api.github.com/users/yhz114514/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-11-09T04:48:46
| 2024-11-09T12:41:18
| 2024-11-09T12:41:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
My device: NVIDIA RTX 4070 with 12 GB of VRAM, about 10 GB of which is free.
Even when running a 7B model that fits comfortably in the remaining video memory, Ollama is always forced onto the CPU, whose performance is much lower than the GPU's.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
latest
|
{
"login": "yhz114514",
"id": 119857104,
"node_id": "U_kgDOByTf0A",
"avatar_url": "https://avatars.githubusercontent.com/u/119857104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yhz114514",
"html_url": "https://github.com/yhz114514",
"followers_url": "https://api.github.com/users/yhz114514/followers",
"following_url": "https://api.github.com/users/yhz114514/following{/other_user}",
"gists_url": "https://api.github.com/users/yhz114514/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yhz114514/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yhz114514/subscriptions",
"organizations_url": "https://api.github.com/users/yhz114514/orgs",
"repos_url": "https://api.github.com/users/yhz114514/repos",
"events_url": "https://api.github.com/users/yhz114514/events{/privacy}",
"received_events_url": "https://api.github.com/users/yhz114514/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7585/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7163
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7163/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7163/comments
|
https://api.github.com/repos/ollama/ollama/issues/7163/events
|
https://github.com/ollama/ollama/issues/7163
| 2,579,319,049
|
I_kwDOJ0Z1Ps6ZvUkJ
| 7,163
|
Ollama does not run
|
{
"login": "d3tk",
"id": 90400076,
"node_id": "MDQ6VXNlcjkwNDAwMDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/90400076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d3tk",
"html_url": "https://github.com/d3tk",
"followers_url": "https://api.github.com/users/d3tk/followers",
"following_url": "https://api.github.com/users/d3tk/following{/other_user}",
"gists_url": "https://api.github.com/users/d3tk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d3tk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d3tk/subscriptions",
"organizations_url": "https://api.github.com/users/d3tk/orgs",
"repos_url": "https://api.github.com/users/d3tk/repos",
"events_url": "https://api.github.com/users/d3tk/events{/privacy}",
"received_events_url": "https://api.github.com/users/d3tk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 45
| 2024-10-10T16:27:46
| 2024-11-05T20:03:37
| 2024-11-05T20:03:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The process never completes when I try to run `ollama run` or `ollama list`.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12
|
{
"login": "d3tk",
"id": 90400076,
"node_id": "MDQ6VXNlcjkwNDAwMDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/90400076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d3tk",
"html_url": "https://github.com/d3tk",
"followers_url": "https://api.github.com/users/d3tk/followers",
"following_url": "https://api.github.com/users/d3tk/following{/other_user}",
"gists_url": "https://api.github.com/users/d3tk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d3tk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d3tk/subscriptions",
"organizations_url": "https://api.github.com/users/d3tk/orgs",
"repos_url": "https://api.github.com/users/d3tk/repos",
"events_url": "https://api.github.com/users/d3tk/events{/privacy}",
"received_events_url": "https://api.github.com/users/d3tk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7163/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1494
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1494/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1494/comments
|
https://api.github.com/repos/ollama/ollama/issues/1494/events
|
https://github.com/ollama/ollama/issues/1494
| 2,038,852,557
|
I_kwDOJ0Z1Ps55hmvN
| 1,494
|
suggestion: download models to home directory instead of `/usr/share/` on linux ?
|
{
"login": "hualet",
"id": 2023967,
"node_id": "MDQ6VXNlcjIwMjM5Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2023967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hualet",
"html_url": "https://github.com/hualet",
"followers_url": "https://api.github.com/users/hualet/followers",
"following_url": "https://api.github.com/users/hualet/following{/other_user}",
"gists_url": "https://api.github.com/users/hualet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hualet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hualet/subscriptions",
"organizations_url": "https://api.github.com/users/hualet/orgs",
"repos_url": "https://api.github.com/users/hualet/repos",
"events_url": "https://api.github.com/users/hualet/events{/privacy}",
"received_events_url": "https://api.github.com/users/hualet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2023-12-13T02:43:35
| 2024-06-25T18:01:25
| 2023-12-25T14:27:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I suggest that models be downloaded to a home directory such as `~/.ollama/models` instead of `/usr/share/ollama/.ollama/models`, since the convention is to keep data under home rather than the root partition.
I didn't create my root partition with enough capacity and encountered this :joy:

|
{
"login": "hualet",
"id": 2023967,
"node_id": "MDQ6VXNlcjIwMjM5Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2023967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hualet",
"html_url": "https://github.com/hualet",
"followers_url": "https://api.github.com/users/hualet/followers",
"following_url": "https://api.github.com/users/hualet/following{/other_user}",
"gists_url": "https://api.github.com/users/hualet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hualet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hualet/subscriptions",
"organizations_url": "https://api.github.com/users/hualet/orgs",
"repos_url": "https://api.github.com/users/hualet/repos",
"events_url": "https://api.github.com/users/hualet/events{/privacy}",
"received_events_url": "https://api.github.com/users/hualet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1494/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1494/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3175
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3175/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3175/comments
|
https://api.github.com/repos/ollama/ollama/issues/3175/events
|
https://github.com/ollama/ollama/issues/3175
| 2,189,666,086
|
I_kwDOJ0Z1Ps6Cg6cm
| 3,175
|
Run Mixtral-8x7B on Consumer Hardware with Expert Offloading
|
{
"login": "arjunkrishna",
"id": 5271912,
"node_id": "MDQ6VXNlcjUyNzE5MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5271912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arjunkrishna",
"html_url": "https://github.com/arjunkrishna",
"followers_url": "https://api.github.com/users/arjunkrishna/followers",
"following_url": "https://api.github.com/users/arjunkrishna/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunkrishna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arjunkrishna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunkrishna/subscriptions",
"organizations_url": "https://api.github.com/users/arjunkrishna/orgs",
"repos_url": "https://api.github.com/users/arjunkrishna/repos",
"events_url": "https://api.github.com/users/arjunkrishna/events{/privacy}",
"received_events_url": "https://api.github.com/users/arjunkrishna/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-16T00:52:15
| 2024-03-16T01:24:02
| 2024-03-16T01:13:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
mixtral:8x7b runs slowly on an RTX 3090 because the model is too large to fit in VRAM.
### How should we solve this?
This article describes offloading some experts to make it run faster:
https://kaitchup.substack.com/p/run-mixtral-8x7b-on-consumer-hardware
If you have already implemented this in ollama, then I apologize.
### What is the impact of not solving this?
Mixtral could run faster on an RTX 3090.
### Anything else?
https://github.com/dvmazur/mixtral-offloading
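Ollama's existing partial-offload knob is the `num_gpu` parameter, which sets how many layers are placed on the GPU; it is coarser than the per-expert offloading in the linked project, but a hedged sketch of tuning it would be:

```
# Modelfile sketch — num_gpu controls how many layers go to the GPU
# (the value below is an illustrative guess, not a tuned number)
FROM mixtral:8x7b
PARAMETER num_gpu 20
```

Then `ollama create mixtral-partial -f Modelfile` builds the variant; raising or lowering `num_gpu` trades VRAM use against speed.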
|
{
"login": "arjunkrishna",
"id": 5271912,
"node_id": "MDQ6VXNlcjUyNzE5MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5271912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arjunkrishna",
"html_url": "https://github.com/arjunkrishna",
"followers_url": "https://api.github.com/users/arjunkrishna/followers",
"following_url": "https://api.github.com/users/arjunkrishna/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunkrishna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arjunkrishna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunkrishna/subscriptions",
"organizations_url": "https://api.github.com/users/arjunkrishna/orgs",
"repos_url": "https://api.github.com/users/arjunkrishna/repos",
"events_url": "https://api.github.com/users/arjunkrishna/events{/privacy}",
"received_events_url": "https://api.github.com/users/arjunkrishna/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3175/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2984
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2984/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2984/comments
|
https://api.github.com/repos/ollama/ollama/issues/2984/events
|
https://github.com/ollama/ollama/issues/2984
| 2,174,429,341
|
I_kwDOJ0Z1Ps6Bmyid
| 2,984
|
Examples without code
|
{
"login": "slovanos",
"id": 48527469,
"node_id": "MDQ6VXNlcjQ4NTI3NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/48527469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slovanos",
"html_url": "https://github.com/slovanos",
"followers_url": "https://api.github.com/users/slovanos/followers",
"following_url": "https://api.github.com/users/slovanos/following{/other_user}",
"gists_url": "https://api.github.com/users/slovanos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slovanos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slovanos/subscriptions",
"organizations_url": "https://api.github.com/users/slovanos/orgs",
"repos_url": "https://api.github.com/users/slovanos/repos",
"events_url": "https://api.github.com/users/slovanos/events{/privacy}",
"received_events_url": "https://api.github.com/users/slovanos/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-07T17:53:01
| 2024-03-07T18:49:41
| 2024-03-07T18:49:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Some examples, such as the following, contain no code at all, just a README file:
`examples/python-chat-app`
`examples/modelfile-tweetwriter`
Is this how it should be?
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2984/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/231
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/231/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/231/comments
|
https://api.github.com/repos/ollama/ollama/issues/231/events
|
https://github.com/ollama/ollama/pull/231
| 1,825,085,139
|
PR_kwDOJ0Z1Ps5WlXhL
| 231
|
Update discord invite link
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-27T19:43:21
| 2023-07-27T19:43:53
| 2023-07-27T19:43:53
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/231",
"html_url": "https://github.com/ollama/ollama/pull/231",
"diff_url": "https://github.com/ollama/ollama/pull/231.diff",
"patch_url": "https://github.com/ollama/ollama/pull/231.patch",
"merged_at": "2023-07-27T19:43:53"
}
|
Update discord invite link
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/231/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2456
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2456/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2456/comments
|
https://api.github.com/repos/ollama/ollama/issues/2456/events
|
https://github.com/ollama/ollama/issues/2456
| 2,129,221,795
|
I_kwDOJ0Z1Ps5-6Vij
| 2,456
|
Providing unsupported image formats (e.g. `avif`) results in server error/hang
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-02-11T23:26:28
| 2024-02-12T19:16:21
| 2024-02-12T19:16:21
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Providing unsupported image formats causes a hang and error
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2456/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8278
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8278/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8278/comments
|
https://api.github.com/repos/ollama/ollama/issues/8278/events
|
https://github.com/ollama/ollama/issues/8278
| 2,764,814,723
|
I_kwDOJ0Z1Ps6ky7mD
| 8,278
|
Ollama v0.5.4 not response with stream mode when submit tool option
|
{
"login": "maminge",
"id": 64125498,
"node_id": "MDQ6VXNlcjY0MTI1NDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/64125498?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maminge",
"html_url": "https://github.com/maminge",
"followers_url": "https://api.github.com/users/maminge/followers",
"following_url": "https://api.github.com/users/maminge/following{/other_user}",
"gists_url": "https://api.github.com/users/maminge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maminge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maminge/subscriptions",
"organizations_url": "https://api.github.com/users/maminge/orgs",
"repos_url": "https://api.github.com/users/maminge/repos",
"events_url": "https://api.github.com/users/maminge/events{/privacy}",
"received_events_url": "https://api.github.com/users/maminge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2025-01-01T03:30:48
| 2025-01-13T01:50:17
| 2025-01-13T01:50:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama v0.5.4 does not respond in stream mode when the tools option is submitted.
When the reply content is not a tool_call, I would like it to be returned in stream mode.
# Thanks a lot for your efforts!!!
------------------------------------------
### POST DATA to Ollama API: http://localhost:11434/api/chat
------------------------------------------
```
{
  "model": "Qwen2.5-Instruct-q4_1:14b",
  "messages": [
    {
      "role": "system",
      "content": "Determine which agent is best suited to handle the user's request, and transfer the conversation to that agent."
    },
    {
      "role": "user",
      "content": "hello"
    }
  ],
  "stream": true,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "transfer_to_sales",
        "description": "",
        "parameters": {
          "type": "object",
          "properties": {},
          "required": []
        }
      }
    },
    {
      "type": "function",
      "function": {
        "name": "transfer_to_refunds",
        "description": "",
        "parameters": {
          "type": "object",
          "properties": {},
          "required": []
        }
      }
    }
  ]
}
```
------------------------------------------
### Ollama's Response (NOT STREAM MODE):
------------------------------------------
```
{
  "model": "Qwen2.5-Instruct-q4_1:14b",
  "created_at": "2025-01-01T03:24:28.802889952Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How can I assist you today? If you're looking to make a purchase, request a refund, or have any other queries, please let me know so I can transfer you to the right agent."
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 1795719070,
  "load_duration": 11320172,
  "prompt_eval_count": 178,
  "prompt_eval_duration": 82000000,
  "eval_count": 43,
  "eval_duration": 1696000000
}
```
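For reference, when `"stream": true` is honored, the reply arrives as NDJSON chunks rather than a single object. Below is a minimal sketch (the function name and sample chunks are hypothetical, not Ollama client code) of how a client could accumulate such a stream:

```python
import json

def accumulate_stream(ndjson_lines):
    """Join the message.content pieces of an Ollama-style NDJSON
    chat stream into one string, stopping at the final chunk."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Sample chunks shaped like Ollama's streaming responses:
chunks = [
    '{"message": {"role": "assistant", "content": "Hello"}, "done": false}',
    '{"message": {"role": "assistant", "content": "!"}, "done": true}',
]
print(accumulate_stream(chunks))  # Hello!
```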
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8278/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5663
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5663/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5663/comments
|
https://api.github.com/repos/ollama/ollama/issues/5663/events
|
https://github.com/ollama/ollama/issues/5663
| 2,406,698,835
|
I_kwDOJ0Z1Ps6Pc09T
| 5,663
|
Error: llama runner process has terminated: signal: abort trap error:vocab size mismatch.
|
{
"login": "asap-blocky",
"id": 147228147,
"node_id": "U_kgDOCMaF8w",
"avatar_url": "https://avatars.githubusercontent.com/u/147228147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asap-blocky",
"html_url": "https://github.com/asap-blocky",
"followers_url": "https://api.github.com/users/asap-blocky/followers",
"following_url": "https://api.github.com/users/asap-blocky/following{/other_user}",
"gists_url": "https://api.github.com/users/asap-blocky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asap-blocky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asap-blocky/subscriptions",
"organizations_url": "https://api.github.com/users/asap-blocky/orgs",
"repos_url": "https://api.github.com/users/asap-blocky/repos",
"events_url": "https://api.github.com/users/asap-blocky/events{/privacy}",
"received_events_url": "https://api.github.com/users/asap-blocky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-07-13T04:42:19
| 2024-08-04T08:46:57
| 2024-07-13T20:56:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
While attempting to run my fine-tuned model using the Ollama library, I got this error message: "Error: llama runner process has terminated: signal: abort trap error:vocab size mismatch."
### Model and Environment:
- The model was fine-tuned using the FastLanguageModel from the unsloth library and saved in the GGUF format.
- The tokenizer was applied using a chat template for formatting inputs.
- The model and tokenizer were loaded correctly, and the inference process was initiated.
### Error Occurrence:
- The error occurs immediately after issuing the **ollama run model-name** command.
- The detailed logs indicate a vocabulary size mismatch between the model and the tokenizer.
### Model Metadata:
- The model’s configuration (config.json) indicates a vocabulary size of 32064.
- The tokenizer configuration (tokenizer.json) and metadata logs show different values, leading to the mismatch.
### Steps Taken:
1. Verified the consistency of vocabulary size across all relevant configuration files.
2. Attempted to resize token embeddings to match the tokenizer’s vocabulary size during model loading.
3. Checked for any missing or additional special tokens in the tokenizer configuration.
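The consistency check in step 1 can be sketched as a small helper (all names and the toy numbers below are hypothetical) that compares the vocab size declared in config.json with the tokenizer's effective vocabulary, including added special tokens:

```python
def vocab_sizes_match(config, tokenizer_vocab, added_tokens=()):
    """Compare the model's declared vocab_size with the tokenizer's
    effective size (base vocabulary plus added special tokens)."""
    effective = len(tokenizer_vocab) + len(added_tokens)
    return config["vocab_size"] == effective, config["vocab_size"], effective

# Toy example: the model declares 32064 entries, the tokenizer holds
# 32000 base tokens plus 64 added special tokens.
ok, declared, effective = vocab_sizes_match(
    {"vocab_size": 32064}, range(32000), added_tokens=range(64)
)
print(ok, declared, effective)  # True 32064 32064
```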
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
ollama version is 0.2.2 Warning: client version is 0.2.1
|
{
"login": "asap-blocky",
"id": 147228147,
"node_id": "U_kgDOCMaF8w",
"avatar_url": "https://avatars.githubusercontent.com/u/147228147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asap-blocky",
"html_url": "https://github.com/asap-blocky",
"followers_url": "https://api.github.com/users/asap-blocky/followers",
"following_url": "https://api.github.com/users/asap-blocky/following{/other_user}",
"gists_url": "https://api.github.com/users/asap-blocky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asap-blocky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asap-blocky/subscriptions",
"organizations_url": "https://api.github.com/users/asap-blocky/orgs",
"repos_url": "https://api.github.com/users/asap-blocky/repos",
"events_url": "https://api.github.com/users/asap-blocky/events{/privacy}",
"received_events_url": "https://api.github.com/users/asap-blocky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5663/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1084
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1084/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1084/comments
|
https://api.github.com/repos/ollama/ollama/issues/1084/events
|
https://github.com/ollama/ollama/issues/1084
| 1,988,854,004
|
I_kwDOJ0Z1Ps52i4D0
| 1,084
|
Adding ollama serve to run as a daemon
|
{
"login": "rutsam",
"id": 14162212,
"node_id": "MDQ6VXNlcjE0MTYyMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/14162212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rutsam",
"html_url": "https://github.com/rutsam",
"followers_url": "https://api.github.com/users/rutsam/followers",
"following_url": "https://api.github.com/users/rutsam/following{/other_user}",
"gists_url": "https://api.github.com/users/rutsam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rutsam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rutsam/subscriptions",
"organizations_url": "https://api.github.com/users/rutsam/orgs",
"repos_url": "https://api.github.com/users/rutsam/repos",
"events_url": "https://api.github.com/users/rutsam/events{/privacy}",
"received_events_url": "https://api.github.com/users/rutsam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-11-11T09:09:02
| 2023-12-04T23:45:25
| 2023-12-04T23:45:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have been experimenting with Ollama and noticed it is heavily inspired by Docker. However, I run it on a server where I do not use the desktop version, so it would be better if an option were added to **run the ollama server as a daemon**, in the same fashion as Docker Compose, symbolized by **a parameter -d**:
```
ollama serve -d
```
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1084/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6754
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6754/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6754/comments
|
https://api.github.com/repos/ollama/ollama/issues/6754/events
|
https://github.com/ollama/ollama/pull/6754
| 2,519,740,089
|
PR_kwDOJ0Z1Ps57KYfc
| 6,754
|
Added QodeAssist link to README.md
|
{
"login": "Palm1r",
"id": 9195189,
"node_id": "MDQ6VXNlcjkxOTUxODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9195189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Palm1r",
"html_url": "https://github.com/Palm1r",
"followers_url": "https://api.github.com/users/Palm1r/followers",
"following_url": "https://api.github.com/users/Palm1r/following{/other_user}",
"gists_url": "https://api.github.com/users/Palm1r/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Palm1r/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Palm1r/subscriptions",
"organizations_url": "https://api.github.com/users/Palm1r/orgs",
"repos_url": "https://api.github.com/users/Palm1r/repos",
"events_url": "https://api.github.com/users/Palm1r/events{/privacy}",
"received_events_url": "https://api.github.com/users/Palm1r/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-11T13:22:04
| 2024-09-11T20:19:49
| 2024-09-11T20:19:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6754",
"html_url": "https://github.com/ollama/ollama/pull/6754",
"diff_url": "https://github.com/ollama/ollama/pull/6754.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6754.patch",
"merged_at": "2024-09-11T20:19:49"
}
|
QodeAssist uses Ollama to provide an AI-powered coding assistant plugin for Qt Creator.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6754/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4124
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4124/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4124/comments
|
https://api.github.com/repos/ollama/ollama/issues/4124/events
|
https://github.com/ollama/ollama/issues/4124
| 2,277,528,591
|
I_kwDOJ0Z1Ps6HwFQP
| 4,124
|
`/api/embeddings` responds with 500 before Ollama is initialized - handle max queued requests failure better
|
{
"login": "maximiliangugler",
"id": 90111898,
"node_id": "MDQ6VXNlcjkwMTExODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/90111898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maximiliangugler",
"html_url": "https://github.com/maximiliangugler",
"followers_url": "https://api.github.com/users/maximiliangugler/followers",
"following_url": "https://api.github.com/users/maximiliangugler/following{/other_user}",
"gists_url": "https://api.github.com/users/maximiliangugler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maximiliangugler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maximiliangugler/subscriptions",
"organizations_url": "https://api.github.com/users/maximiliangugler/orgs",
"repos_url": "https://api.github.com/users/maximiliangugler/repos",
"events_url": "https://api.github.com/users/maximiliangugler/events{/privacy}",
"received_events_url": "https://api.github.com/users/maximiliangugler/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-05-03T11:56:55
| 2024-05-05T17:53:45
| 2024-05-05T17:53:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello,
please forgive the ambiguity of this report.
The issue I am encountering is the following:
Before updating to 0.1.33, I was running version 0.1.32.
I was running the server with embedding models to generate embeddings, using the langchain OllamaEmbeddings class.
I wrote a custom wrapper for asynchronous embeddings to speed up the time it takes to embed documents:
https://github.com/maxggl/rag-experiment/blob/main/get_embedding_function.py
```
import asyncio
import aiohttp
from langchain_community.embeddings.ollama import OllamaEmbeddings


class AsyncOllamaEmbedder:
    def __init__(self, model='avr/sfr-embedding-mistral:q8_0', base_url='http://localhost:11434'):
        self.sync_embeddings = OllamaEmbeddings(model=model)
        self.base_url = f"{base_url}/api/embeddings"
        self.session = None

    async def init_session(self):
        if self.session is None or self.session.closed:
            self.session = await aiohttp.ClientSession().__aenter__()

    async def close_session(self):
        if self.session and not self.session.closed:
            await self.session.__aexit__(None, None, None)

    def embed_documents(self, texts):
        # Synchronous wrapper for asynchronous embedding
        return self.sync_call(self.async_embed_documents, texts)

    def embed_query(self, query):
        # Single query synchronous wrapper for asynchronous embedding
        return self.sync_call(self.async_embed_documents, [query])[0]

    async def async_embed_documents(self, texts):
        # Initialize session right before use
        await self.init_session()
        tasks = [self.send_embedding_request(text) for text in texts]
        results = await asyncio.gather(*tasks)
        await self.close_session()
        return results

    async def send_embedding_request(self, text):
        await self.init_session()  # Ensure session is available
        async with self.session.post(self.base_url, json={'model': self.sync_embeddings.model, 'prompt': text}) as response:
            if response.status == 200:
                data = await response.json()
                return data.get('embedding')
            else:
                return None  # Handle errors as needed

    def sync_call(self, async_func, *args):
        loop = asyncio.get_event_loop()
        if loop.is_running():
            new_loop = asyncio.new_event_loop()
            result = new_loop.run_until_complete(async_func(*args))
            new_loop.close()
            return result
        else:
            return loop.run_until_complete(async_func(*args))


def get_embedding_function():
    return AsyncOllamaEmbedder()
```
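As an aside, the event-loop juggling in `sync_call` is a common source of "event loop is already running" errors; a simpler pattern is to let `asyncio.run` own the loop. A minimal sketch (with a stand-in coroutine instead of the real POST to /api/embeddings):

```python
import asyncio

async def fake_embed(text):
    # Stand-in for the real HTTP request to /api/embeddings.
    await asyncio.sleep(0)
    return [float(len(text))]

def embed_documents(texts):
    """Fan out embedding requests concurrently from synchronous code.
    asyncio.run creates and tears down the event loop safely."""
    async def gather_all():
        return await asyncio.gather(*(fake_embed(t) for t in texts))
    return asyncio.run(gather_all())

print(embed_documents(["hi", "hello"]))  # [[2.0], [5.0]]
```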
With v. 0.1.32 everything was working fine and all requests returned 200 after the model loaded:
```
time=2024-05-03T13:46:20.391+02:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-05-03T13:46:20.405+02:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=13 layers=13 required="691.1 MiB" used="691.1 MiB" available="9073.0 MiB" kv="6.0 MiB" fulloffload="12.0 MiB" partialoffload="12.0 MiB"
time=2024-05-03T13:46:20.405+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-03T13:46:20.409+02:00 level=INFO source=server.go:264 msg="starting llama server" cmd="C:\\Users\\MAXIMI~1\\AppData\\Local\\Temp\\ollama1382626371\\runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\Maximilian\\.ollama\\models\\blobs\\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 13 --port 53479"
time=2024-05-03T13:46:20.430+02:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"17128","timestamp":1714736780}
{"build":2679,"commit":"7593639","function":"wmain","level":"INFO","line":2820,"msg":"build info","tid":"17128","timestamp":1714736780}
{"function":"wmain","level":"INFO","line":2827,"msg":"system info","n_threads":12,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"17128","timestamp":1714736780,"total_threads":24}
llama_model_loader: loaded meta data with 24 key-value pairs and 112 tensors from C:\Users\Maximilian\.ollama\models\blobs\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = nomic-bert
llama_model_loader: - kv 1: general.name str = nomic-embed-text-v1.5
llama_model_loader: - kv 2: nomic-bert.block_count u32 = 12
llama_model_loader: - kv 3: nomic-bert.context_length u32 = 2048
llama_model_loader: - kv 4: nomic-bert.embedding_length u32 = 768
llama_model_loader: - kv 5: nomic-bert.feed_forward_length u32 = 3072
llama_model_loader: - kv 6: nomic-bert.attention.head_count u32 = 12
llama_model_loader: - kv 7: nomic-bert.attention.layer_norm_epsilon f32 = 0.000000
llama_model_loader: - kv 8: general.file_type u32 = 1
llama_model_loader: - kv 9: nomic-bert.attention.causal bool = false
llama_model_loader: - kv 10: nomic-bert.pooling_type u32 = 1
llama_model_loader: - kv 11: nomic-bert.rope.freq_base f32 = 1000.000000
llama_model_loader: - kv 12: tokenizer.ggml.token_type_count u32 = 2
llama_model_loader: - kv 13: tokenizer.ggml.bos_token_id u32 = 101
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 102
llama_model_loader: - kv 15: tokenizer.ggml.model str = bert
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,30522] = ["[PAD]", "[unused0]", "[unused1]", "...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,30522] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,30522] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 100
llama_model_loader: - kv 20: tokenizer.ggml.seperator_token_id u32 = 102
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.ggml.cls_token_id u32 = 101
llama_model_loader: - kv 23: tokenizer.ggml.mask_token_id u32 = 103
llama_model_loader: - type f32: 51 tensors
llama_model_loader: - type f16: 61 tensors
llm_load_vocab: mismatch in special tokens definition ( 7104/30522 vs 5/30522 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = nomic-bert
llm_load_print_meta: vocab type = WPM
llm_load_print_meta: n_vocab = 30522
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_embd = 768
llm_load_print_meta: n_head = 12
llm_load_print_meta: n_head_kv = 12
llm_load_print_meta: n_layer = 12
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 768
llm_load_print_meta: n_embd_v_gqa = 768
llm_load_print_meta: f_norm_eps = 1.0e-12
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 3072
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 0
llm_load_print_meta: pooling type = 1
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 137M
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 136.73 M
llm_load_print_meta: model size = 260.86 MiB (16.00 BPW)
llm_load_print_meta: general.name = nomic-embed-text-v1.5
llm_load_print_meta: BOS token = 101 '[CLS]'
llm_load_print_meta: EOS token = 102 '[SEP]'
llm_load_print_meta: UNK token = 100 '[UNK]'
llm_load_print_meta: SEP token = 102 '[SEP]'
llm_load_print_meta: PAD token = 0 '[PAD]'
llm_load_print_meta: CLS token = 101 '[CLS]'
llm_load_print_meta: MASK token = 103 '[MASK]'
llm_load_print_meta: LF token = 0 '[PAD]'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3080, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.09 MiB
llm_load_tensors: offloading 12 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 13/13 layers to GPU
llm_load_tensors: CPU buffer size = 44.72 MiB
llm_load_tensors: CUDA0 buffer size = 216.15 MiB
.......................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 1000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 72.00 MiB
llama_new_context_with_model: KV self size = 72.00 MiB, K (f16): 36.00 MiB, V (f16): 36.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.00 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 23.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 3.50 MiB
llama_new_context_with_model: graph nodes = 453
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"17128","timestamp":1714736781}
{"function":"initialize","level":"INFO","line":460,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"17128","timestamp":1714736781}
{"function":"wmain","level":"INFO","line":3064,"msg":"model loaded","tid":"17128","timestamp":1714736781}
{"function":"wmain","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"23","port":"53479","tid":"17128","timestamp":1714736782}
{"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"17128","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53484,"status":200,"tid":"26436","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"17128","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":3,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53483,"status":200,"tid":"25184","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":4,"tid":"17128","timestamp":1714736782}{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53481,"status":200,"tid":"18948","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53486,"status":200,"tid":"10400","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53485,"status":200,"tid":"5640","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":5,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53498,"status":200,"tid":"20792","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53498,"status":200,"tid":"20792","timestamp":1714736782}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":7,"tid":"17128","timestamp":1714736782}
{"function":"update_slots","level":"INFO","line":1840,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":7,"tid":"17128","timestamp":1714736782}
{"function":"update_slots","level":"INFO","line":1648,"msg":"slot released","n_cache_tokens":183,"n_ctx":2048,"n_past":183,"n_system_tokens":0,"slot_id":0,"task_id":7,"tid":"17128","timestamp":1714736782,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2741,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":53498,"status":200,"tid":"20792","timestamp":1714736782}
[GIN] 2024/05/03 - 13:46:22 | 200 | 2.2024646s | 127.0.0.1 | POST "/api/embeddings"
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":10,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53498,"status":200,"tid":"20792","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":11,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53498,"status":200,"tid":"20792","timestamp":1714736782}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":12,"tid":"17128","timestamp":1714736782}
{"function":"update_slots","level":"INFO","line":1840,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":12,"tid":"17128","timestamp":1714736782}
{"function":"update_slots","level":"INFO","line":1648,"msg":"slot released","n_cache_tokens":22,"n_ctx":2048,"n_past":22,"n_system_tokens":0,"slot_id":0,"task_id":12,"tid":"17128","timestamp":1714736782,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2741,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":53549,"status":200,"tid":"22736","timestamp":1714736782}
```
After updating to 0.1.33, however, the requests appear to be processed before the model has finished loading: the server responds with 500 anyway. At least that is how it looks in the log:
```
time=2024-05-03T13:51:30.047+02:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
[GIN] 2024/05/03 - 13:51:30 | 500 | 11.5625ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 11.5625ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 11.5625ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 12.6713ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 12.7613ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 13.2678ms | 127.0.0.1 | POST "/api/embeddings"
time=2024-05-03T13:51:30.057+02:00 level=INFO source=gpu.go:101 msg="detected GPUs" library=C:\Users\Maximilian\AppData\Local\Programs\Ollama\cudart64_110.dll count=1
time=2024-05-03T13:51:30.057+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[GIN] 2024/05/03 - 13:51:30 | 500 | 12.8669ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 13.386ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 14.3949ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 13.8924ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 14.3949ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 14.9223ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.0123ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.0123ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.1662ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.6853ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.6853ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.6853ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.6853ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.1662ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.672ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.1662ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.1662ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 16.4901ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.6853ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 15.672ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 17.0477ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 16.6936ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 16.6936ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 16.4901ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 17.0477ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 17.0477ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 16.6936ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 18.0693ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 18.1391ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.329ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 18.1391ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 8.2953ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.476ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.5942ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 18.1391ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 7.6967ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 18.073ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 5.9696ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.476ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 17.0415ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 5.3425ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.069ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 8.9295ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.675ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.682ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 5.4222ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 5.9277ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.7458ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 7.2516ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 7.5524ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 7.5524ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 7.5524ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 7.5524ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.4886ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.9948ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.4886ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.4886ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 6.9948ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 8.8933ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 7.2587ms | 127.0.0.1 | POST "/api/embeddings"
....
[GIN] 2024/05/03 - 13:51:30 | 500 | 2.6255ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 2.1017ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 | 1.5844ms | 127.0.0.1 | POST "/api/embeddings"
time=2024-05-03T13:51:30.281+02:00 level=WARN source=memory.go:18 msg="requested context length is greater than model max context length" requested=8192 model=2048
time=2024-05-03T13:51:30.281+02:00 level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=13 memory.available="9073.0 MiB" memory.required.full="735.9 MiB" memory.required.partial="735.9 MiB" memory.required.kv="6.0 MiB" memory.weights.total="260.9 MiB" memory.weights.repeating="216.1 MiB" memory.weights.nonrepeating="44.7 MiB" memory.graph.full="12.0 MiB" memory.graph.partial="12.0 MiB"
time=2024-05-03T13:51:30.281+02:00 level=WARN source=server.go:76 msg="requested context length is greater than the model's training context window size" requested=8192 "training size"=2048
time=2024-05-03T13:51:30.282+02:00 level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=13 memory.available="9073.0 MiB" memory.required.full="789.9 MiB" memory.required.partial="789.9 MiB" memory.required.kv="24.0 MiB" memory.weights.total="260.9 MiB" memory.weights.repeating="216.1 MiB" memory.weights.nonrepeating="44.7 MiB" memory.graph.full="48.0 MiB" memory.graph.partial="48.0 MiB"
time=2024-05-03T13:51:30.282+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-03T13:51:30.286+02:00 level=INFO source=server.go:289 msg="starting llama server" cmd="C:\\Users\\Maximilian\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\Maximilian\\.ollama\\models\\blobs\\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 13 --parallel 1 --port 53915"
time=2024-05-03T13:51:30.305+02:00 level=INFO source=sched.go:340 msg="loaded runners" count=1
time=2024-05-03T13:51:30.305+02:00 level=INFO source=server.go:432 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2606,"msg":"logging to file is disabled.","tid":"9888","timestamp":1714737090}
{"build":2770,"commit":"952d03d","function":"wmain","level":"INFO","line":2823,"msg":"build info","tid":"9888","timestamp":1714737090}
{"function":"wmain","level":"INFO","line":2830,"msg":"system info","n_threads":12,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | ","tid":"9888","timestamp":1714737090,"total_threads":24}
llama_model_loader: loaded meta data with 24 key-value pairs and 112 tensors from C:\Users\Maximilian\.ollama\models\blobs\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = nomic-bert
llama_model_loader: - kv 1: general.name str = nomic-embed-text-v1.5
llama_model_loader: - kv 2: nomic-bert.block_count u32 = 12
llama_model_loader: - kv 3: nomic-bert.context_length u32 = 2048
llama_model_loader: - kv 4: nomic-bert.embedding_length u32 = 768
llama_model_loader: - kv 5: nomic-bert.feed_forward_length u32 = 3072
llama_model_loader: - kv 6: nomic-bert.attention.head_count u32 = 12
llama_model_loader: - kv 7: nomic-bert.attention.layer_norm_epsilon f32 = 0.000000
llama_model_loader: - kv 8: general.file_type u32 = 1
llama_model_loader: - kv 9: nomic-bert.attention.causal bool = false
llama_model_loader: - kv 10: nomic-bert.pooling_type u32 = 1
llama_model_loader: - kv 11: nomic-bert.rope.freq_base f32 = 1000.000000
llama_model_loader: - kv 12: tokenizer.ggml.token_type_count u32 = 2
llama_model_loader: - kv 13: tokenizer.ggml.bos_token_id u32 = 101
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 102
llama_model_loader: - kv 15: tokenizer.ggml.model str = bert
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,30522] = ["[PAD]", "[unused0]", "[unused1]", "...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,30522] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,30522] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 100
llama_model_loader: - kv 20: tokenizer.ggml.seperator_token_id u32 = 102
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.ggml.cls_token_id u32 = 101
llama_model_loader: - kv 23: tokenizer.ggml.mask_token_id u32 = 103
llama_model_loader: - type f32: 51 tensors
llama_model_loader: - type f16: 61 tensors
llm_load_vocab: mismatch in special tokens definition ( 7104/30522 vs 5/30522 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = nomic-bert
llm_load_print_meta: vocab type = WPM
llm_load_print_meta: n_vocab = 30522
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_embd = 768
llm_load_print_meta: n_head = 12
llm_load_print_meta: n_head_kv = 12
llm_load_print_meta: n_layer = 12
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 768
llm_load_print_meta: n_embd_v_gqa = 768
llm_load_print_meta: f_norm_eps = 1.0e-12
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 3072
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 0
llm_load_print_meta: pooling type = 1
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 137M
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 136.73 M
llm_load_print_meta: model size = 260.86 MiB (16.00 BPW)
llm_load_print_meta: general.name = nomic-embed-text-v1.5
llm_load_print_meta: BOS token = 101 '[CLS]'
llm_load_print_meta: EOS token = 102 '[SEP]'
llm_load_print_meta: UNK token = 100 '[UNK]'
llm_load_print_meta: SEP token = 102 '[SEP]'
llm_load_print_meta: PAD token = 0 '[PAD]'
llm_load_print_meta: CLS token = 101 '[CLS]'
llm_load_print_meta: MASK token = 103 '[MASK]'
llm_load_print_meta: LF token = 0 '[PAD]'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3080, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: offloading 12 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 13/13 layers to GPU
llm_load_tensors: CPU buffer size = 44.72 MiB
llm_load_tensors: CUDA0 buffer size = 216.15 MiB
.......................................................
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 1000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 288.00 MiB
llama_new_context_with_model: KV self size = 288.00 MiB, K (f16): 144.00 MiB, V (f16): 144.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.00 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 23.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 3.50 MiB
llama_new_context_with_model: graph nodes = 453
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"9888","timestamp":1714737092}
{"function":"initialize","level":"INFO","line":460,"msg":"new slot","n_ctx_slot":8192,"slot_id":0,"tid":"9888","timestamp":1714737092}
{"function":"wmain","level":"INFO","line":3067,"msg":"model loaded","tid":"9888","timestamp":1714737092}
{"function":"wmain","hostname":"127.0.0.1","level":"INFO","line":3270,"msg":"HTTP server listening","n_threads_http":"23","port":"53915","tid":"9888","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1581,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":3,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53928,"status":200,"tid":"8144","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53930,"status":200,"tid":"9528","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53926,"status":200,"tid":"5772","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":4,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53927,"status":200,"tid":"9092","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53929,"status":200,"tid":"20912","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":5,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":7,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":8,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":9,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":10,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.7901ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.8137ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.3891ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.3891ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.3891ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.3891ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.3888ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.8936ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.8936ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.3888ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.8936ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.2997ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 6.9595ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 6.4547ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 6.4547ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.2997ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 6.4547ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 6.4547ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 6.4547ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 6.9595ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 6.4547ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.2997ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 6.4547ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 6.4547ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 2.3236ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 3.1593ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 3.1593ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 3.6841ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2082ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2082ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2082ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2082ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2082ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 3.65ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2082ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 3.65ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2012ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.7594ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2012ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2012ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2012ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2544ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2012ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 3.6973ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2012ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2012ms | 127.0.0.1 | POST "/api/embeddings"
{"function":"update_slots","level":"INFO","line":1843,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":9,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":12,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1651,"msg":"slot released","n_cache_tokens":201,"n_ctx":8192,"n_past":201,"n_system_tokens":0,"slot_id":0,"task_id":9,"tid":"9888","timestamp":1714737092,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":14,"tid":"9888","timestamp":1714737092}
[GIN] 2024/05/03 - 13:51:32 | 200 | 2.2254284s | 127.0.0.1 | POST "/api/embeddings"
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":15,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":16,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53988,"status":200,"tid":"11896","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":17,"tid":"9888","timestamp":1714737092}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":18,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53988,"status":200,"tid":"11896","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1843,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":18,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":20,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1651,"msg":"slot released","n_cache_tokens":183,"n_ctx":8192,"n_past":183,"n_system_tokens":0,"slot_id":0,"task_id":18,"tid":"9888","timestamp":1714737092,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53988,"status":200,"tid":"11896","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":22,"tid":"9888","timestamp":1714737092}
```
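Until the server blocks requests while the runner starts, one client-side workaround is to retry the embeddings call with backoff whenever it fails. This is a minimal sketch of such a retry helper, not part of Ollama itself; the callable passed in is assumed to be an HTTP POST to `/api/embeddings` that raises on a 500 response:

```python
import time


def retry_until_ready(call, retries=10, delay=0.5, backoff=2.0):
    """Retry `call` while it raises, sleeping with exponential backoff.

    `call` is expected to raise (e.g. on an HTTP 500 while the model is
    still loading) and to return the response body once the model is up.
    """
    last_err = None
    for _ in range(retries):
        try:
            return call()
        except Exception as err:  # e.g. an HTTPError raised on status 500
            last_err = err
            time.sleep(delay)
            delay *= backoff
    # All attempts failed; surface the last error to the caller.
    raise last_err
```

With this wrapper, the burst of 500s during model startup would be absorbed by a few retries instead of failing the whole batch.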
Thank you for your help!
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.1.33
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4124/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6593
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6593/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6593/comments
|
https://api.github.com/repos/ollama/ollama/issues/6593/events
|
https://github.com/ollama/ollama/issues/6593
| 2,501,234,628
|
I_kwDOJ0Z1Ps6VFc_E
| 6,593
|
Get supported models with API
|
{
"login": "angelozerr",
"id": 1932211,
"node_id": "MDQ6VXNlcjE5MzIyMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1932211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/angelozerr",
"html_url": "https://github.com/angelozerr",
"followers_url": "https://api.github.com/users/angelozerr/followers",
"following_url": "https://api.github.com/users/angelozerr/following{/other_user}",
"gists_url": "https://api.github.com/users/angelozerr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/angelozerr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/angelozerr/subscriptions",
"organizations_url": "https://api.github.com/users/angelozerr/orgs",
"repos_url": "https://api.github.com/users/angelozerr/repos",
"events_url": "https://api.github.com/users/angelozerr/events{/privacy}",
"received_events_url": "https://api.github.com/users/angelozerr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-02T15:36:12
| 2024-09-02T22:02:26
| 2024-09-02T22:02:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The API provides the capability to get the list of local models, but I have not found an API to get the supported models that we can see on the HTML page at https://ollama.com/library?q=l&sort=featured
It would be nice if the API could provide a list of supported models.
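For the local half of this, a minimal sketch of working with the documented `GET /api/tags` response shape (the model entries below are illustrative sample data, not real output):

```python
import json

# Hardcoded sample in the shape returned by Ollama's GET /api/tags;
# the names and sizes here are made up for illustration.
sample = json.loads("""
{"models": [
  {"name": "llama2:latest", "size": 3826793677},
  {"name": "mistral:7b", "size": 4109865159}
]}
""")

# Extract just the model names from the response payload.
names = [m["name"] for m in sample["models"]]
print(names)
```

As the issue notes, there is no comparable JSON endpoint for the hosted library; that list is only exposed as HTML.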
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6593/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/648
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/648/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/648/comments
|
https://api.github.com/repos/ollama/ollama/issues/648/events
|
https://github.com/ollama/ollama/issues/648
| 1,919,580,844
|
I_kwDOJ0Z1Ps5yanqs
| 648
|
Model Parameters Not Getting Set
|
{
"login": "fmackenzie",
"id": 38498536,
"node_id": "MDQ6VXNlcjM4NDk4NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/38498536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fmackenzie",
"html_url": "https://github.com/fmackenzie",
"followers_url": "https://api.github.com/users/fmackenzie/followers",
"following_url": "https://api.github.com/users/fmackenzie/following{/other_user}",
"gists_url": "https://api.github.com/users/fmackenzie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fmackenzie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fmackenzie/subscriptions",
"organizations_url": "https://api.github.com/users/fmackenzie/orgs",
"repos_url": "https://api.github.com/users/fmackenzie/repos",
"events_url": "https://api.github.com/users/fmackenzie/events{/privacy}",
"received_events_url": "https://api.github.com/users/fmackenzie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2023-09-29T16:28:16
| 2023-10-02T19:50:10
| 2023-10-02T19:50:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
From what I can tell, parameters set in the Modelfile are not being applied properly. Taking the mario Modelfile as an example and adding an EMBED and a few PARAMETER lines, the server output suggests the PARAMETER values fail to convert to the appropriate types, and thus are not actually set as configured.
Here's the sample Modelfile:
```
FROM llama2
EMBED /data/ollama/data/sample-content/*.txt
PARAMETER temperature 0.8
# PARAMETER num_thread 2
PARAMETER num_ctx 4096
PARAMETER num_gpu 1
SYSTEM """
You are Mario from super mario bros, acting as an assistant.
"""
```
When the new model is created against the running server, the following log lines indicate that the values fail to convert:
```
2023/09/29 12:23:06 types.go:234: could not convert model parameter num_ctx to int, skipped
2023/09/29 12:23:06 types.go:234: could not convert model parameter num_gpu to int, skipped
2023/09/29 12:23:06 types.go:247: could not convert model parameter temperature to float32, skipped
```
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/648/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/648/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2447
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2447/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2447/comments
|
https://api.github.com/repos/ollama/ollama/issues/2447/events
|
https://github.com/ollama/ollama/pull/2447
| 2,128,901,871
|
PR_kwDOJ0Z1Ps5mkovQ
| 2,447
|
Add Page Assist to the community integrations
|
{
"login": "n4ze3m",
"id": 39720973,
"node_id": "MDQ6VXNlcjM5NzIwOTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/39720973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n4ze3m",
"html_url": "https://github.com/n4ze3m",
"followers_url": "https://api.github.com/users/n4ze3m/followers",
"following_url": "https://api.github.com/users/n4ze3m/following{/other_user}",
"gists_url": "https://api.github.com/users/n4ze3m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n4ze3m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n4ze3m/subscriptions",
"organizations_url": "https://api.github.com/users/n4ze3m/orgs",
"repos_url": "https://api.github.com/users/n4ze3m/repos",
"events_url": "https://api.github.com/users/n4ze3m/events{/privacy}",
"received_events_url": "https://api.github.com/users/n4ze3m/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-02-11T08:59:21
| 2024-02-20T19:03:58
| 2024-02-20T19:03:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2447",
"html_url": "https://github.com/ollama/ollama/pull/2447",
"diff_url": "https://github.com/ollama/ollama/pull/2447.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2447.patch",
"merged_at": "2024-02-20T19:03:58"
}
|
Hey, I'd like to share a Chrome extension project I've been working on, `Page Assist`, for community integration. It offers a sidebar and a web UI for Ollama :). Please review this PR. Thank you.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2447/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/508
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/508/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/508/comments
|
https://api.github.com/repos/ollama/ollama/issues/508/events
|
https://github.com/ollama/ollama/pull/508
| 1,891,322,441
|
PR_kwDOJ0Z1Ps5aEQFY
| 508
|
create the blobs directory correctly
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-11T21:53:52
| 2023-09-11T21:54:52
| 2023-09-11T21:54:52
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/508",
"html_url": "https://github.com/ollama/ollama/pull/508",
"diff_url": "https://github.com/ollama/ollama/pull/508.diff",
"patch_url": "https://github.com/ollama/ollama/pull/508.patch",
"merged_at": "2023-09-11T21:54:52"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/508/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5938
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5938/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5938/comments
|
https://api.github.com/repos/ollama/ollama/issues/5938/events
|
https://github.com/ollama/ollama/issues/5938
| 2,428,849,521
|
I_kwDOJ0Z1Ps6QxU1x
| 5,938
|
Error: could not connect to ollama app, is it running?
|
{
"login": "wwjCMP",
"id": 32979859,
"node_id": "MDQ6VXNlcjMyOTc5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwjCMP",
"html_url": "https://github.com/wwjCMP",
"followers_url": "https://api.github.com/users/wwjCMP/followers",
"following_url": "https://api.github.com/users/wwjCMP/following{/other_user}",
"gists_url": "https://api.github.com/users/wwjCMP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwjCMP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwjCMP/subscriptions",
"organizations_url": "https://api.github.com/users/wwjCMP/orgs",
"repos_url": "https://api.github.com/users/wwjCMP/repos",
"events_url": "https://api.github.com/users/wwjCMP/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwjCMP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 14
| 2024-07-25T02:49:43
| 2024-07-26T11:02:04
| 2024-07-26T11:02:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Environment="OLLAMA_MODELS=/home/try/ollama/models"
After changing the OLLAMA_MODELS environment variable in the systemd unit, I can no longer connect to Ollama. If I revert the change, Ollama runs again. What is the reason?
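A common cause (an assumption worth checking, not confirmed from this report) is permissions: on a Linux systemd install the service runs as the `ollama` user, which may not be able to read or write a directory under another user's home. A sketch of the usual fix:

```shell
# Give the service user access to the new models directory,
# then reload and restart the service.
sudo mkdir -p /home/try/ollama/models
sudo chown -R ollama:ollama /home/try/ollama/models
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

If the service still fails, `journalctl -u ollama` usually shows the underlying error.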
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.8
|
{
"login": "wwjCMP",
"id": 32979859,
"node_id": "MDQ6VXNlcjMyOTc5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwjCMP",
"html_url": "https://github.com/wwjCMP",
"followers_url": "https://api.github.com/users/wwjCMP/followers",
"following_url": "https://api.github.com/users/wwjCMP/following{/other_user}",
"gists_url": "https://api.github.com/users/wwjCMP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwjCMP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwjCMP/subscriptions",
"organizations_url": "https://api.github.com/users/wwjCMP/orgs",
"repos_url": "https://api.github.com/users/wwjCMP/repos",
"events_url": "https://api.github.com/users/wwjCMP/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwjCMP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5938/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6838
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6838/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6838/comments
|
https://api.github.com/repos/ollama/ollama/issues/6838/events
|
https://github.com/ollama/ollama/issues/6838
| 2,531,080,329
|
I_kwDOJ0Z1Ps6W3TiJ
| 6,838
|
Old Context Information fetched
|
{
"login": "atul-siriusai",
"id": 172748914,
"node_id": "U_kgDOCkvwcg",
"avatar_url": "https://avatars.githubusercontent.com/u/172748914?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atul-siriusai",
"html_url": "https://github.com/atul-siriusai",
"followers_url": "https://api.github.com/users/atul-siriusai/followers",
"following_url": "https://api.github.com/users/atul-siriusai/following{/other_user}",
"gists_url": "https://api.github.com/users/atul-siriusai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atul-siriusai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atul-siriusai/subscriptions",
"organizations_url": "https://api.github.com/users/atul-siriusai/orgs",
"repos_url": "https://api.github.com/users/atul-siriusai/repos",
"events_url": "https://api.github.com/users/atul-siriusai/events{/privacy}",
"received_events_url": "https://api.github.com/users/atul-siriusai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 14
| 2024-09-17T12:50:03
| 2024-11-29T23:55:51
| 2024-09-18T00:23:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I am currently working on a Retrieval-Augmented Generation (RAG) application using LLaMA 3.1 70B. The workflow involves a set of documents in markdown format and an Excel sheet containing specific information that needs to be extracted from these documents. The process iterates over each row, dynamically generating a prompt and retrieving the relevant record along with its citation.
However, I am encountering an issue when processing subsequent documents. It appears that the model is retaining context from the previous document and using it to answer queries for the new document. This is causing inconsistencies in the responses and affecting the accuracy of the extraction.
Any insights on how to ensure the model resets context between documents would be appreciated.
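One common cause of this (an assumption about the setup, not something confirmed from the report) is reusing the `context` field from a previous response, or accumulating `messages` across documents. A minimal sketch of building a fresh, stateless chat payload per document; the prompt wording and model tag are illustrative placeholders:

```python
def make_payload(model, system_prompt, document_text, question):
    # Build a fresh message list for every document: nothing from the
    # previous document is carried over, so the model only sees this one.
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"{document_text}\n\nQuestion: {question}"},
        ],
    }

p1 = make_payload("llama3.1:70b", "Extract fields with citations.", "DOC A ...", "What is the date?")
p2 = make_payload("llama3.1:70b", "Extract fields with citations.", "DOC B ...", "What is the date?")
print(p1["messages"][1]["content"][:5], p2["messages"][1]["content"][:5])
```

Each payload would then be POSTed to `/api/chat` independently; as long as no `context` value and no prior-document messages are included, the server treats every request as a fresh conversation.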
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6838/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1914
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1914/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1914/comments
|
https://api.github.com/repos/ollama/ollama/issues/1914/events
|
https://github.com/ollama/ollama/pull/1914
| 2,075,372,743
|
PR_kwDOJ0Z1Ps5jvQlF
| 1,914
|
Smarter GPU Management library detection
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-10T23:07:34
| 2024-01-11T01:28:42
| 2024-01-10T23:21:57
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1914",
"html_url": "https://github.com/ollama/ollama/pull/1914",
"diff_url": "https://github.com/ollama/ollama/pull/1914.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1914.patch",
"merged_at": "2024-01-10T23:21:57"
}
|
When there are multiple management libraries installed on a system, not every one will be compatible with the current driver. This change improves our management library detection: it builds up a set of discovered libraries based on glob patterns, then tries each of them until one loads without error.
Fixes #1903
Fixes #1898
Fixes #1888
Fixes #1879
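The approach described above can be sketched roughly as follows (a simplified illustration, not the actual PR code; `try_load` is a stand-in for the real library-loading call, and the paths are made up):

```python
def try_load(path):
    # Stand-in for attempting to load a GPU management library;
    # here it simply treats one well-known filename as loadable.
    return path.endswith("libnvidia-ml.so.1")

def find_working_library(candidates):
    # Try each discovered candidate in order; the first one that
    # loads without error wins.
    for c in candidates:
        if try_load(c):
            return c
    return None

# In the real code this candidate set comes from glob patterns such as
# /usr/lib/**/libnvidia-ml.so*; these paths are illustrative only.
candidates = [
    "/opt/cuda/lib/libnvidia-ml.so.999",
    "/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1",
]
print(find_working_library(candidates))
```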
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1914/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6712
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6712/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6712/comments
|
https://api.github.com/repos/ollama/ollama/issues/6712/events
|
https://github.com/ollama/ollama/issues/6712
| 2,514,175,809
|
I_kwDOJ0Z1Ps6V20dB
| 6,712
|
400 Bad Request when running behind Nginx Proxy Manager
|
{
"login": "Joly0",
"id": 13993216,
"node_id": "MDQ6VXNlcjEzOTkzMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/13993216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Joly0",
"html_url": "https://github.com/Joly0",
"followers_url": "https://api.github.com/users/Joly0/followers",
"following_url": "https://api.github.com/users/Joly0/following{/other_user}",
"gists_url": "https://api.github.com/users/Joly0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Joly0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Joly0/subscriptions",
"organizations_url": "https://api.github.com/users/Joly0/orgs",
"repos_url": "https://api.github.com/users/Joly0/repos",
"events_url": "https://api.github.com/users/Joly0/events{/privacy}",
"received_events_url": "https://api.github.com/users/Joly0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 14
| 2024-09-09T14:45:23
| 2024-10-17T09:00:21
| 2024-10-08T19:27:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hey guys, I have an Ollama instance that I would like to make public (with basic auth, of course) through Nginx Proxy Manager, but whenever I try to reach the API, even with a simple request like `Invoke-RestMethod -Method Get -Uri https://ollama.mydoamin.com/api/tags`, I get the error `Invoke-RestMethod: 400 Bad Request`.
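A frequent cause (an assumption worth checking, not confirmed from the report) is that the proxy forwards the public hostname in the `Host` header, which the Ollama server rejects. A sketch of the usual nginx workaround, rewriting the header to the upstream address:

```nginx
location / {
    proxy_pass http://127.0.0.1:11434;
    proxy_set_header Host localhost:11434;
}
```

In Nginx Proxy Manager this can go into the host's "Advanced" custom configuration.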
### OS
Docker
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.9
|
{
"login": "Joly0",
"id": 13993216,
"node_id": "MDQ6VXNlcjEzOTkzMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/13993216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Joly0",
"html_url": "https://github.com/Joly0",
"followers_url": "https://api.github.com/users/Joly0/followers",
"following_url": "https://api.github.com/users/Joly0/following{/other_user}",
"gists_url": "https://api.github.com/users/Joly0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Joly0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Joly0/subscriptions",
"organizations_url": "https://api.github.com/users/Joly0/orgs",
"repos_url": "https://api.github.com/users/Joly0/repos",
"events_url": "https://api.github.com/users/Joly0/events{/privacy}",
"received_events_url": "https://api.github.com/users/Joly0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6712/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5339
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5339/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5339/comments
|
https://api.github.com/repos/ollama/ollama/issues/5339/events
|
https://github.com/ollama/ollama/issues/5339
| 2,378,928,059
|
I_kwDOJ0Z1Ps6Ny4-7
| 5,339
|
Deepseek coder v2 is providing gibberish output
|
{
"login": "Manik04IISER",
"id": 120251924,
"node_id": "U_kgDOByrmFA",
"avatar_url": "https://avatars.githubusercontent.com/u/120251924?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Manik04IISER",
"html_url": "https://github.com/Manik04IISER",
"followers_url": "https://api.github.com/users/Manik04IISER/followers",
"following_url": "https://api.github.com/users/Manik04IISER/following{/other_user}",
"gists_url": "https://api.github.com/users/Manik04IISER/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Manik04IISER/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Manik04IISER/subscriptions",
"organizations_url": "https://api.github.com/users/Manik04IISER/orgs",
"repos_url": "https://api.github.com/users/Manik04IISER/repos",
"events_url": "https://api.github.com/users/Manik04IISER/events{/privacy}",
"received_events_url": "https://api.github.com/users/Manik04IISER/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 4
| 2024-06-27T19:20:14
| 2025-01-06T07:04:10
| 2025-01-06T07:04:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The model is Deepseek Coder v2 16B (quant 5_K_M).
I provided a code block to the model and it started to produce gibberish, whereas every other model works fine.

The log file :

### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.47
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5339/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7161
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7161/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7161/comments
|
https://api.github.com/repos/ollama/ollama/issues/7161/events
|
https://github.com/ollama/ollama/issues/7161
| 2,578,850,725
|
I_kwDOJ0Z1Ps6ZtiOl
| 7,161
|
Problem loading LLM model on Jetson AGX Orin Developer Kit (64GB)
|
{
"login": "witold-gren",
"id": 2304938,
"node_id": "MDQ6VXNlcjIzMDQ5Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2304938?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/witold-gren",
"html_url": "https://github.com/witold-gren",
"followers_url": "https://api.github.com/users/witold-gren/followers",
"following_url": "https://api.github.com/users/witold-gren/following{/other_user}",
"gists_url": "https://api.github.com/users/witold-gren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/witold-gren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/witold-gren/subscriptions",
"organizations_url": "https://api.github.com/users/witold-gren/orgs",
"repos_url": "https://api.github.com/users/witold-gren/repos",
"events_url": "https://api.github.com/users/witold-gren/events{/privacy}",
"received_events_url": "https://api.github.com/users/witold-gren/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-10-10T13:25:42
| 2024-10-11T23:41:14
| 2024-10-11T23:40:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hey, thanks for your great contribution to this project. I use it on a normal computer with an RTX 4090 card and everything works very well. However, I have a problem with my Nvidia Jetson AGX Orin. I'm trying to run it the same way, and I installed Ollama using the command:
```
curl -fsSL https://ollama.com/install.sh | sh
```
but when I try to load an LLM model:
```
ollama run SpeakLeash/bielik-11b-v2.3-instruct:Q4_K_M
```
I see that Ollama can't load this model. I also see that even though my GPU was recognized, it is not used during model loading. Below are the logs from the command `journalctl -e -u ollama`:
```
paź 10 14:48:09 jetson systemd[1]: Started Ollama Service.
paź 10 14:48:09 jetson ollama[5761]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
paź 10 14:48:09 jetson ollama[5761]: Your new public key is:
paź 10 14:48:09 jetson ollama[5761]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH5sELAWBM8Np0o8l13zZlj0nCPYuuApt4h+ijT5qYo6
paź 10 14:48:09 jetson ollama[5761]: 2024/10/10 14:48:09 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLA>
paź 10 14:48:09 jetson ollama[5761]: time=2024-10-10T14:48:09.967+02:00 level=INFO source=images.go:753 msg="total blobs: 0"
paź 10 14:48:09 jetson ollama[5761]: time=2024-10-10T14:48:09.967+02:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
paź 10 14:48:09 jetson ollama[5761]: time=2024-10-10T14:48:09.967+02:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
paź 10 14:48:09 jetson ollama[5761]: time=2024-10-10T14:48:09.968+02:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama1142511168/runners
paź 10 14:48:26 jetson ollama[5761]: time=2024-10-10T14:48:26.857+02:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 cpu]"
paź 10 14:48:26 jetson ollama[5761]: time=2024-10-10T14:48:26.857+02:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
paź 10 14:48:27 jetson ollama[5761]: time=2024-10-10T14:48:27.120+02:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-771384f7-53b6-57c9-a4c5-4fa00f6622bd library=cuda variant=jetpack6 compute=8.7 driver=12.6 name=Orin total="61.4 GiB" av>
paź 10 14:48:31 jetson ollama[5761]: [GIN] 2024/10/10 - 14:48:31 | 200 | 551.04µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:49:01 jetson ollama[5761]: [GIN] 2024/10/10 - 14:49:01 | 200 | 273.633µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:49:31 jetson ollama[5761]: [GIN] 2024/10/10 - 14:49:31 | 200 | 192.897µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:50:01 jetson ollama[5761]: [GIN] 2024/10/10 - 14:50:01 | 200 | 168.992µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:50:31 jetson ollama[5761]: [GIN] 2024/10/10 - 14:50:31 | 200 | 227.073µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:51:01 jetson ollama[5761]: [GIN] 2024/10/10 - 14:51:01 | 200 | 255.329µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:51:31 jetson ollama[5761]: [GIN] 2024/10/10 - 14:51:31 | 200 | 194.4µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:52:01 jetson ollama[5761]: [GIN] 2024/10/10 - 14:52:01 | 200 | 225.184µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:52:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:52:32 | 200 | 264.224µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:53:02 jetson ollama[5761]: [GIN] 2024/10/10 - 14:53:02 | 200 | 216.992µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:53:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:53:32 | 200 | 169.984µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:54:02 jetson ollama[5761]: [GIN] 2024/10/10 - 14:54:02 | 200 | 265.888µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:54:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:54:32 | 200 | 417.057µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:55:02 jetson ollama[5761]: [GIN] 2024/10/10 - 14:55:02 | 200 | 178.624µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:55:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:55:32 | 200 | 273.664µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:56:02 jetson ollama[5761]: [GIN] 2024/10/10 - 14:56:02 | 200 | 270.304µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:56:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:56:32 | 200 | 389.92µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:57:02 jetson ollama[5761]: [GIN] 2024/10/10 - 14:57:02 | 200 | 187.68µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:57:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:57:32 | 200 | 168.749µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:58:03 jetson ollama[5761]: [GIN] 2024/10/10 - 14:58:03 | 200 | 187.565µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:58:33 jetson ollama[5761]: [GIN] 2024/10/10 - 14:58:33 | 200 | 264.209µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:59:03 jetson ollama[5761]: [GIN] 2024/10/10 - 14:59:03 | 200 | 220.493µs | 127.0.0.1 | GET "/api/tags"
paź 10 14:59:33 jetson ollama[5761]: [GIN] 2024/10/10 - 14:59:33 | 200 | 257.006µs | 127.0.0.1 | GET "/api/tags"
paź 10 15:00:03 jetson ollama[5761]: [GIN] 2024/10/10 - 15:00:03 | 200 | 220.075µs | 127.0.0.1 | GET "/api/tags"
paź 10 15:00:12 jetson ollama[5761]: [GIN] 2024/10/10 - 15:00:12 | 200 | 74.756µs | 127.0.0.1 | GET "/api/version"
paź 10 15:00:33 jetson ollama[5761]: [GIN] 2024/10/10 - 15:00:33 | 200 | 254.476µs | 127.0.0.1 | GET "/api/tags"
paź 10 15:00:58 jetson ollama[5761]: [GIN] 2024/10/10 - 15:00:58 | 200 | 45.634µs | 127.0.0.1 | HEAD "/"
paź 10 15:00:58 jetson ollama[5761]: [GIN] 2024/10/10 - 15:00:58 | 404 | 350.094µs | 127.0.0.1 | POST "/api/show"
paź 10 15:00:59 jetson ollama[5761]: time=2024-10-10T15:00:59.926+02:00 level=INFO source=download.go:175 msg="downloading ece698889c07 in 16 420 MB part(s)"
paź 10 15:01:03 jetson ollama[5761]: [GIN] 2024/10/10 - 15:01:03 | 200 | 225.129µs | 127.0.0.1 | GET "/api/tags"
paź 10 15:01:33 jetson ollama[5761]: [GIN] 2024/10/10 - 15:01:33 | 200 | 190.312µs | 127.0.0.1 | GET "/api/tags"
paź 10 15:02:03 jetson ollama[5761]: [GIN] 2024/10/10 - 15:02:03 | 200 | 177.383µs | 127.0.0.1 | GET "/api/tags"
paź 10 15:02:10 jetson ollama[5761]: time=2024-10-10T15:02:10.048+02:00 level=INFO source=download.go:175 msg="downloading f7426507909a in 1 263 B part(s)"
paź 10 15:02:12 jetson ollama[5761]: time=2024-10-10T15:02:12.166+02:00 level=INFO source=download.go:175 msg="downloading 3685c9d39c8b in 1 114 B part(s)"
paź 10 15:02:14 jetson ollama[5761]: time=2024-10-10T15:02:14.263+02:00 level=INFO source=download.go:175 msg="downloading d0b273b04783 in 1 414 B part(s)"
paź 10 15:02:22 jetson ollama[5761]: [GIN] 2024/10/10 - 15:02:22 | 200 | 1m23s | 127.0.0.1 | POST "/api/pull"
paź 10 15:02:22 jetson ollama[5761]: [GIN] 2024/10/10 - 15:02:22 | 200 | 17.204756ms | 127.0.0.1 | POST "/api/show"
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.393+02:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-ece698889c07d4a98a8fb7c9968ad7ad2>
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.393+02:00 level=INFO source=server.go:103 msg="system memory" total="61.4 GiB" free="54.8 GiB" free_swap="30.7 GiB"
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.395+02:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=51 layers.offload=51 layers.split="" memory.available="[54.6 GiB]" memory.gpu_overhead="0 B" me>
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.399+02:00 level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama1142511168/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-ece6>
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.400+02:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.400+02:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.400+02:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
paź 10 15:02:22 jetson ollama[8696]: INFO [main] build info | build=10 commit="fd5a74e" tid="281472960698432" timestamp=1728565342
paź 10 15:02:22 jetson ollama[8696]: INFO [main] system info | n_threads=12 n_threads_batch=12 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 >
paź 10 15:02:22 jetson ollama[8696]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="42635" tid="281472960698432" timestamp=1728565342
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: loaded meta data with 32 key-value pairs and 453 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-ece698889c07d4a98a8fb7c9968ad7ad20961cf824c0b008895fe0506c87b834 (version GGUF V3 (latest>
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 0: general.architecture str = llama
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 1: general.type str = model
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 2: general.name str = tekken
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 3: general.version str = 0-2
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 4: general.size_label str = 11B
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 5: general.base_model.count u32 = 0
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 6: general.tags arr[str,2] = ["mergekit", "merge"]
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 7: llama.block_count u32 = 50
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 8: llama.context_length u32 = 32768
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 9: llama.embedding_length u32 = 4096
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 10: llama.feed_forward_length u32 = 14336
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 11: llama.attention.head_count u32 = 32
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 12: llama.attention.head_count_kv u32 = 8
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 13: llama.rope.freq_base f32 = 1000000.000000
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 14: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 15: general.file_type u32 = 15
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 16: llama.vocab_size u32 = 32128
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 18: tokenizer.ggml.add_space_prefix bool = true
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 19: tokenizer.ggml.model str = llama
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 20: tokenizer.ggml.pre str = default
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 21: tokenizer.ggml.tokens arr[str,32128] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 22: tokenizer.ggml.scores arr[f32,32128] = [-1000.000000, -1000.000000, -1000.00...
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,32128] = [3, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 24: tokenizer.ggml.bos_token_id u32 = 1
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 25: tokenizer.ggml.eos_token_id u32 = 32001
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 26: tokenizer.ggml.unknown_token_id u32 = 0
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 2
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 28: tokenizer.ggml.add_bos_token bool = false
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 29: tokenizer.ggml.add_eos_token bool = false
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 30: tokenizer.chat_template str = {{bos_token}}{% for message in messag...
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv 31: general.quantization_version u32 = 2
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - type f32: 101 tensors
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - type q4_K: 301 tensors
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - type q6_K: 51 tensors
paź 10 15:02:22 jetson ollama[5761]: llm_load_vocab: special tokens cache size = 131
paź 10 15:02:22 jetson ollama[5761]: llm_load_vocab: token to piece cache size = 0.1654 MB
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: format = GGUF V3 (latest)
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: arch = llama
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: vocab type = SPM
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_vocab = 32128
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_merges = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: vocab_only = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_ctx_train = 32768
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_embd = 4096
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_layer = 50
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_head = 32
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_head_kv = 8
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_rot = 128
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_swa = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_embd_head_k = 128
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_embd_head_v = 128
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_gqa = 4
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_embd_k_gqa = 1024
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_embd_v_gqa = 1024
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: f_norm_eps = 0.0e+00
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: f_logit_scale = 0.0e+00
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_ff = 14336
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_expert = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_expert_used = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: causal attn = 1
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: pooling type = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: rope type = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: rope scaling = linear
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: freq_base_train = 1000000.0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: freq_scale_train = 1
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_ctx_orig_yarn = 32768
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: rope_finetuned = unknown
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: ssm_d_conv = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: ssm_d_inner = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: ssm_d_state = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: ssm_dt_rank = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: ssm_dt_b_c_rms = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: model type = ?B
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: model ftype = Q4_K - Medium
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: model params = 11.17 B
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: model size = 6.26 GiB (4.82 BPW)
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: general.name = tekken
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: BOS token = 1 '<s>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: EOS token = 32001 '<|im_end|>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: UNK token = 0 '<unk>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: PAD token = 2 '</s>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: LF token = 13 '<0x0A>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: EOT token = 32001 '<|im_end|>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: max token length = 48
paź 10 15:02:22 jetson ollama[5761]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
paź 10 15:02:22 jetson ollama[5761]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
paź 10 15:02:22 jetson ollama[5761]: ggml_cuda_init: found 1 CUDA devices:
paź 10 15:02:22 jetson ollama[5761]: Device 0: Orin, compute capability 8.7, VMM: yes
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.652+02:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
paź 10 15:02:33 jetson ollama[5761]: [GIN] 2024/10/10 - 15:02:33 | 200 | 595.667µs | 127.0.0.1 | GET "/api/tags"
paź 10 15:07:22 jetson ollama[5761]: time=2024-10-10T15:07:22.421+02:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
paź 10 15:07:22 jetson ollama[5761]: [GIN] 2024/10/10 - 15:07:22 | 500 | 5m0s | 127.0.0.1 | POST "/api/generate"
paź 10 15:07:27 jetson ollama[5761]: time=2024-10-10T15:07:27.635+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.214104708 model=/usr/share/ollama/.ollama/models/blobs/sha256-ece698889c07d4a98a8fb7c9968a>
paź 10 15:07:27 jetson ollama[5761]: time=2024-10-10T15:07:27.886+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.464459141 model=/usr/share/ollama/.ollama/models/blobs/sha256-ece698889c07d4a98a8fb7c9968a>
paź 10 15:07:28 jetson ollama[5761]: time=2024-10-10T15:07:28.135+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.713591502 model=/usr/share/ollama/.ollama/models/blobs/sha256-ece698889c07d4a98a8fb7c9968a>
```
What can I do to work around this problem temporarily? I'm using this JetPack version (`sudo apt show nvidia-jetpack`):
```
Package: nvidia-jetpack
Version: 6.1+b123
Priority: standard
Section: metapackages
Source: nvidia-jetpack (6.1)
Maintainer: NVIDIA Corporation
Installed-Size: 199 kB
Depends: nvidia-jetpack-runtime (= 6.1+b123), nvidia-jetpack-dev (= 6.1+b123)
Homepage: http://developer.nvidia.com/jetson
Download-Size: 29,3 kB
APT-Sources: https://repo.download.nvidia.com/jetson/common r36.4/main arm64 Packages
Description: NVIDIA Jetpack Meta Package
```
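The failure above is the scheduler giving up after its five-minute load timeout (`timed out waiting for llama runner to start - progress 0.00`). As a possible temporary workaround, recent Ollama releases support raising that limit via the `OLLAMA_LOAD_TIMEOUT` environment variable (assumed applicable to this version); a systemd drop-in sketch:

```
# /etc/systemd/system/ollama.service.d/override.conf
# apply with: sudo systemctl daemon-reload && sudo systemctl restart ollama
[Service]
Environment="OLLAMA_LOAD_TIMEOUT=15m"
```

This only buys the runner more time to load; if loading never progresses past 0.00, the underlying CUDA/JetPack issue still needs to be diagnosed.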
### OS
Linux
### GPU
Nvidia
### CPU
Other
### Ollama version
0.3.12
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7161/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8598
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8598/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8598/comments
|
https://api.github.com/repos/ollama/ollama/issues/8598/events
|
https://github.com/ollama/ollama/issues/8598
| 2,811,838,719
|
I_kwDOJ0Z1Ps6nmUD_
| 8,598
|
Error Running Mistral Nemo Imported from .safetensors
|
{
"login": "aallgeier",
"id": 121313302,
"node_id": "U_kgDOBzsYFg",
"avatar_url": "https://avatars.githubusercontent.com/u/121313302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aallgeier",
"html_url": "https://github.com/aallgeier",
"followers_url": "https://api.github.com/users/aallgeier/followers",
"following_url": "https://api.github.com/users/aallgeier/following{/other_user}",
"gists_url": "https://api.github.com/users/aallgeier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aallgeier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aallgeier/subscriptions",
"organizations_url": "https://api.github.com/users/aallgeier/orgs",
"repos_url": "https://api.github.com/users/aallgeier/repos",
"events_url": "https://api.github.com/users/aallgeier/events{/privacy}",
"received_events_url": "https://api.github.com/users/aallgeier/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-26T23:15:22
| 2025-01-26T23:27:58
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I encountered an error when attempting to run the Mistral Nemo model imported from `.safetensors`. I intend to run the model on CPU only, even though I have a GPU (see the Modelfile below).
- I am able to run the model converted to `.gguf`.
- However, I would like to import and run directly from `.safetensors` if possible.
### Steps to Reproduce
1. Download model files from [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407/tree/main).
2. Create a `Modelfile` with the following content:
```
FROM <PATH TO .SAFETENSOR FILES>
PARAMETER num_gpu 0
```
3. Start the ollama server: `ollama serve`
4. Create the model: `ollama create nemo -f Modelfile`
5. Run the model: `ollama run nemo`
- **Error message**: `Error: llama runner process has terminated: error loading model: error loading model hyperparameters: invalid n_rot: 160, expected 128 llama_load_model_from_file: failed to load model`
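The `n_rot` mismatch likely comes from Mistral Nemo setting an explicit `head_dim` in its config: a loader that infers the rotary dimension as `hidden_size / num_attention_heads` gets 160, while the model's actual head dimension is 128. A minimal sketch of the arithmetic (hyperparameter values assumed from Nemo's published `config.json`):

```python
# Hyperparameters as published for Mistral-Nemo-Instruct-2407
# (assumed here for illustration)
hidden_size = 5120
num_attention_heads = 32
head_dim = 128  # Nemo sets this explicitly; it is NOT hidden_size / num_heads

# What a converter infers if it ignores the explicit head_dim:
inferred_n_rot = hidden_size // num_attention_heads
print(inferred_n_rot)  # 160 -- matches "invalid n_rot: 160" in the error
print(head_dim)        # 128 -- the value the runner expected
```

This would explain why the `.gguf` produced by a converter that honors `head_dim` runs fine while the direct `.safetensors` import does not.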
### OS, GPU, CPU
- OS: Linux fedora 6.12.6
- GPU: Radeon RX 7600 XT
- CPU: AMD Ryzen 7 7700X
- RAM: 64GB
Thank you in advance for the help!
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8598/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8598/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4535
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4535/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4535/comments
|
https://api.github.com/repos/ollama/ollama/issues/4535/events
|
https://github.com/ollama/ollama/pull/4535
| 2,305,902,909
|
PR_kwDOJ0Z1Ps5v9HOx
| 4,535
|
Correct typo in error message
|
{
"login": "likejazz",
"id": 1250095,
"node_id": "MDQ6VXNlcjEyNTAwOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1250095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/likejazz",
"html_url": "https://github.com/likejazz",
"followers_url": "https://api.github.com/users/likejazz/followers",
"following_url": "https://api.github.com/users/likejazz/following{/other_user}",
"gists_url": "https://api.github.com/users/likejazz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/likejazz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/likejazz/subscriptions",
"organizations_url": "https://api.github.com/users/likejazz/orgs",
"repos_url": "https://api.github.com/users/likejazz/repos",
"events_url": "https://api.github.com/users/likejazz/events{/privacy}",
"received_events_url": "https://api.github.com/users/likejazz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-20T12:34:40
| 2024-05-21T23:09:58
| 2024-05-21T20:39:02
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4535",
"html_url": "https://github.com/ollama/ollama/pull/4535",
"diff_url": "https://github.com/ollama/ollama/pull/4535.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4535.patch",
"merged_at": "2024-05-21T20:39:02"
}
|
Corrects the spelling of the term "request", which was previously mistakenly written as "requeset" in an error log message.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4535/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6149
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6149/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6149/comments
|
https://api.github.com/repos/ollama/ollama/issues/6149/events
|
https://github.com/ollama/ollama/issues/6149
| 2,446,249,341
|
I_kwDOJ0Z1Ps6Rzs19
| 6,149
|
Why does the NVIDIA GPU keep crashing when using ./ollama-linux-amd64?
|
{
"login": "tifDev",
"id": 39730484,
"node_id": "MDQ6VXNlcjM5NzMwNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/39730484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tifDev",
"html_url": "https://github.com/tifDev",
"followers_url": "https://api.github.com/users/tifDev/followers",
"following_url": "https://api.github.com/users/tifDev/following{/other_user}",
"gists_url": "https://api.github.com/users/tifDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tifDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tifDev/subscriptions",
"organizations_url": "https://api.github.com/users/tifDev/orgs",
"repos_url": "https://api.github.com/users/tifDev/repos",
"events_url": "https://api.github.com/users/tifDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/tifDev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-08-03T08:50:48
| 2024-10-24T03:18:01
| 2024-10-24T03:17:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello,
I've tried the portable edition that doesn't need root installation (./ollama-linux-amd64).
Everything works fine, but after a couple of minutes the GPU stops working and Ollama starts to use the CPU only.
This is the error I faced:
``` log
ggml_cuda_init: failed to initialize CUDA: unknown error
llm_load_tensors: ggml ctx size = 0.14 MiB
llm_load_tensors: offloading 21 repeating layers to GPU
llm_load_tensors: offloaded 21/33 layers to GPU
llm_load_tensors: CPU buffer size = 4437.80 MiB
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
ggml_cuda_host_malloc: failed to allocate 256.00 MiB of pinned memory: unknown error
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
ggml_cuda_host_malloc: failed to allocate 0.50 MiB of pinned memory: unknown error
llama_new_context_with_model: CPU output buffer size = 0.50 MiB
ggml_cuda_host_malloc: failed to allocate 258.50 MiB of pinned memory: unknown error
llama_new_context_with_model: CUDA_Host compute buffer size = 258.50 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="132032175734784" timestamp=1722637437
time=2024-08-02T23:23:57.145+01:00 level=INFO source=server.go:609 msg="llama runner started in 1.51 seconds"
INFO [update_slots] input truncated | n_ctx=2048 n_erase=1611 n_keep=4 n_left=2044 n_shift=1022 tid="132032175734784" timestamp=1722637437
[GIN] 2024/08/02 - 23:24:46 | 500 | 4m0s | 127.0.0.1 | POST "/api/chat"
cuda driver library failed to get device context 999time=2024-08-02T23:29:46.493+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:46.744+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:46.995+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:47.245+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:47.495+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:47.744+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:47.994+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:48.245+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:48.495+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:48.744+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:48.995+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:49.244+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:49.494+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:49.745+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:49.995+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:50.245+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:50.495+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:50.745+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:50.994+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 999time=2024-08-02T23:29:51.245+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
time=2024-08-02T23:29:51.494+01:00 level=WARN source=sched.go:674 msg="gpu VRAM usage didn't recover within timeout" seconds=5.001706621 model=/home/xxxx/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87
cuda driver library failed to get device context 999time=2024-08-02T23:29:51.494+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
time=2024-08-02T23:29:51.744+01:00 level=WARN source=sched.go:674 msg="gpu VRAM usage didn't recover within timeout" seconds=5.251527354 model=/home/xxxx/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87
cuda driver library failed to get device context 999time=2024-08-02T23:29:51.745+01:00 level=WARN source=gpu.go:374 msg="error looking up nvidia GPU memory"
time=2024-08-02T23:29:51.994+01:00 level=WARN source=sched.go:674 msg="gpu VRAM usage didn't recover within timeout" seconds=5.501999011
```
As a workaround, I need to execute:
`sudo modprobe -r nvidia_uvm && sudo modprobe nvidia_uvm`
With the system-wide installation this issue doesn't happen.
Can you suggest why this is happening and how to solve it?
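Until the root cause is fixed, the manual workaround above can be sketched as a small watchdog script. The log file path and the idea of keying off the "device context 999" signature are illustrative assumptions, not part of ollama itself; the modprobe commands are the same ones from the report:

```shell
#!/bin/sh
# Watchdog sketch: reload nvidia_uvm when the ollama log shows the stuck
# CUDA driver signature. LOG_FILE is an assumed path; point it at wherever
# your `ollama serve` output goes.
LOG_FILE="${1:-/tmp/ollama.log}"

needs_reload() {
    # Succeeds if the known failure signature appears in the given log file.
    grep -q "cuda driver library failed to get device context 999" "$1"
}

if needs_reload "$LOG_FILE"; then
    # Requires root; identical to the manual workaround.
    sudo modprobe -r nvidia_uvm && sudo modprobe nvidia_uvm
fi
```

Running it from cron or a systemd timer would automate the recovery, at the cost of briefly unloading the UVM module for any other CUDA process.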
Rgds,
KS
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.3.3
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6149/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6149/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6991
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6991/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6991/comments
|
https://api.github.com/repos/ollama/ollama/issues/6991/events
|
https://github.com/ollama/ollama/pull/6991
| 2,551,623,293
|
PR_kwDOJ0Z1Ps582kB9
| 6,991
|
llama: wire up builtin runner to main binary
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-26T22:24:29
| 2024-10-08T16:17:34
| 2024-10-08T15:53:58
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6991",
"html_url": "https://github.com/ollama/ollama/pull/6991",
"diff_url": "https://github.com/ollama/ollama/pull/6991.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6991.patch",
"merged_at": null
}
|
Replaced by #7138 on main
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6991/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4761
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4761/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4761/comments
|
https://api.github.com/repos/ollama/ollama/issues/4761/events
|
https://github.com/ollama/ollama/pull/4761
| 2,328,763,972
|
PR_kwDOJ0Z1Ps5xLT3c
| 4,761
|
revert tokenize ffi
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-01T00:25:44
| 2024-06-01T01:54:22
| 2024-06-01T01:54:21
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4761",
"html_url": "https://github.com/ollama/ollama/pull/4761",
"diff_url": "https://github.com/ollama/ollama/pull/4761.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4761.patch",
"merged_at": "2024-06-01T01:54:21"
}
|
This change reverts the series of changes introduced to call tokenize/detokenize. There's a bug specifically on Windows where it'll segfault loading deepseek-llm's pretokenizer regexp. The most likely candidate is unicode support differences between MinGW (used by cgo) and MSVC (used by the subprocess).
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4761/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4010
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4010/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4010/comments
|
https://api.github.com/repos/ollama/ollama/issues/4010/events
|
https://github.com/ollama/ollama/issues/4010
| 2,267,908,860
|
I_kwDOJ0Z1Ps6HLYr8
| 4,010
|
How to set 'verbose' ON by default after a model is loaded?
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-29T00:35:01
| 2024-04-29T15:38:13
| 2024-04-29T15:38:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How to set 'verbose' ON by default after a model is loaded?
It is annoying to type /set verbose every time.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4010/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3099
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3099/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3099/comments
|
https://api.github.com/repos/ollama/ollama/issues/3099/events
|
https://github.com/ollama/ollama/issues/3099
| 2,183,623,078
|
I_kwDOJ0Z1Ps6CJ3Gm
| 3,099
|
Working with gptscript
|
{
"login": "prologic",
"id": 1290234,
"node_id": "MDQ6VXNlcjEyOTAyMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1290234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prologic",
"html_url": "https://github.com/prologic",
"followers_url": "https://api.github.com/users/prologic/followers",
"following_url": "https://api.github.com/users/prologic/following{/other_user}",
"gists_url": "https://api.github.com/users/prologic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prologic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prologic/subscriptions",
"organizations_url": "https://api.github.com/users/prologic/orgs",
"repos_url": "https://api.github.com/users/prologic/repos",
"events_url": "https://api.github.com/users/prologic/events{/privacy}",
"received_events_url": "https://api.github.com/users/prologic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-03-13T10:20:17
| 2024-03-13T14:38:14
| 2024-03-13T14:37:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Just wanted to bring your attention to this nice little project called [gptscript](https://github.com/gptscript-ai/gptscript) that mentions not working natively with Ollama [here](https://github.com/gptscript-ai/gptscript/issues/136#issuecomment-1993903566) due to a missing `/models` endpoint. I had a quick look around the codebase and I _believe_ this isn't the case. Maybe this info is outdated and it works now? 🤔
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3099/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2667
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2667/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2667/comments
|
https://api.github.com/repos/ollama/ollama/issues/2667/events
|
https://github.com/ollama/ollama/issues/2667
| 2,148,389,832
|
I_kwDOJ0Z1Ps6ADdPI
| 2,667
|
Trojan:Script/Wacatac.B!ml after Ollama update
|
{
"login": "gargakk",
"id": 11261036,
"node_id": "MDQ6VXNlcjExMjYxMDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/11261036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gargakk",
"html_url": "https://github.com/gargakk",
"followers_url": "https://api.github.com/users/gargakk/followers",
"following_url": "https://api.github.com/users/gargakk/following{/other_user}",
"gists_url": "https://api.github.com/users/gargakk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gargakk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gargakk/subscriptions",
"organizations_url": "https://api.github.com/users/gargakk/orgs",
"repos_url": "https://api.github.com/users/gargakk/repos",
"events_url": "https://api.github.com/users/gargakk/events{/privacy}",
"received_events_url": "https://api.github.com/users/gargakk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-22T07:20:24
| 2024-02-22T07:30:08
| 2024-02-22T07:27:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Today, after an automatic Ollama update on a Windows machine, the system found Trojan:Script/Wacatac.B!ml.
Why?

|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2667/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2667/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1594
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1594/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1594/comments
|
https://api.github.com/repos/ollama/ollama/issues/1594/events
|
https://github.com/ollama/ollama/issues/1594
| 2,047,830,033
|
I_kwDOJ0Z1Ps56D2gR
| 1,594
|
Won't run on AMD or Intel GPUs?
|
{
"login": "srgantmoomoo",
"id": 69589624,
"node_id": "MDQ6VXNlcjY5NTg5NjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69589624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srgantmoomoo",
"html_url": "https://github.com/srgantmoomoo",
"followers_url": "https://api.github.com/users/srgantmoomoo/followers",
"following_url": "https://api.github.com/users/srgantmoomoo/following{/other_user}",
"gists_url": "https://api.github.com/users/srgantmoomoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srgantmoomoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srgantmoomoo/subscriptions",
"organizations_url": "https://api.github.com/users/srgantmoomoo/orgs",
"repos_url": "https://api.github.com/users/srgantmoomoo/repos",
"events_url": "https://api.github.com/users/srgantmoomoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/srgantmoomoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 25
| 2023-12-19T03:02:47
| 2023-12-19T20:02:55
| 2023-12-19T19:57:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It seems that I cannot get this to run on my AMD or my Intel machine... does it only support NVIDIA GPUs?
I keep getting this...
```
2023/12/18 21:59:15 images.go:737: total blobs: 0
2023/12/18 21:59:15 images.go:744: total unused blobs removed: 0
2023/12/18 21:59:15 routes.go:871: Listening on 127.0.0.1:11434 (version 0.1.16)
2023/12/18 21:59:15 routes.go:891: warning: gpu support may not be enabled, check that you have installed GPU drivers: nvidia-smi command failed
```
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1594/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/702
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/702/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/702/comments
|
https://api.github.com/repos/ollama/ollama/issues/702/events
|
https://github.com/ollama/ollama/pull/702
| 1,926,953,741
|
PR_kwDOJ0Z1Ps5b8Kp2
| 702
|
display a message during a long model load in interactive mode
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-04T20:46:26
| 2023-10-20T16:43:54
| 2023-10-11T16:55:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/702",
"html_url": "https://github.com/ollama/ollama/pull/702",
"diff_url": "https://github.com/ollama/ollama/pull/702.diff",
"patch_url": "https://github.com/ollama/ollama/pull/702.patch",
"merged_at": null
}
|
Previous behavior:
The user must wait for the model to load while a spinner is displayed. This could take a while for large models.
New behavior:
After 30 seconds the spinner displays the message "please wait...". This will be removed from the display once there is a response from the generate endpoint.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/702/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1881
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1881/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1881/comments
|
https://api.github.com/repos/ollama/ollama/issues/1881/events
|
https://github.com/ollama/ollama/issues/1881
| 2,073,404,373
|
I_kwDOJ0Z1Ps57laPV
| 1,881
|
Only generate lots of hashes
|
{
"login": "ZhihaoZhang97",
"id": 31653817,
"node_id": "MDQ6VXNlcjMxNjUzODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/31653817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhihaoZhang97",
"html_url": "https://github.com/ZhihaoZhang97",
"followers_url": "https://api.github.com/users/ZhihaoZhang97/followers",
"following_url": "https://api.github.com/users/ZhihaoZhang97/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhihaoZhang97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhihaoZhang97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhihaoZhang97/subscriptions",
"organizations_url": "https://api.github.com/users/ZhihaoZhang97/orgs",
"repos_url": "https://api.github.com/users/ZhihaoZhang97/repos",
"events_url": "https://api.github.com/users/ZhihaoZhang97/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhihaoZhang97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2024-01-10T00:58:30
| 2024-01-27T02:47:24
| 2024-01-27T02:47:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

Not sure if I am the first to encounter this issue: when I installed Ollama and ran llama2 from the Quickstart, it only outputs a lot of '####' characters.
I suspect this might be caused by hardware or software settings on my newly updated system, since it works on my old rig with an i9-9900K and dual RTX 3090s.
As shown in the screenshot below, I am currently using Pop!_OS with an AMD Threadripper 3960X and dual RTX 3090s.

Any help would be greatly appreciated, thank you!
|
{
"login": "ZhihaoZhang97",
"id": 31653817,
"node_id": "MDQ6VXNlcjMxNjUzODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/31653817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhihaoZhang97",
"html_url": "https://github.com/ZhihaoZhang97",
"followers_url": "https://api.github.com/users/ZhihaoZhang97/followers",
"following_url": "https://api.github.com/users/ZhihaoZhang97/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhihaoZhang97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhihaoZhang97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhihaoZhang97/subscriptions",
"organizations_url": "https://api.github.com/users/ZhihaoZhang97/orgs",
"repos_url": "https://api.github.com/users/ZhihaoZhang97/repos",
"events_url": "https://api.github.com/users/ZhihaoZhang97/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhihaoZhang97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1881/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1547
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1547/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1547/comments
|
https://api.github.com/repos/ollama/ollama/issues/1547/events
|
https://github.com/ollama/ollama/issues/1547
| 2,044,089,323
|
I_kwDOJ0Z1Ps551lPr
| 1,547
|
API Llava Image Path
|
{
"login": "webmastermario",
"id": 121729061,
"node_id": "U_kgDOB0FwJQ",
"avatar_url": "https://avatars.githubusercontent.com/u/121729061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/webmastermario",
"html_url": "https://github.com/webmastermario",
"followers_url": "https://api.github.com/users/webmastermario/followers",
"following_url": "https://api.github.com/users/webmastermario/following{/other_user}",
"gists_url": "https://api.github.com/users/webmastermario/gists{/gist_id}",
"starred_url": "https://api.github.com/users/webmastermario/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/webmastermario/subscriptions",
"organizations_url": "https://api.github.com/users/webmastermario/orgs",
"repos_url": "https://api.github.com/users/webmastermario/repos",
"events_url": "https://api.github.com/users/webmastermario/events{/privacy}",
"received_events_url": "https://api.github.com/users/webmastermario/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-15T17:19:14
| 2023-12-15T17:34:55
| 2023-12-15T17:34:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
how can I use the API with the llava model? How do I add the image to the curl command, something like:
```
curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt": "Whats in the image?"
}'
```
with an `"image": "path"` field?
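For reference, a minimal sketch of how this can be done with the Ollama generate API, whose `images` field takes base64-encoded image data rather than a file path. The file `photo.png` below is a hypothetical placeholder:

```shell
# Create a tiny placeholder file for demonstration; substitute your real image.
printf 'fake-image-bytes' > photo.png

# The API expects base64-encoded image data, not a filesystem path.
# tr -d '\n' strips the line wrapping some base64 implementations add.
IMG_B64=$(base64 < photo.png | tr -d '\n')
PAYLOAD=$(printf '{"model":"llava","prompt":"Whats in the image?","images":["%s"]}' "$IMG_B64")
echo "$PAYLOAD"

# Send it to a running Ollama server (uncomment to actually call the API):
# curl http://localhost:11434/api/generate -d "$PAYLOAD"
```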
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1547/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7789
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7789/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7789/comments
|
https://api.github.com/repos/ollama/ollama/issues/7789/events
|
https://github.com/ollama/ollama/issues/7789
| 2,681,972,884
|
I_kwDOJ0Z1Ps6f26iU
| 7,789
|
How to prevent Ollama requests to change the running model on Ollama?
|
{
"login": "WoodenTiger000",
"id": 5031620,
"node_id": "MDQ6VXNlcjUwMzE2MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5031620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WoodenTiger000",
"html_url": "https://github.com/WoodenTiger000",
"followers_url": "https://api.github.com/users/WoodenTiger000/followers",
"following_url": "https://api.github.com/users/WoodenTiger000/following{/other_user}",
"gists_url": "https://api.github.com/users/WoodenTiger000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WoodenTiger000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WoodenTiger000/subscriptions",
"organizations_url": "https://api.github.com/users/WoodenTiger000/orgs",
"repos_url": "https://api.github.com/users/WoodenTiger000/repos",
"events_url": "https://api.github.com/users/WoodenTiger000/events{/privacy}",
"received_events_url": "https://api.github.com/users/WoodenTiger000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-22T06:32:53
| 2024-12-29T22:15:15
| 2024-12-29T22:15:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How can we prevent Ollama API requests from changing the currently running model?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7789/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2622
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2622/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2622/comments
|
https://api.github.com/repos/ollama/ollama/issues/2622/events
|
https://github.com/ollama/ollama/issues/2622
| 2,145,659,261
|
I_kwDOJ0Z1Ps5_5Cl9
| 2,622
|
How to set a crt file or disable the SSL verify in Windows
|
{
"login": "NeuroWhAI",
"id": 1130686,
"node_id": "MDQ6VXNlcjExMzA2ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1130686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NeuroWhAI",
"html_url": "https://github.com/NeuroWhAI",
"followers_url": "https://api.github.com/users/NeuroWhAI/followers",
"following_url": "https://api.github.com/users/NeuroWhAI/following{/other_user}",
"gists_url": "https://api.github.com/users/NeuroWhAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NeuroWhAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeuroWhAI/subscriptions",
"organizations_url": "https://api.github.com/users/NeuroWhAI/orgs",
"repos_url": "https://api.github.com/users/NeuroWhAI/repos",
"events_url": "https://api.github.com/users/NeuroWhAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/NeuroWhAI/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-02-21T02:32:25
| 2024-05-24T16:44:45
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello.
I am having a problem with a 403 response from the run command while trying to use Ollama (Windows Preview) behind a company proxy server.
There is nothing unusual left in the log, but it is clearly a proxy problem.
The http(s)_proxy environment variables are set and the corporate crt certificate is installed.
**I remember turning off SSL verification or specifying a crt file when using other programs such as pip.**
**Does Ollama support the same option?** My company does odd things to monitor HTTPS connections, so problems like this come up often :/
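As a sketch of the usual workaround: Ollama is a Go program, which honors the standard proxy environment variables and verifies TLS against the operating system's certificate store, so the common approach is to set the proxy and install the corporate root CA system-wide. The proxy host below is hypothetical:

```shell
# Route Ollama's registry traffic through the corporate proxy (hypothetical host).
export HTTP_PROXY=http://proxy.example.com:8080
export HTTPS_PROXY=http://proxy.example.com:8080
echo "proxy set to $HTTPS_PROXY"

# With the corporate root CA installed in the OS trust store, pulls should verify:
# ollama pull llama2
```

On Windows the equivalent is `setx HTTPS_PROXY http://proxy.example.com:8080` followed by restarting the Ollama app so it picks up the new environment.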
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2622/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2622/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6660
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6660/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6660/comments
|
https://api.github.com/repos/ollama/ollama/issues/6660/events
|
https://github.com/ollama/ollama/issues/6660
| 2,508,688,797
|
I_kwDOJ0Z1Ps6Vh42d
| 6,660
|
on ollama.com's profile settings page , email addr shown mangled
|
{
"login": "fxmbsw7",
"id": 39368685,
"node_id": "MDQ6VXNlcjM5MzY4Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmbsw7",
"html_url": "https://github.com/fxmbsw7",
"followers_url": "https://api.github.com/users/fxmbsw7/followers",
"following_url": "https://api.github.com/users/fxmbsw7/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmbsw7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmbsw7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmbsw7/subscriptions",
"organizations_url": "https://api.github.com/users/fxmbsw7/orgs",
"repos_url": "https://api.github.com/users/fxmbsw7/repos",
"events_url": "https://api.github.com/users/fxmbsw7/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmbsw7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 0
| 2024-09-05T20:51:57
| 2024-09-05T20:53:48
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On the profile settings page (where you edit your name and bio), just below the top header the page first shows the logged-in user's username and email address, followed by the edit fields.
My email address (Gmail) ends with a 7, but that 7 is not displayed on the page; only the address without the trailing 7 is shown.
Greets
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6660/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3318
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3318/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3318/comments
|
https://api.github.com/repos/ollama/ollama/issues/3318/events
|
https://github.com/ollama/ollama/pull/3318
| 2,204,107,381
|
PR_kwDOJ0Z1Ps5qkpRM
| 3,318
|
Update faq.md
|
{
"login": "ltrivaldi322",
"id": 125631184,
"node_id": "U_kgDOB3z60A",
"avatar_url": "https://avatars.githubusercontent.com/u/125631184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ltrivaldi322",
"html_url": "https://github.com/ltrivaldi322",
"followers_url": "https://api.github.com/users/ltrivaldi322/followers",
"following_url": "https://api.github.com/users/ltrivaldi322/following{/other_user}",
"gists_url": "https://api.github.com/users/ltrivaldi322/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ltrivaldi322/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ltrivaldi322/subscriptions",
"organizations_url": "https://api.github.com/users/ltrivaldi322/orgs",
"repos_url": "https://api.github.com/users/ltrivaldi322/repos",
"events_url": "https://api.github.com/users/ltrivaldi322/events{/privacy}",
"received_events_url": "https://api.github.com/users/ltrivaldi322/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-24T00:17:12
| 2024-03-24T00:25:18
| 2024-03-24T00:25:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3318",
"html_url": "https://github.com/ollama/ollama/pull/3318",
"diff_url": "https://github.com/ollama/ollama/pull/3318.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3318.patch",
"merged_at": null
}
|
Use the right config option
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3318/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5964
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5964/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5964/comments
|
https://api.github.com/repos/ollama/ollama/issues/5964/events
|
https://github.com/ollama/ollama/pull/5964
| 2,431,159,586
|
PR_kwDOJ0Z1Ps52hhlZ
| 5,964
|
Fix typo and improve readability
|
{
"login": "eust-w",
"id": 39115651,
"node_id": "MDQ6VXNlcjM5MTE1NjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/39115651?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eust-w",
"html_url": "https://github.com/eust-w",
"followers_url": "https://api.github.com/users/eust-w/followers",
"following_url": "https://api.github.com/users/eust-w/following{/other_user}",
"gists_url": "https://api.github.com/users/eust-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eust-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eust-w/subscriptions",
"organizations_url": "https://api.github.com/users/eust-w/orgs",
"repos_url": "https://api.github.com/users/eust-w/repos",
"events_url": "https://api.github.com/users/eust-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/eust-w/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-07-26T00:13:38
| 2024-08-14T00:54:20
| 2024-08-14T00:54:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5964",
"html_url": "https://github.com/ollama/ollama/pull/5964",
"diff_url": "https://github.com/ollama/ollama/pull/5964.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5964.patch",
"merged_at": "2024-08-14T00:54:20"
}
|
* Rename updatAvailableMenuID to updateAvailableMenuID
* Replace unused cmd parameter with _ in RunServer function
* Fix typos in comments
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5964/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2618
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2618/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2618/comments
|
https://api.github.com/repos/ollama/ollama/issues/2618/events
|
https://github.com/ollama/ollama/pull/2618
| 2,145,128,986
|
PR_kwDOJ0Z1Ps5nb4dg
| 2,618
|
Update llama.cpp submodule to `66c1968f7`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-20T19:35:40
| 2024-02-20T22:42:32
| 2024-02-20T22:42:31
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2618",
"html_url": "https://github.com/ollama/ollama/pull/2618",
"diff_url": "https://github.com/ollama/ollama/pull/2618.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2618.patch",
"merged_at": "2024-02-20T22:42:31"
}
|
This updates the llama.cpp submodule to a commit that supports the newer embedding models. A few updates:
- The previous patch 02 was merged 🎉
- NUMA is now an enum
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2618/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3254
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3254/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3254/comments
|
https://api.github.com/repos/ollama/ollama/issues/3254/events
|
https://github.com/ollama/ollama/issues/3254
| 2,195,323,215
|
I_kwDOJ0Z1Ps6C2flP
| 3,254
|
I can't run llama2 model in my computer
|
{
"login": "Francois-lenne",
"id": 114836746,
"node_id": "U_kgDOBthFCg",
"avatar_url": "https://avatars.githubusercontent.com/u/114836746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Francois-lenne",
"html_url": "https://github.com/Francois-lenne",
"followers_url": "https://api.github.com/users/Francois-lenne/followers",
"following_url": "https://api.github.com/users/Francois-lenne/following{/other_user}",
"gists_url": "https://api.github.com/users/Francois-lenne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Francois-lenne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Francois-lenne/subscriptions",
"organizations_url": "https://api.github.com/users/Francois-lenne/orgs",
"repos_url": "https://api.github.com/users/Francois-lenne/repos",
"events_url": "https://api.github.com/users/Francois-lenne/events{/privacy}",
"received_events_url": "https://api.github.com/users/Francois-lenne/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-03-19T15:41:14
| 2024-03-19T23:01:21
| 2024-03-19T23:00:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I type this in my CLI:
`ollama run llama2`
I get this error:
`Error: error loading model /Users/francoislenne/.ollama/models/blobs/sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1f`
I tried deleting and re-pulling the llama2 model, but I still get the same error.
When I type this in my CLI:
`ollama list`
I get this:
```
NAME ID SIZE MODIFIED
llama2:latest 78e26419b446 3.8 GB 6 minutes ago
```
### What did you expect to see?
That llama2 works like it did a few days ago, so I can ask it questions.
### Steps to reproduce
ollama serve
ollama pull llama2
ollama run llama2
### Are there any recent changes that introduced the issue?
none
### OS
macOS
### Architecture
arm64
### Platform
WSL
### Ollama version
0.1.28
### GPU
Apple
### GPU info
_No response_
### CPU
Apple
### Other software
_No response_
|
{
"login": "Francois-lenne",
"id": 114836746,
"node_id": "U_kgDOBthFCg",
"avatar_url": "https://avatars.githubusercontent.com/u/114836746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Francois-lenne",
"html_url": "https://github.com/Francois-lenne",
"followers_url": "https://api.github.com/users/Francois-lenne/followers",
"following_url": "https://api.github.com/users/Francois-lenne/following{/other_user}",
"gists_url": "https://api.github.com/users/Francois-lenne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Francois-lenne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Francois-lenne/subscriptions",
"organizations_url": "https://api.github.com/users/Francois-lenne/orgs",
"repos_url": "https://api.github.com/users/Francois-lenne/repos",
"events_url": "https://api.github.com/users/Francois-lenne/events{/privacy}",
"received_events_url": "https://api.github.com/users/Francois-lenne/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3254/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4388
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4388/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4388/comments
|
https://api.github.com/repos/ollama/ollama/issues/4388/events
|
https://github.com/ollama/ollama/issues/4388
| 2,291,709,100
|
I_kwDOJ0Z1Ps6ImLSs
| 4,388
|
Accept or Ignore additional headers in OpenAI compatible endpoints
|
{
"login": "UdaraJay",
"id": 1122227,
"node_id": "MDQ6VXNlcjExMjIyMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1122227?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/UdaraJay",
"html_url": "https://github.com/UdaraJay",
"followers_url": "https://api.github.com/users/UdaraJay/followers",
"following_url": "https://api.github.com/users/UdaraJay/following{/other_user}",
"gists_url": "https://api.github.com/users/UdaraJay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/UdaraJay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/UdaraJay/subscriptions",
"organizations_url": "https://api.github.com/users/UdaraJay/orgs",
"repos_url": "https://api.github.com/users/UdaraJay/repos",
"events_url": "https://api.github.com/users/UdaraJay/events{/privacy}",
"received_events_url": "https://api.github.com/users/UdaraJay/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-05-13T03:10:34
| 2024-06-06T22:19:05
| 2024-06-06T22:19:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The OpenAI JavaScript SDK adds some `x-stainless-*` headers to API calls, which cause preflight checks to fail against Ollama's API when swapping the baseUrl for Ollama's `v1/chat/completions` endpoint.
```
Access to fetch at 'http://localhost:11434/v1/chat/completions' from origin 'http://localhost' has been blocked by CORS policy: Request header field x-stainless-os is not allowed by Access-Control-Allow-Headers in preflight response.
```
It would be great if CORS could be configured in more detail when running the server, or if these additional headers were ignored or accepted, since they can't be disabled in the SDK without modifying the library.
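One possible client-side workaround (a sketch, not an Ollama or OpenAI SDK feature) is to put a small forwarding proxy in front of Ollama that drops the `x-stainless-*` headers before forwarding; the filtering step could look like this (the helper name `strip_stainless_headers` is hypothetical):

```python
def strip_stainless_headers(headers):
    """Drop the x-stainless-* headers the OpenAI JS SDK adds, so a
    forwarding proxy sends only headers Ollama's CORS check accepts."""
    return {k: v for k, v in headers.items()
            if not k.lower().startswith("x-stainless-")}

incoming = {
    "Content-Type": "application/json",
    "x-stainless-os": "MacOS",
    "x-stainless-lang": "js",
}
print(strip_stainless_headers(incoming))  # {'Content-Type': 'application/json'}
```

A same-origin proxy also sidesteps the browser preflight entirely, since the cross-origin hop then happens server-to-server where CORS does not apply.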
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4388/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6885
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6885/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6885/comments
|
https://api.github.com/repos/ollama/ollama/issues/6885/events
|
https://github.com/ollama/ollama/issues/6885
| 2,537,472,316
|
I_kwDOJ0Z1Ps6XPsE8
| 6,885
|
Please support FreeBSD
|
{
"login": "yurivict",
"id": 271906,
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yurivict",
"html_url": "https://github.com/yurivict",
"followers_url": "https://api.github.com/users/yurivict/followers",
"following_url": "https://api.github.com/users/yurivict/following{/other_user}",
"gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yurivict/subscriptions",
"organizations_url": "https://api.github.com/users/yurivict/orgs",
"repos_url": "https://api.github.com/users/yurivict/repos",
"events_url": "https://api.github.com/users/yurivict/events{/privacy}",
"received_events_url": "https://api.github.com/users/yurivict/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-19T22:32:42
| 2024-09-20T17:59:14
| 2024-09-20T17:59:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
We have the FreeBSD port for ollama version 0.3.6: https://cgit.freebsd.org/ports/tree/misc/ollama
However, later versions fail to compile because of this extensive patch that one user submitted: https://cgit.freebsd.org/ports/tree/misc/ollama/files/patch-FreeBSD-compatibility
FreeBSD is very similar to Linux, and in general Ollama should just compile there, except for some bits covered by the above patch.
Could you please integrate these bits into ollama so that it would build out-of-the-box on FreeBSD?
Thank you,
Yuri
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6885/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6885/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/441
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/441/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/441/comments
|
https://api.github.com/repos/ollama/ollama/issues/441/events
|
https://github.com/ollama/ollama/pull/441
| 1,872,558,414
|
PR_kwDOJ0Z1Ps5ZFM4e
| 441
|
GGUF support
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-08-29T22:02:08
| 2023-09-07T17:55:38
| 2023-09-07T17:55:37
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/441",
"html_url": "https://github.com/ollama/ollama/pull/441",
"diff_url": "https://github.com/ollama/ollama/pull/441.diff",
"patch_url": "https://github.com/ollama/ollama/pull/441.patch",
"merged_at": "2023-09-07T17:55:37"
}
|
This change adds support for running GGUF models which are currently in beta with llama.cpp. We will continue to run GGML models and this transition will be seamless to users.
- Adds a llama.cpp mainline submodule which runs `GGUF` models
- Dynamically select the right runner for the model type
- Moved some code to different files
```
./ollama run gguf-codellama hello world
This is your first interaction with me. I am a bot, and I am created by you. Please ask me any questions you would like answered.
```
As mentioned in #423
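The "dynamically select the right runner" step described above can be sketched as follows (a simplified illustration, not the actual Ollama code; the only format fact assumed is that GGUF files begin with the ASCII magic bytes `GGUF`):

```python
def pick_runner(header: bytes) -> str:
    """Dispatch to a runner based on the model file's leading magic bytes."""
    if header[:4] == b"GGUF":   # GGUF files start with this magic
        return "gguf-runner"
    return "ggml-runner"        # otherwise fall back to the legacy GGML runner

print(pick_runner(b"GGUF\x03\x00\x00\x00"))  # gguf-runner
print(pick_runner(b"\x00\x00\x00\x00"))      # ggml-runner
```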
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/441/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5720
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5720/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5720/comments
|
https://api.github.com/repos/ollama/ollama/issues/5720/events
|
https://github.com/ollama/ollama/issues/5720
| 2,410,526,967
|
I_kwDOJ0Z1Ps6Prbj3
| 5,720
|
ollama-docker-app using 100% without reason in idle state
|
{
"login": "jan-panoch",
"id": 34071544,
"node_id": "MDQ6VXNlcjM0MDcxNTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/34071544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jan-panoch",
"html_url": "https://github.com/jan-panoch",
"followers_url": "https://api.github.com/users/jan-panoch/followers",
"following_url": "https://api.github.com/users/jan-panoch/following{/other_user}",
"gists_url": "https://api.github.com/users/jan-panoch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jan-panoch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jan-panoch/subscriptions",
"organizations_url": "https://api.github.com/users/jan-panoch/orgs",
"repos_url": "https://api.github.com/users/jan-panoch/repos",
"events_url": "https://api.github.com/users/jan-panoch/events{/privacy}",
"received_events_url": "https://api.github.com/users/jan-panoch/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-16T08:20:55
| 2024-07-23T00:22:45
| 2024-07-23T00:22:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
We are running Ollama with Docker, and the container ollama-docker-app-1 occasionally consumes 100% CPU while idle, for no apparent reason. Not always, but occasionally; a restart helps.
The Ollama Docker stack is started using the Docker Compose file https://github.com/valiantlynx/ollama-docker/blob/main/docker-compose-ollama-gpu.yaml
When I look into the container ollama-docker-app-1, I see 2 Python processes using 100% CPU (ids 13 and 43). Here is pstree:
```
uvicorn(1)─┬─python(7)
├─python(8)─┬─{python}(17)
│ ├─{python}(18)
│ ├─{python}(20)
│ └─{python}(21)
├─python(13)─┬─{python}(15)
│ ├─{python}(16)
│ ├─{python}(19)
│ └─{python}(43)
└─{uvicorn}(9)
```
Here are the cmdlines of the offending processes:
```
root@ce73cb00b056:/code# cat /proc/13/cmdline
/usr/local/bin/python/usr/local/lib/python3.11/site-packages/debugpy/adapter--for-server60015--host0.0.0.0--port5678--server-access-token69fbba1e901ac3b269a1f62151fd6306108d2e5cc5300616cd4574282cfd4a56root@ce73cb00b056:/code# cat /proc/43/cmdline
/usr/local/bin/python/usr/local/lib/python3.11/site-packages/debugpy/adapter--for-server60015--host0.0.0.0--port5678--server-access-token69fbba1e901ac3b269a1f62151fd6306108d2e5cc5300616cd4574282
```
Here is strace:
```
root@ce73cb00b056:/code# strace -p 13
strace: Process 13 attached
futex(0x5604f9e11fc0, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY^Cstrace: Process 13 detached
<detached ...>
```
and
```
root@ce73cb00b056:/code# strace -p 43
strace: Process 43 attached
recvfrom(9, "", 1, 0, NULL, NULL) = 0
recvfrom(9, "", 1, 0, NULL, NULL) = 0
recvfrom(9, "", 1, 0, NULL, NULL) = 0
recvfrom(9, "", 1, 0, NULL, NULL) = 0
recvfrom(9, "", 1, 0, NULL, NULL) = 0
recvfrom(9, "", 1, 0, NULL, NULL) = 0
recvfrom(9, "", 1, 0, NULL, NULL) = 0
recvfrom(9, "", 1, 0, NULL, NULL) = 0
recvfrom(9, "", 1, 0, NULL, NULL) = 0
....
....
....
....
...until break
```
Any ideas?
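One observation on the strace output: `recvfrom()` returning 0 means the peer has closed the socket (EOF). A read loop that keeps calling `recvfrom()` without treating a 0-byte read as EOF will spin at 100% CPU, which matches the symptom. A minimal sketch of that bug pattern (illustrative only, not debugpy's actual code):

```python
def read_all(recv):
    """Read until EOF. The `if not chunk: break` check is the line whose
    absence would turn this loop into a 100%-CPU busy loop after the
    peer closes the connection."""
    data = b""
    while True:
        chunk = recv(1)
        if not chunk:   # 0-byte read == peer closed the connection
            break
        data += chunk
    return data

# Simulate a socket whose peer has already closed: recv always returns b"".
print(read_all(lambda n: b""))  # b''
```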
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.5
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5720/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6273
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6273/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6273/comments
|
https://api.github.com/repos/ollama/ollama/issues/6273/events
|
https://github.com/ollama/ollama/issues/6273
| 2,457,079,355
|
I_kwDOJ0Z1Ps6SdA47
| 6,273
|
unsupported content type: unknown
|
{
"login": "little1d",
"id": 115958756,
"node_id": "U_kgDOBulj5A",
"avatar_url": "https://avatars.githubusercontent.com/u/115958756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/little1d",
"html_url": "https://github.com/little1d",
"followers_url": "https://api.github.com/users/little1d/followers",
"following_url": "https://api.github.com/users/little1d/following{/other_user}",
"gists_url": "https://api.github.com/users/little1d/gists{/gist_id}",
"starred_url": "https://api.github.com/users/little1d/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/little1d/subscriptions",
"organizations_url": "https://api.github.com/users/little1d/orgs",
"repos_url": "https://api.github.com/users/little1d/repos",
"events_url": "https://api.github.com/users/little1d/events{/privacy}",
"received_events_url": "https://api.github.com/users/little1d/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-08-09T04:30:27
| 2024-08-14T20:47:16
| 2024-08-14T20:47:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Can Ollama create a model from a safetensors file?
I ran this command, and the error says "unsupported content type: unknown". I have tried the llama3.1 model and qwen2-0.5b, with the same outcome.
**`command`**
ollama create mymodel2 -f ./Modelfile

**`model file`**

**`Modelfile`**
`FROM ./model/llama.safetensors`
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.4
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6273/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/8044
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8044/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8044/comments
|
https://api.github.com/repos/ollama/ollama/issues/8044/events
|
https://github.com/ollama/ollama/issues/8044
| 2,732,674,276
|
I_kwDOJ0Z1Ps6i4Uzk
| 8,044
|
I can't use llama3.2 after download.Error: llama runner process has terminated: exit status 0xc0000409
|
{
"login": "Hastersun",
"id": 78581699,
"node_id": "MDQ6VXNlcjc4NTgxNjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/78581699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hastersun",
"html_url": "https://github.com/Hastersun",
"followers_url": "https://api.github.com/users/Hastersun/followers",
"following_url": "https://api.github.com/users/Hastersun/following{/other_user}",
"gists_url": "https://api.github.com/users/Hastersun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hastersun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hastersun/subscriptions",
"organizations_url": "https://api.github.com/users/Hastersun/orgs",
"repos_url": "https://api.github.com/users/Hastersun/repos",
"events_url": "https://api.github.com/users/Hastersun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hastersun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-12-11T11:27:36
| 2024-12-14T16:35:35
| 2024-12-14T16:35:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Python is installed.
Version is 0.1.48
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8044/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8683
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8683/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8683/comments
|
https://api.github.com/repos/ollama/ollama/issues/8683/events
|
https://github.com/ollama/ollama/issues/8683
| 2,819,701,999
|
I_kwDOJ0Z1Ps6oETzv
| 8,683
|
Support release build without AVX
|
{
"login": "yoonsio",
"id": 24367477,
"node_id": "MDQ6VXNlcjI0MzY3NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/24367477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoonsio",
"html_url": "https://github.com/yoonsio",
"followers_url": "https://api.github.com/users/yoonsio/followers",
"following_url": "https://api.github.com/users/yoonsio/following{/other_user}",
"gists_url": "https://api.github.com/users/yoonsio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoonsio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoonsio/subscriptions",
"organizations_url": "https://api.github.com/users/yoonsio/orgs",
"repos_url": "https://api.github.com/users/yoonsio/repos",
"events_url": "https://api.github.com/users/yoonsio/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoonsio/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2025-01-30T01:34:51
| 2025-01-30T02:13:47
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Release image fails to detect the GPU when running on a CPU that does not support AVX.
Please add a non-AVX release build to the release pipeline.
```
msg="Dynamic LLM libraries" runners="[cpu_avx cpu cpu_avx2]"
```
A custom image can be built by overriding `CUSTOM_CPU_FLAGS`.
#### Example:
```
docker build --platform linux/amd64 --build-arg VERSION=noavx --build-arg CUSTOM_CPU_FLAGS= -t ollama/ollama:noavx -f Dockerfile .
```
#### Relevant issue:
* https://github.com/ollama/ollama/issues/2187
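For reference, the AVX capability these runner variants depend on can be read from the flags line of `/proc/cpuinfo` on Linux; a minimal sketch (the helper `cpu_has_avx` is illustrative, not part of Ollama):

```python
def cpu_has_avx(cpuinfo_text: str) -> bool:
    """Return True if the 'flags' line of /proc/cpuinfo lists 'avx'."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "avx" in line.split()
    return False

sample = "processor\t: 0\nflags\t\t: fpu vme sse sse2 avx avx2\n"
print(cpu_has_avx(sample))  # True
```

On a real host, pass `open('/proc/cpuinfo').read()`; a False result indicates the non-AVX build would be needed.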
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8683/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8683/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3876
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3876/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3876/comments
|
https://api.github.com/repos/ollama/ollama/issues/3876/events
|
https://github.com/ollama/ollama/issues/3876
| 2,261,414,283
|
I_kwDOJ0Z1Ps6GynGL
| 3,876
|
serving llama3 does not work
|
{
"login": "lambdaofgod",
"id": 3647577,
"node_id": "MDQ6VXNlcjM2NDc1Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3647577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lambdaofgod",
"html_url": "https://github.com/lambdaofgod",
"followers_url": "https://api.github.com/users/lambdaofgod/followers",
"following_url": "https://api.github.com/users/lambdaofgod/following{/other_user}",
"gists_url": "https://api.github.com/users/lambdaofgod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lambdaofgod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lambdaofgod/subscriptions",
"organizations_url": "https://api.github.com/users/lambdaofgod/orgs",
"repos_url": "https://api.github.com/users/lambdaofgod/repos",
"events_url": "https://api.github.com/users/lambdaofgod/events{/privacy}",
"received_events_url": "https://api.github.com/users/lambdaofgod/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-04-24T14:15:31
| 2024-12-26T07:36:09
| 2024-04-25T09:02:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am able to run Llama 3 (`ollama run llama3`), but when I send a request to the server I get
>{"error":"model 'llama3' not found, try pulling it first"}
This is in spite of `ollama list` detecting the model.
Specifically I ran
```
curl $LLAMA_URL -d '{
"model": "llama3",
"messages": [
{ "role": "user", "content": "why is the sky blue?" }
]
}'
```
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.1.32
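
For reference, this error is often caused by the CLI and the curl request talking to different server instances; a minimal sketch of checking for that (assumes the default port, and that `$LLAMA_URL` points at `127.0.0.1:11434`):

```shell
# OLLAMA_HOST defaults to 127.0.0.1:11434 when unset; `ollama run` and
# `ollama list` use it, while curl goes wherever $LLAMA_URL points.
cli_host="${OLLAMA_HOST:-127.0.0.1:11434}"
curl_host="127.0.0.1:11434"   # assumed target of $LLAMA_URL
if [ "$cli_host" = "$curl_host" ]; then
  verdict="same instance"
else
  verdict="mismatch: pull the model against $curl_host"
fi
echo "$verdict"
```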
|
{
"login": "lambdaofgod",
"id": 3647577,
"node_id": "MDQ6VXNlcjM2NDc1Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3647577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lambdaofgod",
"html_url": "https://github.com/lambdaofgod",
"followers_url": "https://api.github.com/users/lambdaofgod/followers",
"following_url": "https://api.github.com/users/lambdaofgod/following{/other_user}",
"gists_url": "https://api.github.com/users/lambdaofgod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lambdaofgod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lambdaofgod/subscriptions",
"organizations_url": "https://api.github.com/users/lambdaofgod/orgs",
"repos_url": "https://api.github.com/users/lambdaofgod/repos",
"events_url": "https://api.github.com/users/lambdaofgod/events{/privacy}",
"received_events_url": "https://api.github.com/users/lambdaofgod/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3876/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3876/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/990
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/990/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/990/comments
|
https://api.github.com/repos/ollama/ollama/issues/990/events
|
https://github.com/ollama/ollama/issues/990
| 1,976,963,640
|
I_kwDOJ0Z1Ps511hI4
| 990
|
TPU backend support
|
{
"login": "coolrazor007",
"id": 62222426,
"node_id": "MDQ6VXNlcjYyMjIyNDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/62222426?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coolrazor007",
"html_url": "https://github.com/coolrazor007",
"followers_url": "https://api.github.com/users/coolrazor007/followers",
"following_url": "https://api.github.com/users/coolrazor007/following{/other_user}",
"gists_url": "https://api.github.com/users/coolrazor007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coolrazor007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coolrazor007/subscriptions",
"organizations_url": "https://api.github.com/users/coolrazor007/orgs",
"repos_url": "https://api.github.com/users/coolrazor007/repos",
"events_url": "https://api.github.com/users/coolrazor007/events{/privacy}",
"received_events_url": "https://api.github.com/users/coolrazor007/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 19
| 2023-11-03T21:39:11
| 2024-12-23T00:57:37
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Would love to see Ollama run on a TPU not just GPU. Has this been done by anyone already?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/990/reactions",
"total_count": 26,
"+1": 26,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/990/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/580
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/580/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/580/comments
|
https://api.github.com/repos/ollama/ollama/issues/580/events
|
https://github.com/ollama/ollama/pull/580
| 1,909,612,642
|
PR_kwDOJ0Z1Ps5bB0OW
| 580
|
refactor and add other platforms
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-22T23:16:38
| 2023-09-23T13:42:41
| 2023-09-23T13:42:41
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/580",
"html_url": "https://github.com/ollama/ollama/pull/580",
"diff_url": "https://github.com/ollama/ollama/pull/580.diff",
"patch_url": "https://github.com/ollama/ollama/pull/580.patch",
"merged_at": "2023-09-23T13:42:41"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/580/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4422
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4422/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4422/comments
|
https://api.github.com/repos/ollama/ollama/issues/4422/events
|
https://github.com/ollama/ollama/pull/4422
| 2,294,659,259
|
PR_kwDOJ0Z1Ps5vW661
| 4,422
|
add yi-1.5 example to model library
|
{
"login": "Yimi81",
"id": 66633207,
"node_id": "MDQ6VXNlcjY2NjMzMjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/66633207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yimi81",
"html_url": "https://github.com/Yimi81",
"followers_url": "https://api.github.com/users/Yimi81/followers",
"following_url": "https://api.github.com/users/Yimi81/following{/other_user}",
"gists_url": "https://api.github.com/users/Yimi81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yimi81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yimi81/subscriptions",
"organizations_url": "https://api.github.com/users/Yimi81/orgs",
"repos_url": "https://api.github.com/users/Yimi81/repos",
"events_url": "https://api.github.com/users/Yimi81/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yimi81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-05-14T07:35:23
| 2024-11-21T08:47:10
| 2024-11-21T08:47:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4422",
"html_url": "https://github.com/ollama/ollama/pull/4422",
"diff_url": "https://github.com/ollama/ollama/pull/4422.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4422.patch",
"merged_at": null
}
|
We hope the open-source community can be promptly informed that Ollama supports the Yi-1.5 series. We have updated the list of example models in README.md. Thank you for your time. @jmorganca
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4422/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/902
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/902/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/902/comments
|
https://api.github.com/repos/ollama/ollama/issues/902/events
|
https://github.com/ollama/ollama/issues/902
| 1,960,645,151
|
I_kwDOJ0Z1Ps503RIf
| 902
|
Support more params when ollama run
|
{
"login": "UICJohn",
"id": 4167985,
"node_id": "MDQ6VXNlcjQxNjc5ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4167985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/UICJohn",
"html_url": "https://github.com/UICJohn",
"followers_url": "https://api.github.com/users/UICJohn/followers",
"following_url": "https://api.github.com/users/UICJohn/following{/other_user}",
"gists_url": "https://api.github.com/users/UICJohn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/UICJohn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/UICJohn/subscriptions",
"organizations_url": "https://api.github.com/users/UICJohn/orgs",
"repos_url": "https://api.github.com/users/UICJohn/repos",
"events_url": "https://api.github.com/users/UICJohn/events{/privacy}",
"received_events_url": "https://api.github.com/users/UICJohn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-10-25T06:32:04
| 2024-01-16T22:29:27
| 2024-01-16T22:29:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi there,
Thanks for all you have done.
Just wondering: is there any plan to support more params/options when running an Ollama model?
For example, `--rope-freq-scale`, so that we can run:
`ollama run xxxx --rope-freq-scale 0.125`
I can see there is an `Options` map in `api.GenerateRequest`, but it is not used when running generation.
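
Until such flags exist, the per-request `options` field of the HTTP API is the usual workaround. A hedged sketch that only builds the payload locally, without a running server; `rope_frequency_scale` follows the Modelfile parameter naming, so verify it against your Ollama version:

```shell
# Build the request body locally; sending it would look like:
#   curl http://localhost:11434/api/generate -d "$payload"
# "rope_frequency_scale" is the assumed Modelfile-style option name.
payload='{"model":"xxxx","prompt":"hello","options":{"rope_frequency_scale":0.125}}'
echo "$payload"
```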
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/902/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8302
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8302/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8302/comments
|
https://api.github.com/repos/ollama/ollama/issues/8302/events
|
https://github.com/ollama/ollama/issues/8302
| 2,768,584,621
|
I_kwDOJ0Z1Ps6lBT-t
| 8,302
|
no compatible GPUs were discovered
|
{
"login": "fatebugs",
"id": 65278566,
"node_id": "MDQ6VXNlcjY1Mjc4NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/65278566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fatebugs",
"html_url": "https://github.com/fatebugs",
"followers_url": "https://api.github.com/users/fatebugs/followers",
"following_url": "https://api.github.com/users/fatebugs/following{/other_user}",
"gists_url": "https://api.github.com/users/fatebugs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fatebugs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fatebugs/subscriptions",
"organizations_url": "https://api.github.com/users/fatebugs/orgs",
"repos_url": "https://api.github.com/users/fatebugs/repos",
"events_url": "https://api.github.com/users/fatebugs/events{/privacy}",
"received_events_url": "https://api.github.com/users/fatebugs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-04T07:37:06
| 2025-01-24T09:50:20
| 2025-01-24T09:50:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Due to the limitations of the latest version of macOS, I am unable to use the ollama.app client and can only use Docker as the runtime for Ollama. When running Ollama in Docker on a Mac mini M4, it reports that the GPU cannot be found. How should I solve this problem?

### OS
macOS, Docker
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.4-0-g2ddc32d-dirty
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8302/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8586
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8586/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8586/comments
|
https://api.github.com/repos/ollama/ollama/issues/8586/events
|
https://github.com/ollama/ollama/issues/8586
| 2,811,276,191
|
I_kwDOJ0Z1Ps6nkKuf
| 8,586
|
/v2/library/ 404 ollama -v 0.5.5
|
{
"login": "moofya",
"id": 55646892,
"node_id": "MDQ6VXNlcjU1NjQ2ODky",
"avatar_url": "https://avatars.githubusercontent.com/u/55646892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moofya",
"html_url": "https://github.com/moofya",
"followers_url": "https://api.github.com/users/moofya/followers",
"following_url": "https://api.github.com/users/moofya/following{/other_user}",
"gists_url": "https://api.github.com/users/moofya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moofya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moofya/subscriptions",
"organizations_url": "https://api.github.com/users/moofya/orgs",
"repos_url": "https://api.github.com/users/moofya/repos",
"events_url": "https://api.github.com/users/moofya/events{/privacy}",
"received_events_url": "https://api.github.com/users/moofya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2025-01-26T02:19:11
| 2025-01-26T05:38:58
| 2025-01-26T05:38:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
"https://registry.ollama.ai/v2/library/deepseek-r1/manifests/8b": dial tcp 104.21.75.227:443: i/o timeout
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "moofya",
"id": 55646892,
"node_id": "MDQ6VXNlcjU1NjQ2ODky",
"avatar_url": "https://avatars.githubusercontent.com/u/55646892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moofya",
"html_url": "https://github.com/moofya",
"followers_url": "https://api.github.com/users/moofya/followers",
"following_url": "https://api.github.com/users/moofya/following{/other_user}",
"gists_url": "https://api.github.com/users/moofya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moofya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moofya/subscriptions",
"organizations_url": "https://api.github.com/users/moofya/orgs",
"repos_url": "https://api.github.com/users/moofya/repos",
"events_url": "https://api.github.com/users/moofya/events{/privacy}",
"received_events_url": "https://api.github.com/users/moofya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8586/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/466
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/466/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/466/comments
|
https://api.github.com/repos/ollama/ollama/issues/466/events
|
https://github.com/ollama/ollama/pull/466
| 1,879,256,450
|
PR_kwDOJ0Z1Ps5Zbj0D
| 466
|
template extra args
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-03T22:34:05
| 2024-04-14T22:45:31
| 2024-04-14T22:45:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/466",
"html_url": "https://github.com/ollama/ollama/pull/466",
"diff_url": "https://github.com/ollama/ollama/pull/466.diff",
"patch_url": "https://github.com/ollama/ollama/pull/466.patch",
"merged_at": null
}
|
User-defined arguments to the template, making things like infilling easier:
```
FROM codellama:7b-code
TEMPLATE "<PRE> {{ .Args.Prefix }} <SUF> {{- .Args.Suffix }} <MID>"
```
Request:
```
$ curl -s localhost:11434/api/generate -d '{"model":"codellama-infill","args":{"Prefix":"def remove_non_ascii(s: str) -> str:\n\t\"\"\"","Suffix":"return result"}}' | jq -r -j 'select(.response != null) | .response'
Remove non-ASCII characters from a string.
Parameters
----------
s : str
The input string.
Returns
-------
str
The output string.
"""
result = ""
for char in s:
if ord(char) < 128:
result += char
```
Or messages:
```
FROM llama2:7b
TEMPLATE """{{ range .Args.Messages }}[INST] {{ if .System -}}
<<SYS>>{{ .System }}<</SYS>>
{{- end }}
{{ .Request }} [/INST] {{ .Response }} {{ end }}"""
```
```
$ curl -s localhost:11434/api/generate -d '{"model":"llama2","args": {"Messages": [{"System":"you are a good robot","Request":"tell me a joke","Response":"why is the sky blue?"},{"Request":"why did you say that?","Response":"i don'\''t know"},{"Request":"do it again"}]}}'
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/466/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7074
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7074/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7074/comments
|
https://api.github.com/repos/ollama/ollama/issues/7074/events
|
https://github.com/ollama/ollama/issues/7074
| 2,560,503,446
|
I_kwDOJ0Z1Ps6Yni6W
| 7,074
|
Docker image size is over a GB larger than 0.3.10
|
{
"login": "codefromthecrypt",
"id": 64215,
"node_id": "MDQ6VXNlcjY0MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/64215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codefromthecrypt",
"html_url": "https://github.com/codefromthecrypt",
"followers_url": "https://api.github.com/users/codefromthecrypt/followers",
"following_url": "https://api.github.com/users/codefromthecrypt/following{/other_user}",
"gists_url": "https://api.github.com/users/codefromthecrypt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codefromthecrypt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codefromthecrypt/subscriptions",
"organizations_url": "https://api.github.com/users/codefromthecrypt/orgs",
"repos_url": "https://api.github.com/users/codefromthecrypt/repos",
"events_url": "https://api.github.com/users/codefromthecrypt/events{/privacy}",
"received_events_url": "https://api.github.com/users/codefromthecrypt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-10-02T01:29:18
| 2024-10-07T01:13:28
| 2024-10-02T16:20:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I noticed changelog entries about reducing the Docker image size, but on darwin/arm64 the image is significantly larger than 0.3.10. This may also apply to other platforms. Could you mention whether this is accidental or intentional?
```
ollama/ollama 0.3.12 443040bf2568 6 days ago 3.22GB
ollama/ollama 0.3.11 580b37f3291e 13 days ago 3.22GB
ollama/ollama 0.3.10 a6cc2736bd5c 3 weeks ago 1.92GB
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.12
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7074/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5979
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5979/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5979/comments
|
https://api.github.com/repos/ollama/ollama/issues/5979/events
|
https://github.com/ollama/ollama/issues/5979
| 2,431,842,051
|
I_kwDOJ0Z1Ps6Q8vcD
| 5,979
|
0.2.6-rocm and above cannot be pulled with containerd on fedora
|
{
"login": "volatilemolotov",
"id": 20559691,
"node_id": "MDQ6VXNlcjIwNTU5Njkx",
"avatar_url": "https://avatars.githubusercontent.com/u/20559691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/volatilemolotov",
"html_url": "https://github.com/volatilemolotov",
"followers_url": "https://api.github.com/users/volatilemolotov/followers",
"following_url": "https://api.github.com/users/volatilemolotov/following{/other_user}",
"gists_url": "https://api.github.com/users/volatilemolotov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/volatilemolotov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/volatilemolotov/subscriptions",
"organizations_url": "https://api.github.com/users/volatilemolotov/orgs",
"repos_url": "https://api.github.com/users/volatilemolotov/repos",
"events_url": "https://api.github.com/users/volatilemolotov/events{/privacy}",
"received_events_url": "https://api.github.com/users/volatilemolotov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 8
| 2024-07-26T09:39:47
| 2024-08-01T13:25:38
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Pulling the image results in
```
Error: failed to extract layer sha256:00d2c36d84f963d50ac6a568b0be71eea96f3579770ef47c2ac3f94d4d3c346a: exit status 1: unpigz: skipping: <stdin>: corrupted -- crc32 mismatch
```
This happens for 0.2.6-rocm and later versions.
I'm not sure why it fails or what problem pigz is hitting here. I'm using k0s, which ships with its own containerd, so it is not using the Fedora-installed one.
I'm really not sure where to look for a potential cause.
### OS
Linux
### GPU
_No response_
### CPU
Intel, AMD
### Ollama version
0.3.0
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5979/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5979/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/475
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/475/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/475/comments
|
https://api.github.com/repos/ollama/ollama/issues/475/events
|
https://github.com/ollama/ollama/issues/475
| 1,883,710,052
|
I_kwDOJ0Z1Ps5wRyJk
| 475
|
Bug: Importing a local model fails on MacOS
|
{
"login": "tianxiemaochiyu",
"id": 16790771,
"node_id": "MDQ6VXNlcjE2NzkwNzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/16790771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tianxiemaochiyu",
"html_url": "https://github.com/tianxiemaochiyu",
"followers_url": "https://api.github.com/users/tianxiemaochiyu/followers",
"following_url": "https://api.github.com/users/tianxiemaochiyu/following{/other_user}",
"gists_url": "https://api.github.com/users/tianxiemaochiyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tianxiemaochiyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tianxiemaochiyu/subscriptions",
"organizations_url": "https://api.github.com/users/tianxiemaochiyu/orgs",
"repos_url": "https://api.github.com/users/tianxiemaochiyu/repos",
"events_url": "https://api.github.com/users/tianxiemaochiyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/tianxiemaochiyu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2023-09-06T10:24:32
| 2023-12-04T19:23:52
| 2023-12-04T19:23:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Importing a local model fails on macOS:
```
Parsing modelfile
Looking for model
⠋ Creating model layer Error: Invalid file magic
```
Here is the content of my Modelfile:
```
FROM ./ggml-Llama2-Chinese-13b-Chat-q4_k_m.ggmlv3.Q4_K_M.bin
TEMPLATE """
{{- if .First }}
<<SYS>>
{{ .System }}
<</SYS>>
{{- end }}
"""
```
The model file is located in the same directory as the Modelfile.
Any suggestions are welcome.
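For reference when diagnosing this kind of failure: current GGUF model files begin with the 4-byte ASCII magic `GGUF`, while older ggml variants (such as the `ggmlv3` file above) use different magics, which the importer rejects with "Invalid file magic". A minimal sketch of checking the magic before importing — the helper name `gguf_magic_ok` is mine, not part of any Ollama API:

```python
def gguf_magic_ok(path):
    """Return True if the file starts with the GGUF magic bytes.

    GGUF files open with the ASCII bytes b"GGUF"; anything else
    (including older ggml formats) will fail this check.
    """
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

If the check fails, the file likely needs to be converted to GGUF before it can be imported.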
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/475/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6091
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6091/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6091/comments
|
https://api.github.com/repos/ollama/ollama/issues/6091/events
|
https://github.com/ollama/ollama/issues/6091
| 2,439,176,404
|
I_kwDOJ0Z1Ps6RYuDU
| 6,091
|
Parallel Bug: Would rather queue than reload on another GPU
|
{
"login": "txd0213",
"id": 62833076,
"node_id": "MDQ6VXNlcjYyODMzMDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/62833076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/txd0213",
"html_url": "https://github.com/txd0213",
"followers_url": "https://api.github.com/users/txd0213/followers",
"following_url": "https://api.github.com/users/txd0213/following{/other_user}",
"gists_url": "https://api.github.com/users/txd0213/gists{/gist_id}",
"starred_url": "https://api.github.com/users/txd0213/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/txd0213/subscriptions",
"organizations_url": "https://api.github.com/users/txd0213/orgs",
"repos_url": "https://api.github.com/users/txd0213/repos",
"events_url": "https://api.github.com/users/txd0213/events{/privacy}",
"received_events_url": "https://api.github.com/users/txd0213/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-07-31T05:41:51
| 2024-08-01T22:19:54
| 2024-08-01T22:19:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**Experimental environment: 8 x A6000 GPUs**
**LLM: qwen2:7b**
**Environment variables:**
```
Environment="OLLAMA_NUM_PARALLEL=16"
Environment="OLLAMA_MAX_LOADED_MODELS=4"
```
When the concurrency is less than or equal to **4**, parallel processing works as expected. However, once it exceeds 4, Ollama does not choose to **reload the same model on another GPU**.

Although I sent 16 requests simultaneously, the graph shows that the model's actual concurrency is only 4.

### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.0
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6091/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7837
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7837/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7837/comments
|
https://api.github.com/repos/ollama/ollama/issues/7837/events
|
https://github.com/ollama/ollama/pull/7837
| 2,692,768,351
|
PR_kwDOJ0Z1Ps6DHZ8L
| 7,837
|
Export ctx, gpu, parallel parameters via /api/ps
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-11-26T01:39:39
| 2024-11-26T01:39:39
| null |
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7837",
"html_url": "https://github.com/ollama/ollama/pull/7837",
"diff_url": "https://github.com/ollama/ollama/pull/7837.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7837.patch",
"merged_at": null
}
|
Allow clients to query some model run-time parameters.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7837/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4683
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4683/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4683/comments
|
https://api.github.com/repos/ollama/ollama/issues/4683/events
|
https://github.com/ollama/ollama/pull/4683
| 2,321,534,896
|
PR_kwDOJ0Z1Ps5wylbl
| 4,683
|
Fix nvidia detection in install script
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-28T16:57:49
| 2024-05-28T16:59:37
| 2024-05-28T16:59:37
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4683",
"html_url": "https://github.com/ollama/ollama/pull/4683",
"diff_url": "https://github.com/ollama/ollama/pull/4683.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4683.patch",
"merged_at": "2024-05-28T16:59:37"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4683/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3394
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3394/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3394/comments
|
https://api.github.com/repos/ollama/ollama/issues/3394/events
|
https://github.com/ollama/ollama/issues/3394
| 2,214,089,998
|
I_kwDOJ0Z1Ps6D-FUO
| 3,394
|
Add support for MobileVLM
|
{
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/followers",
"following_url": "https://api.github.com/users/ddpasa/following{/other_user}",
"gists_url": "https://api.github.com/users/ddpasa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddpasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddpasa/subscriptions",
"organizations_url": "https://api.github.com/users/ddpasa/orgs",
"repos_url": "https://api.github.com/users/ddpasa/repos",
"events_url": "https://api.github.com/users/ddpasa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddpasa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-03-28T20:34:09
| 2024-03-29T02:31:17
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
MobileVLM v2 is a very promising multimodal model that is already supported by llama.cpp. Here are the 3 versions:
1.7b: https://huggingface.co/mtgv/MobileVLM_V2-1.7B
3b: https://huggingface.co/mtgv/MobileVLM_V2-3B
7b: https://huggingface.co/mtgv/MobileVLM_V2-7B
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3394/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/432
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/432/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/432/comments
|
https://api.github.com/repos/ollama/ollama/issues/432/events
|
https://github.com/ollama/ollama/issues/432
| 1,868,497,921
|
I_kwDOJ0Z1Ps5vXwQB
| 432
|
Which files to copy in order to use model with Ollama on other computer?
|
{
"login": "ctsrc",
"id": 36199671,
"node_id": "MDQ6VXNlcjM2MTk5Njcx",
"avatar_url": "https://avatars.githubusercontent.com/u/36199671?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ctsrc",
"html_url": "https://github.com/ctsrc",
"followers_url": "https://api.github.com/users/ctsrc/followers",
"following_url": "https://api.github.com/users/ctsrc/following{/other_user}",
"gists_url": "https://api.github.com/users/ctsrc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ctsrc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ctsrc/subscriptions",
"organizations_url": "https://api.github.com/users/ctsrc/orgs",
"repos_url": "https://api.github.com/users/ctsrc/repos",
"events_url": "https://api.github.com/users/ctsrc/events{/privacy}",
"received_events_url": "https://api.github.com/users/ctsrc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 8
| 2023-08-27T13:31:51
| 2024-01-07T19:31:58
| 2023-08-30T00:39:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have two computers with Ollama 0.0.16 installed on both.
I downloaded many gigabytes of models on one of them, and then I copied my `~/.ollama/` directory with all of its data from one computer to the other.
However, Ollama on the other computer still wants to connect to the internet when I try to run one of the models I copied.
What other files do I need to copy from one computer to the other, in order for Ollama on the other computer to find the models?
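For reference, on recent versions the model store lives under `~/.ollama/models`, split into a `manifests` tree and a content-addressed `blobs` tree; both must be copied for an offline lookup to succeed (the 0.0.16 layout may differ). A minimal sketch under that assumed layout — the helper name is mine:

```python
import shutil
from pathlib import Path

def copy_model_store(src_home, dst_home):
    """Copy the Ollama model store between two home directories.

    Assumes the default layout: <home>/.ollama/models/{manifests,blobs}.
    Both subtrees are needed: manifests map model names to blob digests,
    and blobs hold the actual layer data.
    """
    src = Path(src_home) / ".ollama" / "models"
    dst = Path(dst_home) / ".ollama" / "models"
    for sub in ("manifests", "blobs"):
        if (src / sub).exists():
            shutil.copytree(src / sub, dst / sub, dirs_exist_ok=True)
    return dst
```

`dirs_exist_ok=True` (Python 3.8+) lets the copy merge into an existing store on the destination machine.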
|
{
"login": "ctsrc",
"id": 36199671,
"node_id": "MDQ6VXNlcjM2MTk5Njcx",
"avatar_url": "https://avatars.githubusercontent.com/u/36199671?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ctsrc",
"html_url": "https://github.com/ctsrc",
"followers_url": "https://api.github.com/users/ctsrc/followers",
"following_url": "https://api.github.com/users/ctsrc/following{/other_user}",
"gists_url": "https://api.github.com/users/ctsrc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ctsrc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ctsrc/subscriptions",
"organizations_url": "https://api.github.com/users/ctsrc/orgs",
"repos_url": "https://api.github.com/users/ctsrc/repos",
"events_url": "https://api.github.com/users/ctsrc/events{/privacy}",
"received_events_url": "https://api.github.com/users/ctsrc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/432/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3123
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3123/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3123/comments
|
https://api.github.com/repos/ollama/ollama/issues/3123/events
|
https://github.com/ollama/ollama/issues/3123
| 2,184,772,436
|
I_kwDOJ0Z1Ps6COPtU
| 3,123
|
Windows build script refinements
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-03-13T19:48:31
| 2024-04-28T19:10:06
| 2024-04-28T19:10:06
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
- Soften the developer shell requirement if possible so we can build as long as the compiler is in the path
- Modularize the generate script using same variables as linux so we can build the various runners discretely
- Modularize the outer build script so we can parallelize the overall build
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3123/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3123/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2527
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2527/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2527/comments
|
https://api.github.com/repos/ollama/ollama/issues/2527/events
|
https://github.com/ollama/ollama/issues/2527
| 2,137,562,624
|
I_kwDOJ0Z1Ps5_aJ4A
| 2,527
|
Windows GPU libraries compiled with AVX2 instead of AVX
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-15T22:37:13
| 2024-02-19T21:13:06
| 2024-02-19T21:13:06
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Even though we're setting:
```
generating config with: cmake -S ../llama.cpp -B ../llama.cpp/build/windows/amd64/cuda_v11.3 -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -A x64 -DCMAKE_VERBOSE_MAKEFILE=on -DLLAMA_SERVER_VERBOSE=on -DLLAMA_CUBLAS=ON -DLLAMA_AVX=on -DCMAKE_CUDA_ARCHITECTURES=50;52;61;70;75;80
```
The actual compile lines look like this:
```
ClCompile:
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64\CL.exe /c /I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" /Zi /W3 /WX- /diagnostics:column /O2 /Ob1 /D WIN32 /D _WINDOWS /D NDEBUG /D GGML_USE_CUBLAS /D GGML_CUDA_DMMV_X=32 /D GGML_CUDA_MMV_Y=1 /D K_QUANTS_PER_ITERATION=2 /D GGML_CUDA_PEER_MAX_BATCH_SIZE=128 /D _CRT_SECURE_NO_WARNINGS /D _XOPEN_SOURCE=600 /D "CMAKE_INTDIR=\"RelWithDebInfo\"" /D _MBCS /Gm- /EHsc /MD /GS /arch:AVX2 /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /GR /Fo"build_info.dir\RelWithDebInfo\\" /Fd"build_info.dir\RelWithDebInfo\build_info.pdb" /external:W3 /Gd /TP /errorReport:queue "C:\Users\danie\code\ollama\llm\llama.cpp\common\build-info.cpp"
```
The `/arch:AVX2` shouldn't be there.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2527/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8184
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8184/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8184/comments
|
https://api.github.com/repos/ollama/ollama/issues/8184/events
|
https://github.com/ollama/ollama/issues/8184
| 2,752,912,841
|
I_kwDOJ0Z1Ps6kFh3J
| 8,184
|
Falcon3 10B in 1.58bit format
|
{
"login": "thiswillbeyourgithub",
"id": 26625900,
"node_id": "MDQ6VXNlcjI2NjI1OTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/26625900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thiswillbeyourgithub",
"html_url": "https://github.com/thiswillbeyourgithub",
"followers_url": "https://api.github.com/users/thiswillbeyourgithub/followers",
"following_url": "https://api.github.com/users/thiswillbeyourgithub/following{/other_user}",
"gists_url": "https://api.github.com/users/thiswillbeyourgithub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thiswillbeyourgithub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thiswillbeyourgithub/subscriptions",
"organizations_url": "https://api.github.com/users/thiswillbeyourgithub/orgs",
"repos_url": "https://api.github.com/users/thiswillbeyourgithub/repos",
"events_url": "https://api.github.com/users/thiswillbeyourgithub/events{/privacy}",
"received_events_url": "https://api.github.com/users/thiswillbeyourgithub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-20T15:02:17
| 2025-01-13T01:43:30
| 2025-01-13T01:43:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm somewhat surprised that all variants of Falcon3 were added very quickly but not the 1.58-bit one, and nobody seems to have asked for it.
The full 10B model is only 3.99 GB in 1.58-bit format according to [their hf repo](https://huggingface.co/tiiuae/Falcon3-10B-Instruct-1.58bit/tree/main), so I think it would be interesting to play with!
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8184/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2240
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2240/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2240/comments
|
https://api.github.com/repos/ollama/ollama/issues/2240/events
|
https://github.com/ollama/ollama/issues/2240
| 2,104,243,882
|
I_kwDOJ0Z1Ps59bDaq
| 2,240
|
How to limit output token generated: Phi model
|
{
"login": "bm777",
"id": 29865600,
"node_id": "MDQ6VXNlcjI5ODY1NjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/29865600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bm777",
"html_url": "https://github.com/bm777",
"followers_url": "https://api.github.com/users/bm777/followers",
"following_url": "https://api.github.com/users/bm777/following{/other_user}",
"gists_url": "https://api.github.com/users/bm777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bm777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bm777/subscriptions",
"organizations_url": "https://api.github.com/users/bm777/orgs",
"repos_url": "https://api.github.com/users/bm777/repos",
"events_url": "https://api.github.com/users/bm777/events{/privacy}",
"received_events_url": "https://api.github.com/users/bm777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-01-28T16:28:07
| 2024-12-20T23:40:28
| 2024-12-20T23:40:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
From a given context + query, the model generates the answer well, but the output is very long -> around `2000 chars`.
Is there any way to set something like `max_output_tokens=200`, as in the pplx or OpenAI APIs?
This is my prompt template:
```js
_template = "You are an assistant that delivers short answers to the user inquiry from the provided context.\n\n
context: {conditioned_passages}\n\n
query: {query}
answer:"
```
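For reference, Ollama exposes a `num_predict` parameter that caps the number of generated tokens, roughly analogous to `max_output_tokens`. A minimal sketch baking it into a Modelfile (the model name here is illustrative):

```
# Modelfile sketch: num_predict limits the number of generated tokens
FROM phi
PARAMETER num_predict 200
```

The same option can also be passed per request, e.g. `"options": {"num_predict": 200}` in the body of a `/api/generate` call.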
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2240/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2240/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/81
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/81/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/81/comments
|
https://api.github.com/repos/ollama/ollama/issues/81/events
|
https://github.com/ollama/ollama/pull/81
| 1,805,608,793
|
PR_kwDOJ0Z1Ps5Vjzp7
| 81
|
fix race
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-14T21:36:02
| 2023-07-14T22:12:13
| 2023-07-14T22:12:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/81",
"html_url": "https://github.com/ollama/ollama/pull/81",
"diff_url": "https://github.com/ollama/ollama/pull/81.diff",
"patch_url": "https://github.com/ollama/ollama/pull/81.patch",
"merged_at": "2023-07-14T22:12:01"
}
|
Block on write, which only returns when the channel is closed. This is contrary to the previous arrangement, where the handler could return before the stream finished writing. That can lead to the client receiving unexpected responses (since the request has already been handled) or, worst case, a nil-pointer dereference as the stream tries to flush a nil writer.
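The pattern can be sketched as follows: the consumer ranges over the channel and only unblocks once the producer closes it, so the function cannot return while chunks are still in flight. This is an illustrative sketch of the synchronization idea, not the actual handler code.

```go
package main

import "fmt"

// drain blocks until ch is closed, collecting every message.
// This mirrors the fix: the handler only returns once the stream
// has finished writing (i.e. the producer has closed the channel).
func drain(ch <-chan string) []string {
	var out []string
	for msg := range ch { // unblocks only after close(ch)
		out = append(out, msg)
	}
	return out
}

func main() {
	ch := make(chan string)
	go func() {
		defer close(ch)
		ch <- "chunk-1"
		ch <- "chunk-2"
	}()
	fmt.Println(drain(ch)) // prints: [chunk-1 chunk-2]
}
```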
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/81/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/81/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4824
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4824/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4824/comments
|
https://api.github.com/repos/ollama/ollama/issues/4824/events
|
https://github.com/ollama/ollama/issues/4824
| 2,334,777,318
|
I_kwDOJ0Z1Ps6LKd_m
| 4,824
|
Error: llama runner process has terminated: signal: aborted (core dumped)
|
{
"login": "ignore1999",
"id": 64943360,
"node_id": "MDQ6VXNlcjY0OTQzMzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/64943360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ignore1999",
"html_url": "https://github.com/ignore1999",
"followers_url": "https://api.github.com/users/ignore1999/followers",
"following_url": "https://api.github.com/users/ignore1999/following{/other_user}",
"gists_url": "https://api.github.com/users/ignore1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ignore1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ignore1999/subscriptions",
"organizations_url": "https://api.github.com/users/ignore1999/orgs",
"repos_url": "https://api.github.com/users/ignore1999/repos",
"events_url": "https://api.github.com/users/ignore1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/ignore1999/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-06-05T02:36:37
| 2024-08-07T10:51:28
| 2024-06-09T17:12:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I run MiniCPM-Llama3-V-2_5, I get the error "Error: llama runner process has terminated: signal: aborted (core dumped)". This happens with both version 0.1.39 and 0.1.41.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.41
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4824/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4824/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7008
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7008/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7008/comments
|
https://api.github.com/repos/ollama/ollama/issues/7008/events
|
https://github.com/ollama/ollama/issues/7008
| 2,553,626,482
|
I_kwDOJ0Z1Ps6YNT9y
| 7,008
|
/api/embed uses 512 token context window even though model was configured with 8192
|
{
"login": "khromov",
"id": 1207507,
"node_id": "MDQ6VXNlcjEyMDc1MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1207507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khromov",
"html_url": "https://github.com/khromov",
"followers_url": "https://api.github.com/users/khromov/followers",
"following_url": "https://api.github.com/users/khromov/following{/other_user}",
"gists_url": "https://api.github.com/users/khromov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khromov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khromov/subscriptions",
"organizations_url": "https://api.github.com/users/khromov/orgs",
"repos_url": "https://api.github.com/users/khromov/repos",
"events_url": "https://api.github.com/users/khromov/events{/privacy}",
"received_events_url": "https://api.github.com/users/khromov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-27T19:45:54
| 2024-10-01T23:49:50
| 2024-10-01T23:49:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm using Continue.dev and have configured the following to generate embeddings:
```json
"embeddingsProvider": {
"provider": "ollama",
"model": "mxbai-embed-large:latest"
},
```
When inspecting the model, we see context is 8192:
```
ollama show --modelfile nomic-embed-text:latest | grep num_ctx
PARAMETER num_ctx 8192
```
However, Ollama only seems to use 512 tokens; while indexing we get:
```
[GIN] 2024/09/27 - 21:40:26 | 200 | 145.149375ms | 127.0.0.1 | POST "/api/embed"
INFO [update_slots] input truncated | n_ctx=512 n_erase=258 n_keep=0 n_left=512 n_shift=256 tid="0x1ebf88f40" timestamp=1727466026
```
### OS
macOS
### GPU
Apple
### CPU
_No response_
### Ollama version
0.3.12
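As a possible workaround (assuming the embed endpoint honors per-request options the same way `/api/generate` does), `num_ctx` can be passed explicitly in the request body posted to `POST /api/embed`:

```
{
  "model": "mxbai-embed-large:latest",
  "input": "some long document text...",
  "options": { "num_ctx": 8192 }
}
```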
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7008/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1259
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1259/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1259/comments
|
https://api.github.com/repos/ollama/ollama/issues/1259/events
|
https://github.com/ollama/ollama/issues/1259
| 2,009,070,995
|
I_kwDOJ0Z1Ps53v_2T
| 1,259
|
Missing logprob
|
{
"login": "ex3ndr",
"id": 400659,
"node_id": "MDQ6VXNlcjQwMDY1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/400659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ex3ndr",
"html_url": "https://github.com/ex3ndr",
"followers_url": "https://api.github.com/users/ex3ndr/followers",
"following_url": "https://api.github.com/users/ex3ndr/following{/other_user}",
"gists_url": "https://api.github.com/users/ex3ndr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ex3ndr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ex3ndr/subscriptions",
"organizations_url": "https://api.github.com/users/ex3ndr/orgs",
"repos_url": "https://api.github.com/users/ex3ndr/repos",
"events_url": "https://api.github.com/users/ex3ndr/events{/privacy}",
"received_events_url": "https://api.github.com/users/ex3ndr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-11-24T04:18:38
| 2025-01-07T19:25:15
| 2025-01-07T19:25:15
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
For some reason, there is no way to get the logprobs of a completion to measure or visualise model performance. It would be nice to have this in order to build advanced tools for model debugging.
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1259/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1259/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8592
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8592/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8592/comments
|
https://api.github.com/repos/ollama/ollama/issues/8592/events
|
https://github.com/ollama/ollama/issues/8592
| 2,811,574,204
|
I_kwDOJ0Z1Ps6nlTe8
| 8,592
|
ollama fails to detect old models after update
|
{
"login": "nevakrien",
"id": 101988414,
"node_id": "U_kgDOBhQ4Pg",
"avatar_url": "https://avatars.githubusercontent.com/u/101988414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nevakrien",
"html_url": "https://github.com/nevakrien",
"followers_url": "https://api.github.com/users/nevakrien/followers",
"following_url": "https://api.github.com/users/nevakrien/following{/other_user}",
"gists_url": "https://api.github.com/users/nevakrien/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nevakrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nevakrien/subscriptions",
"organizations_url": "https://api.github.com/users/nevakrien/orgs",
"repos_url": "https://api.github.com/users/nevakrien/repos",
"events_url": "https://api.github.com/users/nevakrien/events{/privacy}",
"received_events_url": "https://api.github.com/users/nevakrien/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2025-01-26T13:53:38
| 2025-01-26T14:03:02
| 2025-01-26T14:03:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
My setup uses a symlink for running Ollama models, and I have over a terabyte of model weights, so if there is a way to avoid downloading everything again I would be very happy.
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.5.7
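
If the weights are already on disk, one workaround (a sketch; `/mnt/models` is a hypothetical path, substitute wherever the symlink target actually lives) is to point the documented `OLLAMA_MODELS` environment variable at the existing directory before starting the server:

```shell
# Hypothetical mount point; replace with the directory that already holds the weights.
export OLLAMA_MODELS=/mnt/models
# Restarting the server (ollama serve) should then index the existing blobs
# instead of pulling them again.
echo "$OLLAMA_MODELS"
```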
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8592/timeline
| null |
duplicate
| false
|
https://api.github.com/repos/ollama/ollama/issues/2237
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2237/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2237/comments
|
https://api.github.com/repos/ollama/ollama/issues/2237/events
|
https://github.com/ollama/ollama/issues/2237
| 2,103,893,583
|
I_kwDOJ0Z1Ps59Zt5P
| 2,237
|
:lady_beetle: Missing model description on `ifioravanti/bagel-hermes`
|
{
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/followers",
"following_url": "https://api.github.com/users/adriens/following{/other_user}",
"gists_url": "https://api.github.com/users/adriens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adriens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adriens/subscriptions",
"organizations_url": "https://api.github.com/users/adriens/orgs",
"repos_url": "https://api.github.com/users/adriens/repos",
"events_url": "https://api.github.com/users/adriens/events{/privacy}",
"received_events_url": "https://api.github.com/users/adriens/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-01-28T00:57:04
| 2024-03-12T18:37:21
| 2024-03-12T18:37:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# :grey_question: About
[`ifioravanti/bagel-hermes`](https://ollama.ai/ifioravanti/bagel-hermes) is currently missing its description:

# :pray: Action
:point_right: Please :
- [ ] Add a short description, as the other models have
- [ ] Add a long description on the model's page
# :moneybag: Benefits
- Better indexation
- Automated documentation
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2237/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5770
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5770/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5770/comments
|
https://api.github.com/repos/ollama/ollama/issues/5770/events
|
https://github.com/ollama/ollama/issues/5770
| 2,416,474,514
|
I_kwDOJ0Z1Ps6QCHmS
| 5,770
|
Can we add the new smollm models
|
{
"login": "psikosen",
"id": 5045515,
"node_id": "MDQ6VXNlcjUwNDU1MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5045515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psikosen",
"html_url": "https://github.com/psikosen",
"followers_url": "https://api.github.com/users/psikosen/followers",
"following_url": "https://api.github.com/users/psikosen/following{/other_user}",
"gists_url": "https://api.github.com/users/psikosen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psikosen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psikosen/subscriptions",
"organizations_url": "https://api.github.com/users/psikosen/orgs",
"repos_url": "https://api.github.com/users/psikosen/repos",
"events_url": "https://api.github.com/users/psikosen/events{/privacy}",
"received_events_url": "https://api.github.com/users/psikosen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-18T13:57:13
| 2024-07-23T18:16:38
| 2024-07-23T18:16:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
These small models would be a valuable addition to Ollama.
https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5770/timeline
| null |
completed
| false
|