Column schema (33 columns; ⌀ marks a nullable field):

- url: string (length 51–54)
- repository_url: string (1 distinct value)
- labels_url: string (length 65–68)
- comments_url: string (length 60–63)
- events_url: string (length 58–61)
- html_url: string (length 39–44)
- id: int64 (range 1.78B–2.82B)
- node_id: string (length 18–19)
- number: int64 (range 1–8.69k)
- title: string (length 1–382)
- user: dict
- labels: list (length 0–5)
- state: string (2 distinct values)
- locked: bool (1 class)
- assignee: dict
- assignees: list (length 0–2)
- milestone: null
- comments: int64 (range 0–323)
- created_at: timestamp[s]
- updated_at: timestamp[s]
- closed_at: timestamp[s]
- author_association: string (4 distinct values)
- sub_issues_summary: dict
- active_lock_reason: null
- draft: bool (2 classes)
- pull_request: dict
- body: string (length 2–118k, nullable ⌀)
- closed_by: dict
- reactions: dict
- timeline_url: string (length 60–63)
- performed_via_github_app: null
- state_reason: string (4 distinct values)
- is_pull_request: bool (2 classes)

Each record below is one row of this table, with fields delimited by `|` in the column order above.
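For orientation, here is a minimal sketch of how a dump with this schema is typically loaded and filtered using the Hugging Face `datasets` library; the dataset ID below is a placeholder assumption, since this dump does not name its Hub repository.

```python
from datasets import load_dataset

# Placeholder dataset ID -- this dump does not name the actual Hub repo.
ds = load_dataset("your-namespace/ollama-github-issues", split="train")

# Each record mirrors the 33-column schema above.
pulls = ds.filter(lambda row: row["is_pull_request"])
print(len(ds), "records,", len(pulls), "of which are pull requests")

# Inspect one record's title and state, matching the rows shown below.
print(ds[0]["title"], "-", ds[0]["state"])
```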
https://api.github.com/repos/ollama/ollama/issues/6063
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6063/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6063/comments
|
https://api.github.com/repos/ollama/ollama/issues/6063/events
|
https://github.com/ollama/ollama/pull/6063
| 2,436,478,903
|
PR_kwDOJ0Z1Ps52zc2p
| 6,063
|
convert: import support for command-r models from safetensors
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-29T22:17:37
| 2025-01-16T00:31:24
| 2025-01-16T00:31:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6063",
"html_url": "https://github.com/ollama/ollama/pull/6063",
"diff_url": "https://github.com/ollama/ollama/pull/6063.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6063.patch",
"merged_at": "2025-01-16T00:31:22"
}
|
Working for:
https://huggingface.co/CohereForAI/aya-23-8B
https://huggingface.co/CohereForAI/c4ai-command-r-v01
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6063/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6118
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6118/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6118/comments
|
https://api.github.com/repos/ollama/ollama/issues/6118/events
|
https://github.com/ollama/ollama/issues/6118
| 2,442,433,063
|
I_kwDOJ0Z1Ps6RlJIn
| 6,118
|
panic: runtime error: integer divide by zero in memory.go on bad model create
|
{
"login": "SongXiaoMao",
"id": 55074934,
"node_id": "MDQ6VXNlcjU1MDc0OTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/55074934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SongXiaoMao",
"html_url": "https://github.com/SongXiaoMao",
"followers_url": "https://api.github.com/users/SongXiaoMao/followers",
"following_url": "https://api.github.com/users/SongXiaoMao/following{/other_user}",
"gists_url": "https://api.github.com/users/SongXiaoMao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SongXiaoMao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SongXiaoMao/subscriptions",
"organizations_url": "https://api.github.com/users/SongXiaoMao/orgs",
"repos_url": "https://api.github.com/users/SongXiaoMao/repos",
"events_url": "https://api.github.com/users/SongXiaoMao/events{/privacy}",
"received_events_url": "https://api.github.com/users/SongXiaoMao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 9
| 2024-08-01T13:15:23
| 2024-08-09T21:21:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I installed Ollama today; the system is Ubuntu 22.04. I downloaded llama3.1-405b-Q2.gguf, which comes as 9 split files in total. `ollama create llama -f Modelfile.txt` completes successfully.
The model shows up normally in `ollama list`, but an error occurs when running it: Error: Post "http://127.0.0.1:11434/api/chat": EOF
Opening that link gives a 404 page not found error, while opening http://127.0.0.1:11434 shows that Ollama is running.
The model file size is 151GB; my computer has 128GB of memory and 96GB of video memory. Please help me, thank you.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.3.2
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6118/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/6118/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7412
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7412/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7412/comments
|
https://api.github.com/repos/ollama/ollama/issues/7412/events
|
https://github.com/ollama/ollama/pull/7412
| 2,622,649,442
|
PR_kwDOJ0Z1Ps6AUBcS
| 7,412
|
Implement tokenize and de-tokenize endpoints
|
{
"login": "jrmo14",
"id": 16376030,
"node_id": "MDQ6VXNlcjE2Mzc2MDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/16376030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jrmo14",
"html_url": "https://github.com/jrmo14",
"followers_url": "https://api.github.com/users/jrmo14/followers",
"following_url": "https://api.github.com/users/jrmo14/following{/other_user}",
"gists_url": "https://api.github.com/users/jrmo14/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jrmo14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jrmo14/subscriptions",
"organizations_url": "https://api.github.com/users/jrmo14/orgs",
"repos_url": "https://api.github.com/users/jrmo14/repos",
"events_url": "https://api.github.com/users/jrmo14/events{/privacy}",
"received_events_url": "https://api.github.com/users/jrmo14/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-10-30T01:38:06
| 2024-12-10T01:01:03
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7412",
"html_url": "https://github.com/ollama/ollama/pull/7412",
"diff_url": "https://github.com/ollama/ollama/pull/7412.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7412.patch",
"merged_at": null
}
|
Implement endpoints to tokenize (`/api/tokenize`) and detokenize (`/api/detokenize`) text.
Closes #3582
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7412/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7412/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/301
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/301/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/301/comments
|
https://api.github.com/repos/ollama/ollama/issues/301/events
|
https://github.com/ollama/ollama/pull/301
| 1,838,617,020
|
PR_kwDOJ0Z1Ps5XSoAJ
| 301
|
pass flags to `serve` to allow setting allowed-origins + host and port
|
{
"login": "cmiller01",
"id": 3050939,
"node_id": "MDQ6VXNlcjMwNTA5Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3050939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmiller01",
"html_url": "https://github.com/cmiller01",
"followers_url": "https://api.github.com/users/cmiller01/followers",
"following_url": "https://api.github.com/users/cmiller01/following{/other_user}",
"gists_url": "https://api.github.com/users/cmiller01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmiller01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmiller01/subscriptions",
"organizations_url": "https://api.github.com/users/cmiller01/orgs",
"repos_url": "https://api.github.com/users/cmiller01/repos",
"events_url": "https://api.github.com/users/cmiller01/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmiller01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-08-07T03:41:01
| 2023-08-08T14:55:57
| 2023-08-08T14:41:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/301",
"html_url": "https://github.com/ollama/ollama/pull/301",
"diff_url": "https://github.com/ollama/ollama/pull/301.diff",
"patch_url": "https://github.com/ollama/ollama/pull/301.patch",
"merged_at": "2023-08-08T14:41:43"
}
|
Resolves https://github.com/jmorganca/ollama/issues/300 and https://github.com/jmorganca/ollama/issues/282.
Example usage:
```
ollama serve --port 9999 --allowed-origins "http://foo.example.com,http://192.0.0.1"
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/301/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8650
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8650/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8650/comments
|
https://api.github.com/repos/ollama/ollama/issues/8650/events
|
https://github.com/ollama/ollama/issues/8650
| 2,817,224,735
|
I_kwDOJ0Z1Ps6n63Af
| 8,650
|
Request Support for Running Inference Through LM Studio
|
{
"login": "joseph777111",
"id": 80947356,
"node_id": "MDQ6VXNlcjgwOTQ3MzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/80947356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joseph777111",
"html_url": "https://github.com/joseph777111",
"followers_url": "https://api.github.com/users/joseph777111/followers",
"following_url": "https://api.github.com/users/joseph777111/following{/other_user}",
"gists_url": "https://api.github.com/users/joseph777111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joseph777111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joseph777111/subscriptions",
"organizations_url": "https://api.github.com/users/joseph777111/orgs",
"repos_url": "https://api.github.com/users/joseph777111/repos",
"events_url": "https://api.github.com/users/joseph777111/events{/privacy}",
"received_events_url": "https://api.github.com/users/joseph777111/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-29T04:41:45
| 2025-01-29T23:32:52
| 2025-01-29T23:32:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://lmstudio.ai
https://github.com/lmstudio-ai/lms

LM Studio is one of the most popular locally run inference platforms, and it has its own inference server. Much like Ollama, LM Studio uses llama.cpp for inference - but it also supports MLX.

Please kindly add support to use Goose with LM Studio as the inference backend. Thanks in advance! 🙏
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8650/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7859
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7859/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7859/comments
|
https://api.github.com/repos/ollama/ollama/issues/7859/events
|
https://github.com/ollama/ollama/issues/7859
| 2,698,287,004
|
I_kwDOJ0Z1Ps6g1Jec
| 7,859
|
Hymba-1.5B-family of models
|
{
"login": "jruokola",
"id": 90187138,
"node_id": "MDQ6VXNlcjkwMTg3MTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/90187138?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jruokola",
"html_url": "https://github.com/jruokola",
"followers_url": "https://api.github.com/users/jruokola/followers",
"following_url": "https://api.github.com/users/jruokola/following{/other_user}",
"gists_url": "https://api.github.com/users/jruokola/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jruokola/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jruokola/subscriptions",
"organizations_url": "https://api.github.com/users/jruokola/orgs",
"repos_url": "https://api.github.com/users/jruokola/repos",
"events_url": "https://api.github.com/users/jruokola/events{/privacy}",
"received_events_url": "https://api.github.com/users/jruokola/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-27T11:45:33
| 2024-12-13T11:37:38
| 2024-12-13T11:37:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/nvidia/Hymba-1.5B-Instruct
https://huggingface.co/nvidia/Hymba-1.5B-Base
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7859/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7859/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/3084
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3084/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3084/comments
|
https://api.github.com/repos/ollama/ollama/issues/3084/events
|
https://github.com/ollama/ollama/pull/3084
| 2,182,552,422
|
PR_kwDOJ0Z1Ps5pba5V
| 3,084
|
update convert
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-12T19:48:35
| 2024-06-05T20:12:05
| 2024-03-27T21:02:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3084",
"html_url": "https://github.com/ollama/ollama/pull/3084",
"diff_url": "https://github.com/ollama/ollama/pull/3084.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3084.patch",
"merged_at": null
}
|
The output of convert remains exactly the same.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3084/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6016
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6016/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6016/comments
|
https://api.github.com/repos/ollama/ollama/issues/6016/events
|
https://github.com/ollama/ollama/issues/6016
| 2,433,436,116
|
I_kwDOJ0Z1Ps6RC0nU
| 6,016
|
Gemma2 and Mistral-nemo not running on ollama
|
{
"login": "gus147",
"id": 176750230,
"node_id": "U_kgDOCoj-lg",
"avatar_url": "https://avatars.githubusercontent.com/u/176750230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gus147",
"html_url": "https://github.com/gus147",
"followers_url": "https://api.github.com/users/gus147/followers",
"following_url": "https://api.github.com/users/gus147/following{/other_user}",
"gists_url": "https://api.github.com/users/gus147/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gus147/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gus147/subscriptions",
"organizations_url": "https://api.github.com/users/gus147/orgs",
"repos_url": "https://api.github.com/users/gus147/repos",
"events_url": "https://api.github.com/users/gus147/events{/privacy}",
"received_events_url": "https://api.github.com/users/gus147/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-27T11:25:40
| 2024-07-28T00:14:12
| 2024-07-28T00:14:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
[147@Clevo ~]$ ollama run mistral-nemo:12b-instruct-2407-fp16
Error: exception error loading model hyperparameters: invalid n_rot: 128, expected 160
[147@Clevo ~]$ ollama run gemma2:27b-instruct-q8_0
Error: exception error loading model architecture: unknown model architecture: 'gemma2'
Can someone explain why this is happening?
This is the first time models have failed to run on my Ollama v0.1.30.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.30
|
{
"login": "gus147",
"id": 176750230,
"node_id": "U_kgDOCoj-lg",
"avatar_url": "https://avatars.githubusercontent.com/u/176750230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gus147",
"html_url": "https://github.com/gus147",
"followers_url": "https://api.github.com/users/gus147/followers",
"following_url": "https://api.github.com/users/gus147/following{/other_user}",
"gists_url": "https://api.github.com/users/gus147/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gus147/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gus147/subscriptions",
"organizations_url": "https://api.github.com/users/gus147/orgs",
"repos_url": "https://api.github.com/users/gus147/repos",
"events_url": "https://api.github.com/users/gus147/events{/privacy}",
"received_events_url": "https://api.github.com/users/gus147/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6016/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6707
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6707/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6707/comments
|
https://api.github.com/repos/ollama/ollama/issues/6707/events
|
https://github.com/ollama/ollama/issues/6707
| 2,513,271,479
|
I_kwDOJ0Z1Ps6VzXq3
| 6,707
|
Generate endpoint intermittently misses final token before done
|
{
"login": "tarbard",
"id": 2259265,
"node_id": "MDQ6VXNlcjIyNTkyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2259265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tarbard",
"html_url": "https://github.com/tarbard",
"followers_url": "https://api.github.com/users/tarbard/followers",
"following_url": "https://api.github.com/users/tarbard/following{/other_user}",
"gists_url": "https://api.github.com/users/tarbard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tarbard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tarbard/subscriptions",
"organizations_url": "https://api.github.com/users/tarbard/orgs",
"repos_url": "https://api.github.com/users/tarbard/repos",
"events_url": "https://api.github.com/users/tarbard/events{/privacy}",
"received_events_url": "https://api.github.com/users/tarbard/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-09-09T08:25:48
| 2024-09-14T05:05:42
| 2024-09-12T00:20:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using the generate endpoint it intermittently misses the last token right before the "done" message
```JSON
{"model":"adrienbrault/nous-hermes2theta-llama3-8b:q8_0","created_at":"2024-09-09T08:04:47.463348938Z","response":" Bear","done":false}
{"model":"adrienbrault/nous-hermes2theta-llama3-8b:q8_0","created_at":"2024-09-09T08:04:47.475993178Z","response":",","done":false}
{"model":"adrienbrault/nous-hermes2theta-llama3-8b:q8_0","created_at":"2024-09-09T08:04:47.488651949Z","response":" Elephant","done":false}
{"model":"adrienbrault/nous-hermes2theta-llama3-8b:q8_0","created_at":"2024-09-09T08:04:47.50131158Z","response":",","done":false}
{"model":"adrienbrault/nous-hermes2theta-llama3-8b:q8_0","created_at":"2024-09-09T08:04:47.51400078Z","response":" Gor","done":false}
{"model":"adrienbrault/nous-hermes2theta-llama3-8b:q8_0","created_at":"2024-09-09T08:04:47.539481043Z","response":"","done":true,"done_reason":"stop","total_duration":8790953777,"load_duration":8080650494,"
```
In the above example, the token that should complete "Gorilla" is not emitted before the done response, so we just get "Gor".
Here's the curl command to reproduce this:
```sh
curl -H 'Host: 127.0.0.1:11434' -H 'Content-Type: application/json' -H 'Connection: Keep-Alive' --compressed -H 'Accept-Language: en-GB,*' -H 'User-Agent: Mozilla/5.0' -X POST http://127.0.0.1:11434/api/generate -d '{"model": "adrienbrault/nous-hermes2theta-llama3-8b:q8_0", "prompt": "\n<|im_start|>user\nYou will think of a number. Then you will list that many animals. Do not write any other words only the animal. Be terse in your response.<|im_end|>\n<|im_start|>assistant", "raw": true, "stream": true, "keep_alive": -1, "options": {"seed": 99, "num_predict": 1024, "num_ctx": 4096, "stop": ["<end>", "user:", "assistant:"], "num_batch": 1, "temperature": 0.5, "top_k": 40, "top_p": 0.9}}'
```
I have only seen this with one model so far (adrienbrault/nous-hermes2theta-llama3-8b:q8_0), so the model may well be a factor; however, I don't get this problem with the chat endpoint for that model, only with the generate endpoint. I'm using raw mode and stream=true.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.9
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6707/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3998
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3998/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3998/comments
|
https://api.github.com/repos/ollama/ollama/issues/3998/events
|
https://github.com/ollama/ollama/issues/3998
| 2,267,413,766
|
I_kwDOJ0Z1Ps6HJf0G
| 3,998
|
Phi-3-mini-128k no load
|
{
"login": "bambooqj",
"id": 20792621,
"node_id": "MDQ6VXNlcjIwNzkyNjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/20792621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bambooqj",
"html_url": "https://github.com/bambooqj",
"followers_url": "https://api.github.com/users/bambooqj/followers",
"following_url": "https://api.github.com/users/bambooqj/following{/other_user}",
"gists_url": "https://api.github.com/users/bambooqj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bambooqj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bambooqj/subscriptions",
"organizations_url": "https://api.github.com/users/bambooqj/orgs",
"repos_url": "https://api.github.com/users/bambooqj/repos",
"events_url": "https://api.github.com/users/bambooqj/events{/privacy}",
"received_events_url": "https://api.github.com/users/bambooqj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-04-28T07:54:40
| 2024-07-05T04:05:50
| 2024-07-05T04:05:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Model download: `https://huggingface.co/PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed`
Modelfile:
```
FROM ./Phi-3-mini-128k-instruct.Q4_K_M.gguf
PARAMETER num_ctx 65536
PARAMETER num_keep 4
PARAMETER stop <|user|>
PARAMETER stop <|assistant|>
PARAMETER stop <|system|>
PARAMETER stop <|end|>
PARAMETER stop <|endoftext|>
TEMPLATE """
{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|end|>
{{ end }}<|assistant|>
{{ .Response }}<|end|>
"""
```
error: `Error: llama runner process no longer running: 3221226505`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3998/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7964
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7964/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7964/comments
|
https://api.github.com/repos/ollama/ollama/issues/7964/events
|
https://github.com/ollama/ollama/pull/7964
| 2,722,202,817
|
PR_kwDOJ0Z1Ps6ER9F8
| 7,964
|
Fix message truncation logic and ensure at least one system message i…
|
{
"login": "youyou301",
"id": 162660372,
"node_id": "U_kgDOCbIAFA",
"avatar_url": "https://avatars.githubusercontent.com/u/162660372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/youyou301",
"html_url": "https://github.com/youyou301",
"followers_url": "https://api.github.com/users/youyou301/followers",
"following_url": "https://api.github.com/users/youyou301/following{/other_user}",
"gists_url": "https://api.github.com/users/youyou301/gists{/gist_id}",
"starred_url": "https://api.github.com/users/youyou301/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/youyou301/subscriptions",
"organizations_url": "https://api.github.com/users/youyou301/orgs",
"repos_url": "https://api.github.com/users/youyou301/repos",
"events_url": "https://api.github.com/users/youyou301/events{/privacy}",
"received_events_url": "https://api.github.com/users/youyou301/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-12-06T06:48:21
| 2024-12-10T01:06:54
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7964",
"html_url": "https://github.com/ollama/ollama/pull/7964",
"diff_url": "https://github.com/ollama/ollama/pull/7964.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7964.patch",
"merged_at": null
}
|
### Changes:
- Fixed the message truncation logic to ensure that at least one `system` message is included.
- Adjusted the message handling to always include the last message, even if the context window is exceeded.
### Motivation:
- This change ensures that the context window truncation logic respects the system messages and guarantees that at least one `system` message is always included in the prompt.
### Related Issue(s):
- https://github.com/ollama/ollama/issues/6176
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7964/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3597
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3597/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3597/comments
|
https://api.github.com/repos/ollama/ollama/issues/3597/events
|
https://github.com/ollama/ollama/issues/3597
| 2,237,774,788
|
I_kwDOJ0Z1Ps6FYbvE
| 3,597
|
There is a Massive Text Embedding Benchmark (MTEB) leaderboard; could you support those models?
|
{
"login": "doriszhang2020",
"id": 104901283,
"node_id": "U_kgDOBkCqow",
"avatar_url": "https://avatars.githubusercontent.com/u/104901283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doriszhang2020",
"html_url": "https://github.com/doriszhang2020",
"followers_url": "https://api.github.com/users/doriszhang2020/followers",
"following_url": "https://api.github.com/users/doriszhang2020/following{/other_user}",
"gists_url": "https://api.github.com/users/doriszhang2020/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doriszhang2020/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doriszhang2020/subscriptions",
"organizations_url": "https://api.github.com/users/doriszhang2020/orgs",
"repos_url": "https://api.github.com/users/doriszhang2020/repos",
"events_url": "https://api.github.com/users/doriszhang2020/events{/privacy}",
"received_events_url": "https://api.github.com/users/doriszhang2020/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-11T13:49:28
| 2024-04-15T19:14:10
| 2024-04-15T19:14:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?


https://huggingface.co/spaces/mteb/leaderboard
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3597/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1207
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1207/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1207/comments
|
https://api.github.com/repos/ollama/ollama/issues/1207/events
|
https://github.com/ollama/ollama/issues/1207
| 2,002,340,291
|
I_kwDOJ0Z1Ps53WUnD
| 1,207
|
Is it possible to have multiple SSH keys on Linux (due to ollama running as a service)?
|
{
"login": "eramax",
"id": 542413,
"node_id": "MDQ6VXNlcjU0MjQxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/542413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eramax",
"html_url": "https://github.com/eramax",
"followers_url": "https://api.github.com/users/eramax/followers",
"following_url": "https://api.github.com/users/eramax/following{/other_user}",
"gists_url": "https://api.github.com/users/eramax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eramax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eramax/subscriptions",
"organizations_url": "https://api.github.com/users/eramax/orgs",
"repos_url": "https://api.github.com/users/eramax/repos",
"events_url": "https://api.github.com/users/eramax/events{/privacy}",
"received_events_url": "https://api.github.com/users/eramax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2023-11-20T14:33:47
| 2023-12-05T23:24:52
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I guess there is still an issue in the push function.
This is my repo: https://ollama.ai/eramax/nous-capybara-7b-1.9
The SSH public key shown by `cat ~/.ollama/id_ed25519.pub` is already set and added to my profile.
(*`md` is the directory.)
```bash
➜ md llm -v
ollama version 0.1.10
➜ md l
.0644 root root 4.8 GB Wed Nov 15 17:32:01 2023 🗋 Capybara-7B-V1.9-Q5_K_M.gguf
.0644 root root 139 B Sat Nov 18 01:31:40 2023 🗋 Modelfile
➜ md llm create eramax/nous-capybara-7b-1.9:Q5_K_M -f Modelfile
transferring context
creating model layer
creating template layer
creating parameters layer
creating config layer
using already created layer sha256:08323667b50ceb4ddf208f475b6101857c26688cf413e80329f174fe34f53e9a
using already created layer sha256:a8ac3515452d80041d2c3ed2ebf79f2b9a1ac4468e201a1b661ceb90c20c1a93
writing layer sha256:f4c99b0ffe2c4d82a82fcc83294c8603984598f5a77d2e1ddaedabc50bbf9ad6
writing layer sha256:e6d5ee0679e5d1afe5b2b66a38ebc0f8475801b210aea9734e626bb63f00f9bf
writing manifest
success
➜ md llm ls
NAME ID SIZE MODIFIED
eramax/nous-capybara-7b-1.9:Q5_K_M 6a898ba40903 5.1 GB 3 seconds ago
➜ md llm run eramax/nous-capybara-7b-1.9:Q5_K_M
>>> who are you
I am a helpful AI-powered digital assistant.
➜ md llm push eramax/nous-capybara-7b-1.9:Q5_K_M
retrieving manifest
Error: unable to push eramax/nous-capybara-7b-1.9, make sure this namespace exists and you are authorized to push to it
➜ md llm push eramax/nous-capybara-7b-1.9
retrieving manifest
couldn't retrieve manifest
Error: stat /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/eramax/nous-capybara-7b-1.9/latest: no such file or directory
➜ md llm cp eramax/nous-capybara-7b-1.9:Q5_K_M eramax/nous-capybara-7b-1.9
copied 'eramax/nous-capybara-7b-1.9:Q5_K_M' to 'eramax/nous-capybara-7b-1.9'
➜ md llm ls
NAME ID SIZE MODIFIED
eramax/nous-capybara-7b-1.9:Q5_K_M 6a898ba40903 5.1 GB 9 minutes ago
eramax/nous-capybara-7b-1.9:latest 6a898ba40903 5.1 GB 4 seconds ago
➜ md llm push eramax/nous-capybara-7b-1.9
retrieving manifest
Error: unable to push eramax/nous-capybara-7b-1.9, make sure this namespace exists and you are authorized to push to it
➜ md
```
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1207/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5349
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5349/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5349/comments
|
https://api.github.com/repos/ollama/ollama/issues/5349/events
|
https://github.com/ollama/ollama/issues/5349
| 2,379,353,269
|
I_kwDOJ0Z1Ps6N0gy1
| 5,349
|
Ollama stderr returns info logs
|
{
"login": "metaspartan",
"id": 10162347,
"node_id": "MDQ6VXNlcjEwMTYyMzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/10162347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/metaspartan",
"html_url": "https://github.com/metaspartan",
"followers_url": "https://api.github.com/users/metaspartan/followers",
"following_url": "https://api.github.com/users/metaspartan/following{/other_user}",
"gists_url": "https://api.github.com/users/metaspartan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/metaspartan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/metaspartan/subscriptions",
"organizations_url": "https://api.github.com/users/metaspartan/orgs",
"repos_url": "https://api.github.com/users/metaspartan/repos",
"events_url": "https://api.github.com/users/metaspartan/events{/privacy}",
"received_events_url": "https://api.github.com/users/metaspartan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-28T01:17:58
| 2024-06-28T01:17:58
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama outputs its regular logs to `stderr` when run through a subprocess. These logs should go to `stdout`, with only errors going through `stderr`.
This applies to all supported operating systems.
### OS
Linux, macOS, Windows, Docker, WSL2
### GPU
Nvidia, AMD, Intel, Apple
### CPU
Intel, AMD, Apple
### Ollama version
0.1.47
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5349/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5349/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4052
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4052/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4052/comments
|
https://api.github.com/repos/ollama/ollama/issues/4052/events
|
https://github.com/ollama/ollama/issues/4052
| 2,271,490,246
|
I_kwDOJ0Z1Ps6HZDDG
| 4,052
|
Unable to create gguf file for my finetuned mixtral8x7b model
|
{
"login": "Nimmalapudi-Pratyusha",
"id": 129523872,
"node_id": "U_kgDOB7hgoA",
"avatar_url": "https://avatars.githubusercontent.com/u/129523872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nimmalapudi-Pratyusha",
"html_url": "https://github.com/Nimmalapudi-Pratyusha",
"followers_url": "https://api.github.com/users/Nimmalapudi-Pratyusha/followers",
"following_url": "https://api.github.com/users/Nimmalapudi-Pratyusha/following{/other_user}",
"gists_url": "https://api.github.com/users/Nimmalapudi-Pratyusha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nimmalapudi-Pratyusha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nimmalapudi-Pratyusha/subscriptions",
"organizations_url": "https://api.github.com/users/Nimmalapudi-Pratyusha/orgs",
"repos_url": "https://api.github.com/users/Nimmalapudi-Pratyusha/repos",
"events_url": "https://api.github.com/users/Nimmalapudi-Pratyusha/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nimmalapudi-Pratyusha/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-04-30T13:31:42
| 2024-05-07T17:24:27
| 2024-05-07T17:24:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am trying to create a GGUF file for my finetuned Mixtral model, but it keeps throwing the following error:
Command: `python llm/llama.cpp/convert.py /home/raft_mixtral_2epochs_v1 --outtype q8_0 --outfile converted.bin`
Error:
```
raise FileNotFoundError(f"Can't find model in directory {path}")
FileNotFoundError: Can't find model in directory /home/raft_mixtral_2epochs_v1
```
Below are the contents of my raft_mixtral_2epochs_v1 folder:

### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.1.32
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4052/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2770
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2770/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2770/comments
|
https://api.github.com/repos/ollama/ollama/issues/2770/events
|
https://github.com/ollama/ollama/pull/2770
| 2,155,110,949
|
PR_kwDOJ0Z1Ps5n9zZc
| 2,770
|
expand user home dir in OLLAMA_MODELS
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-26T20:59:23
| 2024-11-21T18:23:47
| 2024-11-21T18:23:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2770",
"html_url": "https://github.com/ollama/ollama/pull/2770",
"diff_url": "https://github.com/ollama/ollama/pull/2770.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2770.patch",
"merged_at": null
}
|
This allows the `OLLAMA_MODELS` environment variable to contain a tilde, the same way other paths can be specified elsewhere in Ollama.
Ex: `OLLAMA_MODELS="~/models" ollama serve` now puts models in the proper location.
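For illustration, the expansion has to happen inside Ollama because the shell never expands a tilde that sits inside quotes (a minimal demo of that behavior):
```bash
echo ~/models                          # the shell expands this to /home/<user>/models
echo "~/models"                        # quoted: the tilde stays literal
OLLAMA_MODELS="~/models" ollama serve  # so ollama receives the literal string "~/models"
```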
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2770/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/391
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/391/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/391/comments
|
https://api.github.com/repos/ollama/ollama/issues/391/events
|
https://github.com/ollama/ollama/issues/391
| 1,860,369,757
|
I_kwDOJ0Z1Ps5u4v1d
| 391
|
Min device that llama 70b require?
|
{
"login": "SaraiQX",
"id": 73533505,
"node_id": "MDQ6VXNlcjczNTMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/73533505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaraiQX",
"html_url": "https://github.com/SaraiQX",
"followers_url": "https://api.github.com/users/SaraiQX/followers",
"following_url": "https://api.github.com/users/SaraiQX/following{/other_user}",
"gists_url": "https://api.github.com/users/SaraiQX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaraiQX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaraiQX/subscriptions",
"organizations_url": "https://api.github.com/users/SaraiQX/orgs",
"repos_url": "https://api.github.com/users/SaraiQX/repos",
"events_url": "https://api.github.com/users/SaraiQX/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaraiQX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
| null |
[] | null | 3
| 2023-08-22T00:48:36
| 2023-08-22T00:58:48
| 2023-08-22T00:58:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Love Ollama, which enabled my Intel Mac to run Llama 7B 😄.
Just wondering what kind of Mac is required to run Llama 2 70B?
Will an M2 Ultra with 64 GB of unified memory be enough? Thx.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/391/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7811
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7811/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7811/comments
|
https://api.github.com/repos/ollama/ollama/issues/7811/events
|
https://github.com/ollama/ollama/pull/7811
| 2,686,879,982
|
PR_kwDOJ0Z1Ps6C6-xg
| 7,811
|
Add Observability section and OpenLIT in README
|
{
"login": "patcher9",
"id": 165258753,
"node_id": "U_kgDOCdmmAQ",
"avatar_url": "https://avatars.githubusercontent.com/u/165258753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patcher9",
"html_url": "https://github.com/patcher9",
"followers_url": "https://api.github.com/users/patcher9/followers",
"following_url": "https://api.github.com/users/patcher9/following{/other_user}",
"gists_url": "https://api.github.com/users/patcher9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patcher9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patcher9/subscriptions",
"organizations_url": "https://api.github.com/users/patcher9/orgs",
"repos_url": "https://api.github.com/users/patcher9/repos",
"events_url": "https://api.github.com/users/patcher9/events{/privacy}",
"received_events_url": "https://api.github.com/users/patcher9/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-11-24T02:01:49
| 2024-11-24T02:09:10
| 2024-11-24T02:03:12
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7811",
"html_url": "https://github.com/ollama/ollama/pull/7811",
"diff_url": "https://github.com/ollama/ollama/pull/7811.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7811.patch",
"merged_at": "2024-11-24T02:03:12"
}
|
Adding OpenLIT to the README as an integration. I did not find a fitting category, so I added `Observability`.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7811/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2067
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2067/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2067/comments
|
https://api.github.com/repos/ollama/ollama/issues/2067/events
|
https://github.com/ollama/ollama/pull/2067
| 2,089,582,823
|
PR_kwDOJ0Z1Ps5kfo6D
| 2,067
|
Use `gzip` for embedded files
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-19T04:55:36
| 2024-01-19T18:23:05
| 2024-01-19T18:23:04
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2067",
"html_url": "https://github.com/ollama/ollama/pull/2067",
"diff_url": "https://github.com/ollama/ollama/pull/2067.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2067.patch",
"merged_at": "2024-01-19T18:23:04"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2067/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5998
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5998/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5998/comments
|
https://api.github.com/repos/ollama/ollama/issues/5998/events
|
https://github.com/ollama/ollama/issues/5998
| 2,432,975,911
|
I_kwDOJ0Z1Ps6RBEQn
| 5,998
|
"Error loading llama server" when using a T5ForConditionalGeneration architucture model, converted to GGUF format
|
{
"login": "iG8R",
"id": 11407417,
"node_id": "MDQ6VXNlcjExNDA3NDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/11407417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iG8R",
"html_url": "https://github.com/iG8R",
"followers_url": "https://api.github.com/users/iG8R/followers",
"following_url": "https://api.github.com/users/iG8R/following{/other_user}",
"gists_url": "https://api.github.com/users/iG8R/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iG8R/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iG8R/subscriptions",
"organizations_url": "https://api.github.com/users/iG8R/orgs",
"repos_url": "https://api.github.com/users/iG8R/repos",
"events_url": "https://api.github.com/users/iG8R/events{/privacy}",
"received_events_url": "https://api.github.com/users/iG8R/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-07-26T21:04:33
| 2024-07-26T21:04:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
With the help of https://huggingface.co/spaces/ggml-org/gguf-my-repo I made the https://huggingface.co/iG8R/t5_translate_en_ru_zh_large_1024_v2-Q8_0-GGUF model, which was successfully imported into `ollama`.
But when I try to use it, I always get the following error, while all other models work almost perfectly:
```
GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\src\llama.cpp:14882: strcmp(embd->name, "result_norm") == 0
time=2024-07-26T23:33:10.669+03:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server error"
time=2024-07-26T23:33:10.934+03:00 level=ERROR source=sched.go:443 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409"
[GIN] 2024/07/26 - 23:33:10 | 500 | 9.3372066s | 127.0.0.1 | POST "/v1/chat/completions"
```
Here is the full log:
```
time=2024-07-26T23:33:09.450+03:00 level=WARN source=memory.go:115 msg="model missing blk.0 layer size"
time=2024-07-26T23:33:09.451+03:00 level=INFO source=sched.go:701 msg="new model will fit in available VRAM in single GPU, loading" model=H:\OllamaModels\blobs\sha256-cca50b43a8d0071238d9cb22864768dec5a8146f0b9969b83e69a076e267b17e gpu=GPU-60a344b3-0290-00b9-ed05-6b799407d228 parallel=4 available=10883338240 required="702.5 MiB"
time=2024-07-26T23:33:09.451+03:00 level=WARN source=memory.go:115 msg="model missing blk.0 layer size"
time=2024-07-26T23:33:09.451+03:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[10.1 GiB]" memory.required.full="702.5 MiB" memory.required.partial="702.5 MiB" memory.required.kv="48.0 MiB" memory.required.allocations="[702.5 MiB]" memory.weights.total="48.0 MiB" memory.weights.repeating="17179869184.0 GiB" memory.weights.nonrepeating="67.5 MiB" memory.graph.full="128.0 MiB" memory.graph.partial="128.0 MiB"
time=2024-07-26T23:33:09.455+03:00 level=INFO source=server.go:383 msg="starting llama server" cmd="f:\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model H:\\OllamaModels\\blobs\\sha256-cca50b43a8d0071238d9cb22864768dec5a8146f0b9969b83e69a076e267b17e --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 25 --no-mmap --parallel 4 --port 62423"
time=2024-07-26T23:33:09.459+03:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-26T23:33:09.459+03:00 level=INFO source=server.go:583 msg="waiting for llama runner to start responding"
time=2024-07-26T23:33:09.461+03:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3440 commit="d94c6e0c" tid="17472" timestamp=1722025989
INFO [wmain] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="17472" timestamp=1722025989 total_threads=8
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="62423" tid="17472" timestamp=1722025989
llama_model_loader: loaded meta data with 33 key-value pairs and 558 tensors from H:\OllamaModels\blobs\sha256-cca50b43a8d0071238d9cb22864768dec5a8146f0b9969b83e69a076e267b17e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = t5
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = T5_Translate_En_Ru_Zh_Large_1024_V2
llama_model_loader: - kv 3: general.size_label str = 851M
llama_model_loader: - kv 4: general.license str = apache-2.0
llama_model_loader: - kv 5: general.tags arr[str,1] = ["translation"]
llama_model_loader: - kv 6: general.languages arr[str,3] = ["ru", "zh", "en"]
llama_model_loader: - kv 7: general.datasets arr[str,1] = ["ccmatrix"]
llama_model_loader: - kv 8: t5.context_length u32 = 512
llama_model_loader: - kv 9: t5.embedding_length u32 = 1024
llama_model_loader: - kv 10: t5.feed_forward_length u32 = 2816
llama_model_loader: - kv 11: t5.block_count u32 = 24
llama_model_loader: - kv 12: t5.attention.head_count u32 = 16
llama_model_loader: - kv 13: t5.attention.key_length u32 = 64
llama_model_loader: - kv 14: t5.attention.value_length u32 = 64
llama_model_loader: - kv 15: t5.attention.layer_norm_epsilon f32 = 0.000001
llama_model_loader: - kv 16: t5.attention.relative_buckets_count u32 = 32
llama_model_loader: - kv 17: t5.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 18: t5.decoder_start_token_id u32 = 0
llama_model_loader: - kv 19: general.file_type u32 = 7
llama_model_loader: - kv 20: tokenizer.ggml.model str = t5
llama_model_loader: - kv 21: tokenizer.ggml.pre str = default
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,65100] = ["<pad>", "</s>", "<unk>", ",", "▁"...
llama_model_loader: - kv 23: tokenizer.ggml.scores arr[f32,65100] = [0.000000, 0.000000, 0.000000, -3.144...
llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,65100] = [3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 25: tokenizer.ggml.add_space_prefix bool = true
llama_model_loader: - kv 26: tokenizer.ggml.remove_extra_whitespaces bool = true
llama_model_loader: - kv 27: tokenizer.ggml.precompiled_charsmap arr[u8,237561] = [0, 180, 2, 0, 0, 132, 0, 0, 0, 0, 0,...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 31: tokenizer.ggml.add_eos_token bool = true
llama_model_loader: - kv 32: general.quantization_version u32 = 2
llama_model_loader: - type f32: 122 tensors
llama_model_loader: - type f16: 2 tensors
llama_model_loader: - type q8_0: 434 tensors
time=2024-07-26T23:33:09.722+03:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 103
llm_load_vocab: token to piece cache size = 0.5577 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = t5
llm_load_print_meta: vocab type = UGM
llm_load_print_meta: n_vocab = 65100
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 512
llm_load_print_meta: n_embd = 1024
llm_load_print_meta: n_layer = 24
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 2816
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = -1
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 512
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 780M
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 850.68 M
llm_load_print_meta: model size = 862.32 MiB (8.50 BPW)
llm_load_print_meta: general.name = T5_Translate_En_Ru_Zh_Large_1024_V2
llm_load_print_meta: EOS token = 1 '</s>'
llm_load_print_meta: UNK token = 2 '<unk>'
llm_load_print_meta: PAD token = 0 '<pad>'
llm_load_print_meta: LF token = 4 '▁'
llm_load_print_meta: max token length = 48
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3080 Ti, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.44 MiB
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 25/25 layers to GPU
llm_load_tensors: CUDA_Host buffer size = 67.55 MiB
llm_load_tensors: CUDA0 buffer size = 794.79 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 768.00 MiB
llama_new_context_with_model: KV self size = 768.00 MiB, K (f16): 384.00 MiB, V (f16): 384.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 1.01 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 1046.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 291.00 MiB
llama_new_context_with_model: graph nodes = 1350
llama_new_context_with_model: graph splits = 50
GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\src\llama.cpp:14882: strcmp(embd->name, "result_norm") == 0
time=2024-07-26T23:33:10.669+03:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server error"
time=2024-07-26T23:33:10.934+03:00 level=ERROR source=sched.go:443 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409"
[GIN] 2024/07/26 - 23:33:10 | 500 | 9.3372066s | 127.0.0.1 | POST "/v1/chat/completions"
```
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.0
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5998/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/258
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/258/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/258/comments
|
https://api.github.com/repos/ollama/ollama/issues/258/events
|
https://github.com/ollama/ollama/issues/258
| 1,833,465,017
|
I_kwDOJ0Z1Ps5tSHS5
| 258
|
Ollama running in Dockerfile
|
{
"login": "osamanatouf2",
"id": 70172406,
"node_id": "MDQ6VXNlcjcwMTcyNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/70172406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osamanatouf2",
"html_url": "https://github.com/osamanatouf2",
"followers_url": "https://api.github.com/users/osamanatouf2/followers",
"following_url": "https://api.github.com/users/osamanatouf2/following{/other_user}",
"gists_url": "https://api.github.com/users/osamanatouf2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osamanatouf2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osamanatouf2/subscriptions",
"organizations_url": "https://api.github.com/users/osamanatouf2/orgs",
"repos_url": "https://api.github.com/users/osamanatouf2/repos",
"events_url": "https://api.github.com/users/osamanatouf2/events{/privacy}",
"received_events_url": "https://api.github.com/users/osamanatouf2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 14
| 2023-08-02T16:00:14
| 2023-12-12T21:49:23
| 2023-09-07T13:31:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
@jmorganca @mxyng I got `./ollama serve` to work in Docker. The only issue is that I am not able to pull down the files for other models like llama2 via the command `./ollama pull llama2`. I have tested the same configuration on Ubuntu and it works fine. Only inside Docker do I get the following issue: ```Error: Post "http://127.0.0.1:11434/api/pull": dial tcp 127.0.0.1:11434: connect: connection refused```
Here is my Dockerfile:
```
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y git
WORKDIR /home
RUN (cd /home; git clone https://github.com/jmorganca/ollama.git)
RUN apt-get install -y wget
RUN wget https://golang.org/dl/go1.20.7.linux-amd64.tar.gz -O go.tar.gz
RUN apt-get install -y gcc
RUN apt-get install -y g++
RUN tar -C /usr/local -xzf go.tar.gz
ENV PATH=$PATH:/usr/local/go/bin
RUN go version
RUN rm go.tar.gz
WORKDIR /home/ollama
RUN go build .
RUN ./ollama pull llama2
EXPOSE 11434
RUN ./ollama serve &
```
It only fails on the line `RUN ./ollama pull llama2`, and I have no clue where the issue is coming from.
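For context, `ollama pull` is only a client: it talks to a running server on `127.0.0.1:11434`, which is why it fails during `docker build` when nothing is listening. A minimal workaround sketch for a single build step (the sleep duration is a guess; a retry loop against the API port would be more robust):
```bash
# run inside one RUN instruction so server and client share the same build layer
./ollama serve &
SERVER_PID=$!
sleep 5                  # crude wait for the server to come up
./ollama pull llama2
kill "$SERVER_PID"
```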
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/258/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4753
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4753/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4753/comments
|
https://api.github.com/repos/ollama/ollama/issues/4753/events
|
https://github.com/ollama/ollama/issues/4753
| 2,328,191,658
|
I_kwDOJ0Z1Ps6KxWKq
| 4,753
|
FROM is not recognized
|
{
"login": "EugeoSynthesisThirtyTwo",
"id": 24735555,
"node_id": "MDQ6VXNlcjI0NzM1NTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/24735555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EugeoSynthesisThirtyTwo",
"html_url": "https://github.com/EugeoSynthesisThirtyTwo",
"followers_url": "https://api.github.com/users/EugeoSynthesisThirtyTwo/followers",
"following_url": "https://api.github.com/users/EugeoSynthesisThirtyTwo/following{/other_user}",
"gists_url": "https://api.github.com/users/EugeoSynthesisThirtyTwo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EugeoSynthesisThirtyTwo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EugeoSynthesisThirtyTwo/subscriptions",
"organizations_url": "https://api.github.com/users/EugeoSynthesisThirtyTwo/orgs",
"repos_url": "https://api.github.com/users/EugeoSynthesisThirtyTwo/repos",
"events_url": "https://api.github.com/users/EugeoSynthesisThirtyTwo/events{/privacy}",
"received_events_url": "https://api.github.com/users/EugeoSynthesisThirtyTwo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-31T16:24:21
| 2024-06-24T16:43:36
| 2024-06-24T16:43:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I followed the instructions to make a GGUF model work, but `FROM` doesn't work.
```
C:\Users\Armaguedin\Documents\dev\python\text-generation-webui\models>ollama
Usage:
ollama [flags]
ollama [command]
Available Commands:
serve Start ollama
create Create a model from a Modelfile
show Show information for a model
run Run a model
pull Pull a model from a registry
push Push a model to a registry
list List models
ps List running models
cp Copy a model
rm Remove a model
help Help about any command
Flags:
-h, --help help for ollama
-v, --version Show version information
Use "ollama [command] --help" for more information about a command.
C:\Users\Armaguedin\Documents\dev\python\text-generation-webui\models>FROM ./c4ai-command-r-v01-Q4_K_M.gguf
'FROM' is not recognized as an internal or external command,
operable program or batch file.
```
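For context, `FROM` is a Modelfile directive, not a shell command; it belongs in a file that is then passed to `ollama create`. A minimal sketch (the model name `commandr` is just an example):
```bash
# write a Modelfile pointing at the local GGUF file
echo "FROM ./c4ai-command-r-v01-Q4_K_M.gguf" > Modelfile
# create a local model from it, then run it
ollama create commandr -f Modelfile
ollama run commandr
```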
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.39
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4753/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6746
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6746/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6746/comments
|
https://api.github.com/repos/ollama/ollama/issues/6746/events
|
https://github.com/ollama/ollama/issues/6746
| 2,518,937,874
|
I_kwDOJ0Z1Ps6WI_ES
| 6,746
|
add support for Reflection-Llama-3.1
|
{
"login": "clipsheep6",
"id": 113185666,
"node_id": "U_kgDOBr8Tgg",
"avatar_url": "https://avatars.githubusercontent.com/u/113185666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clipsheep6",
"html_url": "https://github.com/clipsheep6",
"followers_url": "https://api.github.com/users/clipsheep6/followers",
"following_url": "https://api.github.com/users/clipsheep6/following{/other_user}",
"gists_url": "https://api.github.com/users/clipsheep6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clipsheep6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clipsheep6/subscriptions",
"organizations_url": "https://api.github.com/users/clipsheep6/orgs",
"repos_url": "https://api.github.com/users/clipsheep6/repos",
"events_url": "https://api.github.com/users/clipsheep6/events{/privacy}",
"received_events_url": "https://api.github.com/users/clipsheep6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-11T08:11:48
| 2024-09-11T23:57:44
| 2024-09-11T23:57:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Any plans to add the Reflection-Llama-3.1 model?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6746/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/480
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/480/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/480/comments
|
https://api.github.com/repos/ollama/ollama/issues/480/events
|
https://github.com/ollama/ollama/issues/480
| 1,884,842,197
|
I_kwDOJ0Z1Ps5wWGjV
| 480
|
Build failure with v0.0.18
|
{
"login": "p-linnane",
"id": 105994585,
"node_id": "U_kgDOBlFZWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/105994585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/p-linnane",
"html_url": "https://github.com/p-linnane",
"followers_url": "https://api.github.com/users/p-linnane/followers",
"following_url": "https://api.github.com/users/p-linnane/following{/other_user}",
"gists_url": "https://api.github.com/users/p-linnane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/p-linnane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/p-linnane/subscriptions",
"organizations_url": "https://api.github.com/users/p-linnane/orgs",
"repos_url": "https://api.github.com/users/p-linnane/repos",
"events_url": "https://api.github.com/users/p-linnane/events{/privacy}",
"received_events_url": "https://api.github.com/users/p-linnane/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2023-09-06T22:31:32
| 2023-09-07T03:34:28
| 2023-09-07T03:08:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello 👋. I'm a maintainer for the [Homebrew](https://brew.sh) project. While packaging v0.0.18 of ollama, we're encountering a build failure. Here is the error:
```shell
go build -trimpath -o=/home/linuxbrew/.linuxbrew/Cellar/ollama/0.0.18/bin/ollama -ldflags=-s -w
go: downloading github.com/spf13/cobra v1.7.0
go: downloading github.com/chzyer/readline v1.5.1
go: downloading github.com/dustin/go-humanize v1.0.1
go: downloading github.com/olekukonko/tablewriter v0.0.5
go: downloading golang.org/x/crypto v0.10.0
go: downloading github.com/mattn/go-runewidth v0.0.14
go: downloading github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db
go: downloading golang.org/x/term v0.10.0
go: downloading github.com/gin-contrib/cors v1.4.0
go: downloading github.com/gin-gonic/gin v1.9.1
go: downloading golang.org/x/exp v0.0.0-20230817173708-d852ddb80c63
go: downloading gonum.org/v1/gonum v0.13.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/rivo/uniseg v0.2.0
go: downloading golang.org/x/sys v0.11.0
go: downloading github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading github.com/mattn/go-isatty v0.0.19
go: downloading golang.org/x/net v0.10.0
go: downloading github.com/go-playground/validator/v10 v10.14.0
go: downloading github.com/pelletier/go-toml/v2 v2.0.8
go: downloading github.com/ugorji/go/codec v1.2.11
go: downloading google.golang.org/protobuf v1.30.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading github.com/gabriel-vasile/mimetype v1.4.2
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/leodido/go-urn v1.2.4
go: downloading golang.org/x/text v0.10.0
go: downloading github.com/go-playground/locales v0.14.1
llm/ggml_llama.go:31:12: pattern llama.cpp/ggml/build/*/bin/*: no matching files found
```
Relates to https://github.com/Homebrew/homebrew-core/pull/141639
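For context, the failing pattern at `llm/ggml_llama.go:31` appears to be a `go:embed` directive that expects llama.cpp binaries to have been built first; a sketch of the build order that produces them (assuming the `go generate` step the project used at the time):
```bash
# generate compiles llama.cpp and drops binaries where the embed pattern expects them
go generate ./...
go build -trimpath -ldflags='-s -w' .
```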
|
{
"login": "p-linnane",
"id": 105994585,
"node_id": "U_kgDOBlFZWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/105994585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/p-linnane",
"html_url": "https://github.com/p-linnane",
"followers_url": "https://api.github.com/users/p-linnane/followers",
"following_url": "https://api.github.com/users/p-linnane/following{/other_user}",
"gists_url": "https://api.github.com/users/p-linnane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/p-linnane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/p-linnane/subscriptions",
"organizations_url": "https://api.github.com/users/p-linnane/orgs",
"repos_url": "https://api.github.com/users/p-linnane/repos",
"events_url": "https://api.github.com/users/p-linnane/events{/privacy}",
"received_events_url": "https://api.github.com/users/p-linnane/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/480/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5690
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5690/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5690/comments
|
https://api.github.com/repos/ollama/ollama/issues/5690/events
|
https://github.com/ollama/ollama/issues/5690
| 2,407,533,519
|
I_kwDOJ0Z1Ps6PgAvP
| 5,690
|
Ollama
|
{
"login": "Amir231123",
"id": 173946415,
"node_id": "U_kgDOCl42Lw",
"avatar_url": "https://avatars.githubusercontent.com/u/173946415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Amir231123",
"html_url": "https://github.com/Amir231123",
"followers_url": "https://api.github.com/users/Amir231123/followers",
"following_url": "https://api.github.com/users/Amir231123/following{/other_user}",
"gists_url": "https://api.github.com/users/Amir231123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Amir231123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Amir231123/subscriptions",
"organizations_url": "https://api.github.com/users/Amir231123/orgs",
"repos_url": "https://api.github.com/users/Amir231123/repos",
"events_url": "https://api.github.com/users/Amir231123/events{/privacy}",
"received_events_url": "https://api.github.com/users/Amir231123/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-07-14T17:53:02
| 2024-07-15T02:24:20
| 2024-07-14T23:07:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5690/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7513
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7513/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7513/comments
|
https://api.github.com/repos/ollama/ollama/issues/7513/events
|
https://github.com/ollama/ollama/pull/7513
| 2,635,988,914
|
PR_kwDOJ0Z1Ps6A9M97
| 7,513
|
grammar: surgically wrenching gbnf from system messages
|
{
"login": "tucnak",
"id": 934682,
"node_id": "MDQ6VXNlcjkzNDY4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/934682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tucnak",
"html_url": "https://github.com/tucnak",
"followers_url": "https://api.github.com/users/tucnak/followers",
"following_url": "https://api.github.com/users/tucnak/following{/other_user}",
"gists_url": "https://api.github.com/users/tucnak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tucnak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tucnak/subscriptions",
"organizations_url": "https://api.github.com/users/tucnak/orgs",
"repos_url": "https://api.github.com/users/tucnak/repos",
"events_url": "https://api.github.com/users/tucnak/events{/privacy}",
"received_events_url": "https://api.github.com/users/tucnak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-11-05T16:53:53
| 2024-12-05T00:33:51
| 2024-12-05T00:33:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7513",
"html_url": "https://github.com/ollama/ollama/pull/7513",
"diff_url": "https://github.com/ollama/ollama/pull/7513.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7513.patch",
"merged_at": null
}
|
Some people have reached out to me re: my comment from earlier https://github.com/ollama/ollama/issues/6237#issuecomment-2428338129 so I decided it might be worth a shot. To recap: this pull request implements wrenching GBNF grammars (only one at a time!) out of the system prompt. I know a bunch of pull requests to similar effect already exist.
However, I also believe that the prior efforts are completely misguided. What they should be doing instead is parsing the system prompt for **\`\`\`gbnf** code blocks. This approach does not impact the API surface, and it would also allow for dynamically generating the grammar on the fly from _any_ existing Ollama client.
More details in the linked comment.
```bash
# store the grammar, fenced as ```gbnf so the server can find it in the system prompt
GRAMMAR='```gbnf
root ::= (expr "=" ws term "\n")+
expr ::= term ([-+*/] term)*
term ::= ident | num | "(" ws expr ")" ws
ident ::= [a-z] [a-z0-9_]* ws
num ::= [0-9]+ ws
```'
# build the request with jq so the grammar's quotes and newlines are JSON-escaped
curl http://ollama.lan/chat -d "$(jq -n --arg grammar "$GRAMMAR" '{
  model: "llama3.2",
  messages: [
    {role: "system", content: ("You are a helpful assistant.\n" + $grammar)},
    {role: "user", content: "why is the sky blue?"}
  ]
}')"
```
I've so far only committed the bare-bones portion (i.e. vanilla GBNF only) and some tests I had lying around, for obvious reasons. I don't believe a `jsonschema` and/or `openapi` function-calling implementation is necessary, as it may as well be done trivially on the client side, but hey, that may be fun for all I care.
Otherwise, happy to bestow my genius upon this world.
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7513/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/7513/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1566
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1566/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1566/comments
|
https://api.github.com/repos/ollama/ollama/issues/1566/events
|
https://github.com/ollama/ollama/issues/1566
| 2,044,925,127
|
I_kwDOJ0Z1Ps554xTH
| 1,566
|
Error: llama runner exited, you may not have enough available memory to run this model
|
{
"login": "baardove",
"id": 3517788,
"node_id": "MDQ6VXNlcjM1MTc3ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3517788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baardove",
"html_url": "https://github.com/baardove",
"followers_url": "https://api.github.com/users/baardove/followers",
"following_url": "https://api.github.com/users/baardove/following{/other_user}",
"gists_url": "https://api.github.com/users/baardove/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baardove/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baardove/subscriptions",
"organizations_url": "https://api.github.com/users/baardove/orgs",
"repos_url": "https://api.github.com/users/baardove/repos",
"events_url": "https://api.github.com/users/baardove/events{/privacy}",
"received_events_url": "https://api.github.com/users/baardove/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2023-12-16T19:41:46
| 2024-01-08T21:42:04
| 2024-01-08T21:42:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
When I run a model and try to communicate with it, I always get the same response, no matter which model (small or big)...
```
Error: llama runner exited, you may not have enough available memory to run this model
```
Any clues on this one?
My host is running Ubuntu 20.04 on Proxmox with approx. 56 GB of memory free and an NVIDIA M40 24 GB GPU.
```
free
               total        used        free      shared  buff/cache   available
Mem:        58212660      641572    54462900        5692     3108188    56950236
Swap:        8388604           0     8388604
```
```
nvidia-smi
Sat Dec 16 19:39:44 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla M40 24GB Off | 00000000:01:00.0 Off | 0 |
| N/A 37C P8 16W / 250W | 0MiB / 23040MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
It seems Ollama finds the GPU:
journalctl:
```
Dec 16 18:30:05 tesla ollama[2245]: 2023/12/16 18:30:05 llama.go:300: 22939 MB VRAM available, loading up to 150 GPU layers
Dec 16 18:30:05 tesla ollama[2245]: 2023/12/16 18:30:05 llama.go:436: starting llama runner
Dec 16 18:30:05 tesla ollama[2245]: 2023/12/16 18:30:05 llama.go:494: waiting for llama runner to start responding
Dec 16 18:30:05 tesla ollama[2245]: ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
Dec 16 18:30:05 tesla ollama[2245]: ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
Dec 16 18:30:05 tesla ollama[2245]: ggml_init_cublas: found 1 CUDA devices:
Dec 16 18:30:05 tesla ollama[2245]: Device 0: Tesla M40 24GB, compute capability 5.2
Dec 16 18:30:05 tesla ollama[2326]: {"timestamp":1702751405,"level":"INFO","function":"main","line":2652,"message":"build info","build":441,"commit":"948ff1>
Dec 16 18:30:05 tesla ollama[2326]: {"timestamp":1702751405,"level":"INFO","function":"main","line":2655,"message":"system info","n_threads":8,"n_threads_ba>
Dec 16 18:30:05 tesla ollama[2245]: llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /usr/share/ollama/.ollama/models/blobs>
--- ---
Dec 16 18:31:49 tesla ollama[2245]: llm_load_tensors: ggml ctx size = 0.12 MiB
Dec 16 18:31:49 tesla ollama[2245]: llm_load_tensors: using CUDA for GPU acceleration
Dec 16 18:31:49 tesla ollama[2245]: llm_load_tensors: mem required = 70.43 MiB
Dec 16 18:31:49 tesla ollama[2245]: llm_load_tensors: offloading 32 repeating layers to GPU
Dec 16 18:31:49 tesla ollama[2245]: llm_load_tensors: offloading non-repeating layers to GPU
Dec 16 18:31:49 tesla ollama[2245]: llm_load_tensors: offloaded 33/33 layers to GPU
Dec 16 18:31:49 tesla ollama[2245]: llm_load_tensors: VRAM used: 3577.56 MiB
```
Loading a model works fine, but the error comes when trying to communicate; it happens with any model, even the smallest.
Error: llama runner exited, you may not have enough available memory to run this model
journalctl:
```
Dec 16 18:31:50 tesla ollama[2245]: ..................................................................................................
Dec 16 18:31:50 tesla ollama[2245]: llama_new_context_with_model: n_ctx = 4096
Dec 16 18:31:50 tesla ollama[2245]: llama_new_context_with_model: freq_base = 10000.0
Dec 16 18:31:50 tesla ollama[2245]: llama_new_context_with_model: freq_scale = 1
Dec 16 18:31:51 tesla ollama[2245]: llama_kv_cache_init: VRAM kv self = 2048.00 MB
Dec 16 18:31:51 tesla ollama[2245]: llama_new_context_with_model: KV self size = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
Dec 16 18:31:51 tesla ollama[2245]: llama_build_graph: non-view tensors processed: 676/676
Dec 16 18:31:51 tesla ollama[2245]: llama_new_context_with_model: compute buffer total size = 291.32 MiB
Dec 16 18:31:51 tesla ollama[2245]: llama_new_context_with_model: VRAM scratch buffer: 288.00 MiB
Dec 16 18:31:51 tesla ollama[2245]: llama_new_context_with_model: total VRAM used: 5913.57 MiB (model: 3577.56 MiB, context: 2336.00 MiB)
Dec 16 18:31:51 tesla ollama[2588]: {"timestamp":1702751511,"level":"INFO","function":"main","line":3035,"message":"HTTP server listening","hostname":"127.0>
Dec 16 18:31:51 tesla ollama[2588]: {"timestamp":1702751511,"level":"INFO","function":"log_server_request","line":2596,"message":"request","remote_addr":"12>
Dec 16 18:31:51 tesla ollama[2245]: 2023/12/16 18:31:51 llama.go:508: llama runner started in 2.201689 seconds
Dec 16 18:31:51 tesla ollama[2245]: [GIN] 2023/12/16 - 18:31:51 | 200 | 2.311479662s | 127.0.0.1 | POST "/api/generate"
Dec 16 18:32:14 tesla ollama[2588]: {"timestamp":1702751534,"level":"INFO","function":"log_server_request","line":2596,"message":"request","remote_addr":"12>
Dec 16 18:32:14 tesla ollama[2245]: 2023/12/16 18:32:14 llama.go:577: loaded 0 images
**Dec 16 18:32:14 tesla ollama[2245]: cuBLAS error 15 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8448**
Dec 16 18:32:14 tesla ollama[2245]: current device: 0
**Dec 16 18:32:14 tesla ollama[2245]: GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8448: !"cuBLAS error"**
Dec 16 18:32:14 tesla ollama[2245]: 2023/12/16 18:32:14 llama.go:451: signal: aborted (core dumped)
Dec 16 18:32:14 tesla ollama[2245]: 2023/12/16 18:32:14 llama.go:525: llama runner stopped successfully
Dec 16 18:32:14 tesla ollama[2245]: [GIN] 2023/12/16 - 18:32:14 | 200 | 601.813679ms | 127.0.0.1 | POST "/api/generate"
```
Full log:
https://www.evernote.com/shard/s16/sh/6d2eab19-c11f-7cf4-148c-9a5cd04dc944/Zwy3R7zsW8TvzDquK5Devnpko4BPwqNquvDt4nHLGCiecB_luwmk3sH8ug
The GPU is a bit dated, so it might be missing some features newer NVIDIA cards have. It is an affordable option with a lot of VRAM, so it would be nice if it were supported.
When running ComfyUI I have to start it with --disable-cuda-malloc.
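For reference, the card's compute capability can be checked from the shell; a small sketch (the `compute_cap` query field is an assumption about the installed driver, since older drivers may not expose it):
```
# a Maxwell-era Tesla M40 should report 5.2
nvidia-smi --query-gpu=name,compute_cap --format=csv
```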
Regards,
Bård Ove Myhr
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1566/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/807
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/807/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/807/comments
|
https://api.github.com/repos/ollama/ollama/issues/807/events
|
https://github.com/ollama/ollama/issues/807
| 1,945,579,628
|
I_kwDOJ0Z1Ps5z9zBs
| 807
|
Feature request: Add CLI option to specify a system prompt
|
{
"login": "louisabraham",
"id": 13174805,
"node_id": "MDQ6VXNlcjEzMTc0ODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/13174805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louisabraham",
"html_url": "https://github.com/louisabraham",
"followers_url": "https://api.github.com/users/louisabraham/followers",
"following_url": "https://api.github.com/users/louisabraham/following{/other_user}",
"gists_url": "https://api.github.com/users/louisabraham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/louisabraham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louisabraham/subscriptions",
"organizations_url": "https://api.github.com/users/louisabraham/orgs",
"repos_url": "https://api.github.com/users/louisabraham/repos",
"events_url": "https://api.github.com/users/louisabraham/events{/privacy}",
"received_events_url": "https://api.github.com/users/louisabraham/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6100196012,
"node_id": "LA_kwDOJ0Z1Ps8AAAABa5marA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feedback%20wanted",
"name": "feedback wanted",
"color": "0e8a16",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-10-16T15:54:16
| 2023-12-04T20:26:44
| 2023-12-04T20:26:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/807/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/807/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1189
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1189/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1189/comments
|
https://api.github.com/repos/ollama/ollama/issues/1189/events
|
https://github.com/ollama/ollama/pull/1189
| 2,000,297,803
|
PR_kwDOJ0Z1Ps5f0CFv
| 1,189
|
upload: retry complete upload
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-18T07:52:44
| 2023-11-18T07:54:32
| 2023-11-18T07:54:27
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1189",
"html_url": "https://github.com/ollama/ollama/pull/1189",
"diff_url": "https://github.com/ollama/ollama/pull/1189.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1189.patch",
"merged_at": null
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1189/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6031
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6031/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6031/comments
|
https://api.github.com/repos/ollama/ollama/issues/6031/events
|
https://github.com/ollama/ollama/issues/6031
| 2,434,078,834
|
I_kwDOJ0Z1Ps6RFRhy
| 6,031
|
Timeout to start model too little - progress stalls at 100% for 5 minutes when loading with swap
|
{
"login": "forReason",
"id": 12736950,
"node_id": "MDQ6VXNlcjEyNzM2OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/12736950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forReason",
"html_url": "https://github.com/forReason",
"followers_url": "https://api.github.com/users/forReason/followers",
"following_url": "https://api.github.com/users/forReason/following{/other_user}",
"gists_url": "https://api.github.com/users/forReason/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forReason/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forReason/subscriptions",
"organizations_url": "https://api.github.com/users/forReason/orgs",
"repos_url": "https://api.github.com/users/forReason/repos",
"events_url": "https://api.github.com/users/forReason/events{/privacy}",
"received_events_url": "https://api.github.com/users/forReason/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-07-28T19:29:54
| 2024-09-05T21:00:09
| 2024-09-05T21:00:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am trying to run llama3.1:405b on low-powered hardware with a swap file.
I'm not concerned about its speed.
However, the model can't load because:
```
ollama run llama3.1:405b --keepalive 5h
Error: timed out waiting for llama runner to start - progress 1.00 -
```
Is it possible to disable this timeout, or increase it? I'm quite certain it would load after a (long) while.
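A minimal sketch of a possible workaround, assuming a build where the runner start timeout is configurable via the `OLLAMA_LOAD_TIMEOUT` environment variable (this variable was added after 0.3.0, so it is an assumption for the version reported below):
```
# raise the runner start timeout from the 5-minute default before serving
# value format assumed to accept Go-style durations
OLLAMA_LOAD_TIMEOUT=2h ollama serve
```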
### OS
Linux
### GPU
Other
### CPU
Other
### Ollama version
0.3.0
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6031/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6031/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4137
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4137/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4137/comments
|
https://api.github.com/repos/ollama/ollama/issues/4137/events
|
https://github.com/ollama/ollama/issues/4137
| 2,278,343,315
|
I_kwDOJ0Z1Ps6HzMKT
| 4,137
|
Support for HyperGAI/HPT1_5-Air-Llama-3-8B-Instruct-multimodal
|
{
"login": "Extremys",
"id": 7710663,
"node_id": "MDQ6VXNlcjc3MTA2NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7710663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Extremys",
"html_url": "https://github.com/Extremys",
"followers_url": "https://api.github.com/users/Extremys/followers",
"following_url": "https://api.github.com/users/Extremys/following{/other_user}",
"gists_url": "https://api.github.com/users/Extremys/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Extremys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Extremys/subscriptions",
"organizations_url": "https://api.github.com/users/Extremys/orgs",
"repos_url": "https://api.github.com/users/Extremys/repos",
"events_url": "https://api.github.com/users/Extremys/events{/privacy}",
"received_events_url": "https://api.github.com/users/Extremys/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 1
| 2024-05-03T20:02:58
| 2024-05-11T08:26:17
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello team,
It would be so great to have this new Llama 3-based multimodal model supported in Ollama! Thanks!
https://huggingface.co/HyperGAI/HPT1_5-Air-Llama-3-8B-Instruct-multimodal
https://github.com/HyperGAI/HPT?tab=readme-ov-file#installation
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4137/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4137/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7261
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7261/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7261/comments
|
https://api.github.com/repos/ollama/ollama/issues/7261/events
|
https://github.com/ollama/ollama/issues/7261
| 2,598,233,203
|
I_kwDOJ0Z1Ps6a3eRz
| 7,261
|
Install on any drive
|
{
"login": "DavidHF",
"id": 5684280,
"node_id": "MDQ6VXNlcjU2ODQyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5684280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidHF",
"html_url": "https://github.com/DavidHF",
"followers_url": "https://api.github.com/users/DavidHF/followers",
"following_url": "https://api.github.com/users/DavidHF/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidHF/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidHF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidHF/subscriptions",
"organizations_url": "https://api.github.com/users/DavidHF/orgs",
"repos_url": "https://api.github.com/users/DavidHF/repos",
"events_url": "https://api.github.com/users/DavidHF/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidHF/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-10-18T19:23:22
| 2024-10-18T22:29:04
| 2024-10-18T22:29:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
What about installing on drives other than C:?
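A sketch of one approach (the `/DIR` switch is standard for Inno Setup installers and is documented for OllamaSetup.exe, and the model directory can be moved separately with `OLLAMA_MODELS`; both are assumptions worth verifying against the current docs):
```
# install the app itself to another drive
OllamaSetup.exe /DIR="D:\Ollama"
# keep the (much larger) models elsewhere too
setx OLLAMA_MODELS "D:\OllamaModels"
```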
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7261/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4909
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4909/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4909/comments
|
https://api.github.com/repos/ollama/ollama/issues/4909/events
|
https://github.com/ollama/ollama/pull/4909
| 2,340,722,069
|
PR_kwDOJ0Z1Ps5x0Cnw
| 4,909
|
Add ability to skip oneapi generate
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-07T15:33:28
| 2024-06-07T21:07:18
| 2024-06-07T21:07:15
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4909",
"html_url": "https://github.com/ollama/ollama/pull/4909",
"diff_url": "https://github.com/ollama/ollama/pull/4909.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4909.patch",
"merged_at": "2024-06-07T21:07:15"
}
|
This follows the same pattern as cuda and rocm, allowing the build to be disabled even when the dependent libraries are detected.
Related to #4511
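A sketch of the intended usage; the exact variable name here is an assumption by analogy with the existing cuda/rocm skip flags:
```
# hypothetical flag name mirroring the OLLAMA_SKIP_CUDA_GENERATE / OLLAMA_SKIP_ROCM_GENERATE pattern
OLLAMA_SKIP_ONEAPI_GENERATE=1 go generate ./...
```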
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4909/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/504
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/504/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/504/comments
|
https://api.github.com/repos/ollama/ollama/issues/504/events
|
https://github.com/ollama/ollama/issues/504
| 1,889,153,739
|
I_kwDOJ0Z1Ps5wmjLL
| 504
|
Python package
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2023-09-10T13:50:17
| 2024-03-11T19:33:40
| 2024-03-11T19:33:40
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Quite a few folks have been running:
```
pip install ollama
```
However, there isn't yet a Python package (there was previously an old Ollama prototype from July). This issue tracks having a first-class Python package for using Ollama.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/504/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4060
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4060/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4060/comments
|
https://api.github.com/repos/ollama/ollama/issues/4060/events
|
https://github.com/ollama/ollama/pull/4060
| 2,272,340,961
|
PR_kwDOJ0Z1Ps5uMF0o
| 4,060
|
Update llama.cpp submodule to `f364eb6`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-30T19:50:05
| 2024-04-30T21:25:40
| 2024-04-30T21:25:40
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4060",
"html_url": "https://github.com/ollama/ollama/pull/4060",
"diff_url": "https://github.com/ollama/ollama/pull/4060.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4060.patch",
"merged_at": "2024-04-30T21:25:40"
}
|
Also filters out stop words from being returned in the API for now, as they would otherwise print on older clients.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4060/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7429
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7429/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7429/comments
|
https://api.github.com/repos/ollama/ollama/issues/7429/events
|
https://github.com/ollama/ollama/issues/7429
| 2,625,299,820
|
I_kwDOJ0Z1Ps6ceuVs
| 7,429
|
cuda device ordering inconsistent between runtime and management library
|
{
"login": "Nepherpitou",
"id": 6158945,
"node_id": "MDQ6VXNlcjYxNTg5NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6158945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nepherpitou",
"html_url": "https://github.com/Nepherpitou",
"followers_url": "https://api.github.com/users/Nepherpitou/followers",
"following_url": "https://api.github.com/users/Nepherpitou/following{/other_user}",
"gists_url": "https://api.github.com/users/Nepherpitou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nepherpitou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nepherpitou/subscriptions",
"organizations_url": "https://api.github.com/users/Nepherpitou/orgs",
"repos_url": "https://api.github.com/users/Nepherpitou/repos",
"events_url": "https://api.github.com/users/Nepherpitou/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nepherpitou/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-10-30T20:44:12
| 2024-11-02T23:35:42
| 2024-11-02T23:35:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
### My GPU setup is:
1. RTX 3090 - in the first slot (PCIe 5.0 x16), but the secondary GPU
2. RTX 4090 - in the second slot (PCIe 4.0 x4), but the primary GPU
So, I have a weird bug with memory estimations. There are two calls for device memory usage info:
1. `C.cudart_bootstrap(*cHandles.cudart, C.int(i), &memInfo)` - here `i` is the device index (and `gpuInfo.index`). In my case it's `0` for the 4090 at this step.
2. `C.nvml_get_free(*cHandles.nvml, C.int(gpuInfo.index), &memInfo.free, &memInfo.total, &memInfo.used)` - here we get memory info for `gpuInfo.index`, but the NVML device order is different, and the 4090 is `1`!
As a result, the estimated memory usage is 2 GB for the RTX 3090 while nvidia-smi reported only 300 MB, and 300 MB for the RTX 4090 while nvidia-smi reported 2 GB. This leads to a wrong layer-split prediction.
It's tolerable for me for now, since the setup doesn't work well with flash attention anyway, but it's still an issue.
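The mismatch can be made visible from the shell; a sketch (nvidia-smi enumerates in NVML/PCI-bus order, while the CUDA runtime defaults to fastest-first, and `CUDA_DEVICE_ORDER` is the standard CUDA variable for overriding that; whether it fixes Ollama's estimation is an assumption):
```
rem NVML / PCI order: index 0 = RTX 3090, index 1 = RTX 4090
nvidia-smi --query-gpu=index,pci.bus_id,name --format=csv

rem forcing the CUDA runtime onto the same ordering may make both calls agree
set CUDA_DEVICE_ORDER=PCI_BUS_ID
ollama serve
```
On Linux the equivalent would be `CUDA_DEVICE_ORDER=PCI_BUS_ID ollama serve`.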
### Ollama logs
```
time=2024-10-30T23:40:22.092+03:00 level=DEBUG source=gpu.go:562 msg="discovered GPU libraries" paths=[C:\Windows\system32\nvcuda.dll]
CUDA driver version: 12.7
time=2024-10-30T23:40:22.139+03:00 level=DEBUG source=gpu.go:129 msg="detected GPUs" count=2 library=C:\Windows\system32\nvcuda.dll
[GPU-4e64b2bc-98b0-d948-a660-7668c70aba4f] CUDA totalMem 24563 mb
[GPU-4e64b2bc-98b0-d948-a660-7668c70aba4f] CUDA freeMem 22994 mb
[GPU-4e64b2bc-98b0-d948-a660-7668c70aba4f] Compute Capability 8.9
time=2024-10-30T23:40:22.271+03:00 level=INFO source=gpu.go:326 msg="detected OS VRAM overhead" id=GPU-4e64b2bc-98b0-d948-a660-7668c70aba4f library=cuda compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4090" overhead="1.3 GiB"
[GPU-c06ff468-596d-5c2b-52ed-c764302de199] CUDA totalMem 24575 mb
[GPU-c06ff468-596d-5c2b-52ed-c764302de199] CUDA freeMem 23306 mb
[GPU-c06ff468-596d-5c2b-52ed-c764302de199] Compute Capability 8.6
time=2024-10-30T23:40:22.563+03:00 level=DEBUG source=amd_windows.go:35 msg="unable to load amdhip64_6.dll, please make sure to upgrade to the latest amd driver: The specified module could not be found."
releasing cuda driver library
releasing nvml library
time=2024-10-30T23:40:22.564+03:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-4e64b2bc-98b0-d948-a660-7668c70aba4f library=cuda variant=v12 compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="22.5 GiB"
time=2024-10-30T23:40:22.565+03:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-c06ff468-596d-5c2b-52ed-c764302de199 library=cuda variant=v12 compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
time=2024-10-30T23:40:35.137+03:00 level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="127.2 GiB" before.free="95.7 GiB" before.free_swap="119.8 GiB" now.total="127.2 GiB" now.free="95.7 GiB" now.free_swap="119.6 GiB"
time=2024-10-30T23:40:35.152+03:00 level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4e64b2bc-98b0-d948-a660-7668c70aba4f name="NVIDIA GeForce RTX 4090" overhead="1.3 GiB" before.total="24.0 GiB" before.free="22.5 GiB" now.total="24.0 GiB" now.free="22.5 GiB" now.used="286.3 MiB"
time=2024-10-30T23:40:35.167+03:00 level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-c06ff468-596d-5c2b-52ed-c764302de199 name="NVIDIA GeForce RTX 3090" overhead="0 B" before.total="24.0 GiB" before.free="22.8 GiB" now.total="24.0 GiB" now.free="21.9 GiB" now.used="2.1 GiB"
releasing nvml library
time=2024-10-30T23:40:35.187+03:00 level=DEBUG source=sched.go:225 msg="loading first model" model=I:\localai\models\ollama\blobs\sha256-9167b346a6e1f45064e0500cf8539572e5889ba631eecb40a3cab48338b6d7df
time=2024-10-30T23:40:35.187+03:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[22.5 GiB]"
time=2024-10-30T23:40:35.188+03:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[21.9 GiB]"
time=2024-10-30T23:40:35.188+03:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=2 available="[22.5 GiB 21.9 GiB]"
time=2024-10-30T23:40:35.189+03:00 level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="127.2 GiB" before.free="95.7 GiB" before.free_swap="119.6 GiB" now.total="127.2 GiB" now.free="95.7 GiB" now.free_swap="119.6 GiB"
time=2024-10-30T23:40:35.213+03:00 level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4e64b2bc-98b0-d948-a660-7668c70aba4f name="NVIDIA GeForce RTX 4090" overhead="1.3 GiB" before.total="24.0 GiB" before.free="22.5 GiB" now.total="24.0 GiB" now.free="22.5 GiB" now.used="286.3 MiB"
time=2024-10-30T23:40:35.229+03:00 level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-c06ff468-596d-5c2b-52ed-c764302de199 name="NVIDIA GeForce RTX 3090" overhead="0 B" before.total="24.0 GiB" before.free="21.9 GiB" now.total="24.0 GiB" now.free="21.9 GiB" now.used="2.1 GiB"
releasing nvml library
time=2024-10-30T23:40:35.229+03:00 level=INFO source=llama-server.go:72 msg="system memory" total="127.2 GiB" free="95.7 GiB" free_swap="119.6 GiB"
time=2024-10-30T23:40:35.229+03:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=2 available="[22.5 GiB 21.9 GiB]"
time=2024-10-30T23:40:35.230+03:00 level=INFO source=memory.go:346 msg="offload to cuda" layers.requested=999 layers.model=81 layers.offload=55 layers.split=28,27 memory.available="[22.5 GiB 21.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="58.6 GiB" memory.required.partial="43.4 GiB" memory.required.kv="10.0 GiB" memory.required.allocations="[22.0 GiB 21.4 GiB]" memory.weights.total="45.4 GiB" memory.weights.repeating="44.5 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="5.1 GiB" memory.graph.partial="5.1 GiB"
```
### nvidia-smi output
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 566.03 Driver Version: 566.03 CUDA Version: 12.7 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3090 WDDM | 00000000:01:00.0 Off | N/A |
| 0% 37C P8 15W / 370W | 37MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 4090 WDDM | 00000000:16:00.0 On | Off |
| 30% 34C P0 61W / 450W | 1517MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
```
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.0-rc5
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7429/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2380
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2380/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2380/comments
|
https://api.github.com/repos/ollama/ollama/issues/2380/events
|
https://github.com/ollama/ollama/issues/2380
| 2,122,171,715
|
I_kwDOJ0Z1Ps5-fcVD
| 2,380
|
Ollama is unstable recently
|
{
"login": "lestan",
"id": 1471736,
"node_id": "MDQ6VXNlcjE0NzE3MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1471736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lestan",
"html_url": "https://github.com/lestan",
"followers_url": "https://api.github.com/users/lestan/followers",
"following_url": "https://api.github.com/users/lestan/following{/other_user}",
"gists_url": "https://api.github.com/users/lestan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lestan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lestan/subscriptions",
"organizations_url": "https://api.github.com/users/lestan/orgs",
"repos_url": "https://api.github.com/users/lestan/repos",
"events_url": "https://api.github.com/users/lestan/events{/privacy}",
"received_events_url": "https://api.github.com/users/lestan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-02-07T04:38:04
| 2024-02-08T00:13:19
| 2024-02-08T00:13:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
As of at least the last two versions, I have been experiencing a lot of issues with Ollama. Primarily, the Ollama CLI commands report that they can't connect to the server, even though the server is running and I can curl it. Also, when using the Ollama Python SDK, I often get a Connection Refused error, but retrying will eventually connect. I can't explain it.
I ran the following commands in succession. Ollama was launched via the Mac app (not the command line) after killing it, and no models have been loaded yet.
```
lestan@Lestans-MacBook-Pro ~ % ollama list
Error: could not connect to ollama app, is it running?
lestan@Lestans-MacBook-Pro ~ % curl http://localhost:11434/api/tags
{"models":[{"name":"mixtral:latest","model":"mixtral:latest","modified_at":"2024-01-15T16:11:18.289940736-06:00","size":26442481545,"digest":"7708c059a8bb4d950e5e679aef904fd4da96aa4d551a5cd14a7f7e2308a82f6d","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"47B","quantization_level":"Q4_0"}},{"name":"nous-hermes2-mixtral:latest","model":"nous-hermes2-mixtral:latest","modified_at":"2024-01-15T22:13:37.546667086-06:00","size":26442493141,"digest":"599da8dce2c14e54737c51f9668961bbc3526674249d3850b0875638a3e5e268","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"47B","quantization_level":"Q4_0"}},{"name":"orca2:latest","model":"orca2:latest","modified_at":"2023-12-22T19:44:49.948456023-06:00","size":3825836233,"digest":"ea98cc422de301a0714ee18d077d5c4ba4fd02f889234944bb2f45618fd5d5f7","details":{"parent_model":"","format":"gguf","family":"llama","families":null,"parameter_size":"7B","quantization_level":"Q4_0"}},{"name":"phi:latest","model":"phi:latest","modified_at":"2023-12-28T21:03:25.568996781-06:00","size":1602472424,"digest":"c651b7a89d7399ce7c52624e3cec9a0e0887c6e720f0d716da44c841bfcf9aeb","details":{"parent_model":"","format":"gguf","family":"phi2","families":["phi2"],"parameter_size":"3B","quantization_level":"Q4_0"}},{"name":"tinyllama:latest","model":"tinyllama:latest","modified_at":"2024-01-05T21:45:36.99553769-06:00","size":637700138,"digest":"2644915ede352ea7bdfaff0bfac0be74c719d5d5202acb63a6fb095b52f394a4","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"1B","quantization_level":"Q4_0"}}]}
lestan@Lestans-MacBook-Pro ~ % ollama -v
Warning: could not connect to a running Ollama instance
Warning: client version is 0.1.23
lestan@Lestans-MacBook-Pro ~ % ps -ef | grep ollama
501 32212 32208 0 10:23PM ?? 0:00.04 /Applications/Ollama.app/Contents/Resources/ollama serve
501 32270 10253 0 10:33PM ttys014 0:00.00 grep ollama
```
I'm running on an Apple M3 Max with 64 GB of RAM.
Appreciate any help.
Thanks!
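One diagnostic worth trying, purely as an assumption about the cause: the CLI honors the `OLLAMA_HOST` environment variable, so a stale or mismatched value could make `ollama list` fail while curl against the default port succeeds:
```
# empty output means the CLI uses the default 127.0.0.1:11434
echo $OLLAMA_HOST
# force the default explicitly for a single invocation
OLLAMA_HOST=127.0.0.1:11434 ollama list
```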
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2380/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5848
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5848/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5848/comments
|
https://api.github.com/repos/ollama/ollama/issues/5848/events
|
https://github.com/ollama/ollama/issues/5848
| 2,422,550,533
|
I_kwDOJ0Z1Ps6QZTAF
| 5,848
|
The logs do not contain the request content sent by the client.
|
{
"login": "H9990HH969",
"id": 133352113,
"node_id": "U_kgDOB_LKsQ",
"avatar_url": "https://avatars.githubusercontent.com/u/133352113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/H9990HH969",
"html_url": "https://github.com/H9990HH969",
"followers_url": "https://api.github.com/users/H9990HH969/followers",
"following_url": "https://api.github.com/users/H9990HH969/following{/other_user}",
"gists_url": "https://api.github.com/users/H9990HH969/gists{/gist_id}",
"starred_url": "https://api.github.com/users/H9990HH969/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/H9990HH969/subscriptions",
"organizations_url": "https://api.github.com/users/H9990HH969/orgs",
"repos_url": "https://api.github.com/users/H9990HH969/repos",
"events_url": "https://api.github.com/users/H9990HH969/events{/privacy}",
"received_events_url": "https://api.github.com/users/H9990HH969/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-07-22T10:44:31
| 2024-08-01T22:48:07
| 2024-08-01T22:48:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
To facilitate debugging, I need to see the requests the frontend sends to the large model. However, I've noticed that the request URLs and contents are not visible in the logs. Where can I find them?
I have deployed DBGPT using Docker.
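A sketch of one way to get more verbose server-side logs, assuming the standard `ollama/ollama` Docker image is in play (whether the debug level includes full request bodies is an assumption to verify):
```
# OLLAMA_DEBUG=1 is Ollama's documented switch for debug logging
docker run -d -e OLLAMA_DEBUG=1 -p 11434:11434 --name ollama ollama/ollama
docker logs -f ollama
```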
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5848/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5868
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5868/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5868/comments
|
https://api.github.com/repos/ollama/ollama/issues/5868/events
|
https://github.com/ollama/ollama/issues/5868
| 2,424,504,024
|
I_kwDOJ0Z1Ps6Qgv7Y
| 5,868
|
webUI
|
{
"login": "812781385",
"id": 33051062,
"node_id": "MDQ6VXNlcjMzMDUxMDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/33051062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/812781385",
"html_url": "https://github.com/812781385",
"followers_url": "https://api.github.com/users/812781385/followers",
"following_url": "https://api.github.com/users/812781385/following{/other_user}",
"gists_url": "https://api.github.com/users/812781385/gists{/gist_id}",
"starred_url": "https://api.github.com/users/812781385/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/812781385/subscriptions",
"organizations_url": "https://api.github.com/users/812781385/orgs",
"repos_url": "https://api.github.com/users/812781385/repos",
"events_url": "https://api.github.com/users/812781385/events{/privacy}",
"received_events_url": "https://api.github.com/users/812781385/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-23T07:45:28
| 2024-07-26T08:42:26
| 2024-07-26T08:42:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I developed an open-source web UI and services based on Ollama, including function calling. If you're interested, take a look; if you find it useful, I'd appreciate a star:
https://github.com/812781385/ollama-webUI
|
{
"login": "812781385",
"id": 33051062,
"node_id": "MDQ6VXNlcjMzMDUxMDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/33051062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/812781385",
"html_url": "https://github.com/812781385",
"followers_url": "https://api.github.com/users/812781385/followers",
"following_url": "https://api.github.com/users/812781385/following{/other_user}",
"gists_url": "https://api.github.com/users/812781385/gists{/gist_id}",
"starred_url": "https://api.github.com/users/812781385/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/812781385/subscriptions",
"organizations_url": "https://api.github.com/users/812781385/orgs",
"repos_url": "https://api.github.com/users/812781385/repos",
"events_url": "https://api.github.com/users/812781385/events{/privacy}",
"received_events_url": "https://api.github.com/users/812781385/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5868/reactions",
"total_count": 4,
"+1": 0,
"-1": 4,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5868/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8645
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8645/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8645/comments
|
https://api.github.com/repos/ollama/ollama/issues/8645/events
|
https://github.com/ollama/ollama/issues/8645
| 2,816,966,847
|
I_kwDOJ0Z1Ps6n54C_
| 8,645
|
Unsloth's dynamic quantizations of Deepseek R1
|
{
"login": "jjparady",
"id": 83677301,
"node_id": "MDQ6VXNlcjgzNjc3MzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83677301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jjparady",
"html_url": "https://github.com/jjparady",
"followers_url": "https://api.github.com/users/jjparady/followers",
"following_url": "https://api.github.com/users/jjparady/following{/other_user}",
"gists_url": "https://api.github.com/users/jjparady/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jjparady/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jjparady/subscriptions",
"organizations_url": "https://api.github.com/users/jjparady/orgs",
"repos_url": "https://api.github.com/users/jjparady/repos",
"events_url": "https://api.github.com/users/jjparady/events{/privacy}",
"received_events_url": "https://api.github.com/users/jjparady/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-29T00:20:43
| 2025-01-29T23:26:04
| 2025-01-29T23:26:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Would love to have these dynamic quantizations readily available in ollama: https://huggingface.co/unsloth/DeepSeek-R1-GGUF
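A hedged aside (not part of the original request): for single-file GGUF uploads, Ollama can usually pull a model straight from Hugging Face; the quant tag below is illustrative, and the dynamic R1 quantizations are sharded across multiple files, so this path may not apply to them as-is.
```bash
# Sketch only: works for single-file GGUF repos; the quant tag is illustrative.
ollama run hf.co/unsloth/DeepSeek-R1-GGUF:Q4_K_M
```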
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8645/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6215
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6215/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6215/comments
|
https://api.github.com/repos/ollama/ollama/issues/6215/events
|
https://github.com/ollama/ollama/issues/6215
| 2,451,983,310
|
I_kwDOJ0Z1Ps6SJkvO
| 6,215
|
Ollama update (0.3.3) prevents running llama3.1:70b or llama3.1:8b with tools
|
{
"login": "imsaumil",
"id": 66752084,
"node_id": "MDQ6VXNlcjY2NzUyMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/66752084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imsaumil",
"html_url": "https://github.com/imsaumil",
"followers_url": "https://api.github.com/users/imsaumil/followers",
"following_url": "https://api.github.com/users/imsaumil/following{/other_user}",
"gists_url": "https://api.github.com/users/imsaumil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imsaumil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imsaumil/subscriptions",
"organizations_url": "https://api.github.com/users/imsaumil/orgs",
"repos_url": "https://api.github.com/users/imsaumil/repos",
"events_url": "https://api.github.com/users/imsaumil/events{/privacy}",
"received_events_url": "https://api.github.com/users/imsaumil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-08-07T01:36:11
| 2024-11-11T07:46:49
| 2024-11-06T00:53:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I had an old version of Ollama (I don't remember which) with llama3.1:70b installed, and it was running fine. I then wanted to install llama3.1:8b, but it would not let me pull without updating Ollama. After the update, a fresh pull of llama3.1:70b no longer works as expected and gives the following error. My setup uses the @tool decorator with a single-output function and the ChatOllama API from langchain.
I tried downgrading Ollama to a previous version to re-test, but it does not allow me to pull models without upgrading.
Error message:
<img width="1289" alt="image" src="https://github.com/user-attachments/assets/ccb9b9e6-1d7f-4288-a97c-5c9039ccfa13">
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.3
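As a hedged aside, a minimal reproduction outside of langchain can help isolate whether the failure is in Ollama's tool support or in the ChatOllama wrapper; the tool definition below is made up for illustration.
```bash
# Minimal tool-calling request against the native API (tool schema is illustrative).
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1:70b",
  "stream": false,
  "messages": [{"role": "user", "content": "What is the weather in Toronto?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }]
}'
```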
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6215/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7397
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7397/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7397/comments
|
https://api.github.com/repos/ollama/ollama/issues/7397/events
|
https://github.com/ollama/ollama/issues/7397
| 2,618,381,007
|
I_kwDOJ0Z1Ps6cEVLP
| 7,397
|
Please update NuExtract to v1.5
|
{
"login": "KIC",
"id": 10957396,
"node_id": "MDQ6VXNlcjEwOTU3Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/10957396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KIC",
"html_url": "https://github.com/KIC",
"followers_url": "https://api.github.com/users/KIC/followers",
"following_url": "https://api.github.com/users/KIC/following{/other_user}",
"gists_url": "https://api.github.com/users/KIC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KIC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KIC/subscriptions",
"organizations_url": "https://api.github.com/users/KIC/orgs",
"repos_url": "https://api.github.com/users/KIC/repos",
"events_url": "https://api.github.com/users/KIC/events{/privacy}",
"received_events_url": "https://api.github.com/users/KIC/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 2
| 2024-10-28T13:08:54
| 2024-11-18T09:13:17
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please update [NuExtract](https://ollama.com/library/nuextract) to the newest version on [huggingface](https://huggingface.co/numind/NuExtract-v1.5/tree/main)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7397/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/7397/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3095
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3095/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3095/comments
|
https://api.github.com/repos/ollama/ollama/issues/3095/events
|
https://github.com/ollama/ollama/issues/3095
| 2,183,235,371
|
I_kwDOJ0Z1Ps6CIYcr
| 3,095
|
Limit ollama usage of GPUs using CUDA_VISIBLE_DEVICES
|
{
"login": "fengbolan",
"id": 65692219,
"node_id": "MDQ6VXNlcjY1NjkyMjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/65692219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fengbolan",
"html_url": "https://github.com/fengbolan",
"followers_url": "https://api.github.com/users/fengbolan/followers",
"following_url": "https://api.github.com/users/fengbolan/following{/other_user}",
"gists_url": "https://api.github.com/users/fengbolan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fengbolan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fengbolan/subscriptions",
"organizations_url": "https://api.github.com/users/fengbolan/orgs",
"repos_url": "https://api.github.com/users/fengbolan/repos",
"events_url": "https://api.github.com/users/fengbolan/events{/privacy}",
"received_events_url": "https://api.github.com/users/fengbolan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2024-03-13T06:42:44
| 2024-04-12T22:26:09
| 2024-04-12T22:26:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I've read the updated docs. The previous issue regarding the inability to limit Ollama's usage of GPUs using CUDA_VISIBLE_DEVICES has not been resolved. Despite setting the environment variable CUDA_VISIBLE_DEVICES to a specific range or list of GPU IDs, Ollama continues to use all available GPUs during inference instead of only the specified ones.
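For reference, a minimal sketch of how this is usually wired up on a systemd install; the GPU ID is illustrative, and the variable must be set for the server process, not the client shell.
```bash
# Sketch, assuming the standard Linux install with a systemd unit.
sudo systemctl edit ollama.service
# In the override file that opens, add:
#   [Service]
#   Environment="CUDA_VISIBLE_DEVICES=0"   # illustrative GPU ID
sudo systemctl daemon-reload
sudo systemctl restart ollama
nvidia-smi   # verify that only the selected GPU is used during inference
```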
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3095/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3095/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6487
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6487/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6487/comments
|
https://api.github.com/repos/ollama/ollama/issues/6487/events
|
https://github.com/ollama/ollama/issues/6487
| 2,484,288,763
|
I_kwDOJ0Z1Ps6UEzz7
| 6,487
|
When invoked from the command line in an active conversation session, missing model for `/load` shouldn't be fatal error
|
{
"login": "erkinalp",
"id": 5833034,
"node_id": "MDQ6VXNlcjU4MzMwMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5833034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erkinalp",
"html_url": "https://github.com/erkinalp",
"followers_url": "https://api.github.com/users/erkinalp/followers",
"following_url": "https://api.github.com/users/erkinalp/following{/other_user}",
"gists_url": "https://api.github.com/users/erkinalp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erkinalp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erkinalp/subscriptions",
"organizations_url": "https://api.github.com/users/erkinalp/orgs",
"repos_url": "https://api.github.com/users/erkinalp/repos",
"events_url": "https://api.github.com/users/erkinalp/events{/privacy}",
"received_events_url": "https://api.github.com/users/erkinalp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-08-24T06:32:06
| 2024-08-24T06:32:06
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
If you try to `/load` a nonexistent model during an interactive session:
```
Loading model 'nonexistent.'
Error: model "nonexistent." not found, try pulling it first
```
the error quits the existing session instead of returning to the prompt.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.6
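A hedged workaround sketch until this is fixed: confirm the model name exists before issuing `/load`, since a failed `/load` currently ends the session.
```bash
# List installed models first; names are whatever `ollama pull` created.
ollama list
# or via the API:
curl -s http://localhost:11434/api/tags
```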
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6487/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1631
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1631/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1631/comments
|
https://api.github.com/repos/ollama/ollama/issues/1631/events
|
https://github.com/ollama/ollama/issues/1631
| 2,050,545,894
|
I_kwDOJ0Z1Ps56ONjm
| 1,631
|
WSL2: GPU not working anymore
|
{
"login": "mircomir",
"id": 19854897,
"node_id": "MDQ6VXNlcjE5ODU0ODk3",
"avatar_url": "https://avatars.githubusercontent.com/u/19854897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mircomir",
"html_url": "https://github.com/mircomir",
"followers_url": "https://api.github.com/users/mircomir/followers",
"following_url": "https://api.github.com/users/mircomir/following{/other_user}",
"gists_url": "https://api.github.com/users/mircomir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mircomir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mircomir/subscriptions",
"organizations_url": "https://api.github.com/users/mircomir/orgs",
"repos_url": "https://api.github.com/users/mircomir/repos",
"events_url": "https://api.github.com/users/mircomir/events{/privacy}",
"received_events_url": "https://api.github.com/users/mircomir/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2023-12-20T13:24:32
| 2024-01-13T19:50:00
| 2024-01-10T15:07:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I updated Ollama to the latest version (0.1.17) on Ubuntu under WSL2, and GPU support is no longer recognized.
At the end of the installation I get the following message: "WARNING: No NVIDIA GPU detected. Ollama will run in CPU-only mode."
Running nvidia-smi:
```
Wed Dec 20 14:23:15 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.146.01 Driver Version: 537.99 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Quadro P4000 On | 00000000:17:00.0 Off | N/A |
| 46% 34C P8 6W / 105W | 0MiB / 8192MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 Quadro P2200 On | 00000000:65:00.0 On | N/A |
| 47% 35C P8 5W / 75W | 318MiB / 5120MiB | 3% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
```
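A hedged diagnostic sketch for WSL2 setups like this one: the Windows host driver exposes its CUDA libraries under /usr/lib/wsl/lib, which is one of the paths Ollama searches.
```bash
# Check that the host driver's libraries are visible inside WSL2.
ls -l /usr/lib/wsl/lib/libcuda* /usr/lib/wsl/lib/libnvidia-ml*
nvidia-smi
# Re-running the installer after a driver change sometimes helps redetection.
curl -fsSL https://ollama.com/install.sh | sh
```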
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1631/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1631/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7486
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7486/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7486/comments
|
https://api.github.com/repos/ollama/ollama/issues/7486/events
|
https://github.com/ollama/ollama/pull/7486
| 2,631,757,246
|
PR_kwDOJ0Z1Ps6Avz1V
| 7,486
|
I added my ollama web ui
|
{
"login": "samirgaire10",
"id": 118608337,
"node_id": "U_kgDOBxHR0Q",
"avatar_url": "https://avatars.githubusercontent.com/u/118608337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samirgaire10",
"html_url": "https://github.com/samirgaire10",
"followers_url": "https://api.github.com/users/samirgaire10/followers",
"following_url": "https://api.github.com/users/samirgaire10/following{/other_user}",
"gists_url": "https://api.github.com/users/samirgaire10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samirgaire10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samirgaire10/subscriptions",
"organizations_url": "https://api.github.com/users/samirgaire10/orgs",
"repos_url": "https://api.github.com/users/samirgaire10/repos",
"events_url": "https://api.github.com/users/samirgaire10/events{/privacy}",
"received_events_url": "https://api.github.com/users/samirgaire10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-11-04T03:47:24
| 2024-11-05T01:45:13
| 2024-11-05T01:45:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7486",
"html_url": "https://github.com/ollama/ollama/pull/7486",
"diff_url": "https://github.com/ollama/ollama/pull/7486.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7486.patch",
"merged_at": null
}
| null |
{
"login": "samirgaire10",
"id": 118608337,
"node_id": "U_kgDOBxHR0Q",
"avatar_url": "https://avatars.githubusercontent.com/u/118608337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samirgaire10",
"html_url": "https://github.com/samirgaire10",
"followers_url": "https://api.github.com/users/samirgaire10/followers",
"following_url": "https://api.github.com/users/samirgaire10/following{/other_user}",
"gists_url": "https://api.github.com/users/samirgaire10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samirgaire10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samirgaire10/subscriptions",
"organizations_url": "https://api.github.com/users/samirgaire10/orgs",
"repos_url": "https://api.github.com/users/samirgaire10/repos",
"events_url": "https://api.github.com/users/samirgaire10/events{/privacy}",
"received_events_url": "https://api.github.com/users/samirgaire10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7486/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6054
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6054/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6054/comments
|
https://api.github.com/repos/ollama/ollama/issues/6054/events
|
https://github.com/ollama/ollama/pull/6054
| 2,435,666,891
|
PR_kwDOJ0Z1Ps52woSh
| 6,054
|
Added reference to Llama.cpp docs for passed through API options
|
{
"login": "noggynoggy",
"id": 50501527,
"node_id": "MDQ6VXNlcjUwNTAxNTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/50501527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noggynoggy",
"html_url": "https://github.com/noggynoggy",
"followers_url": "https://api.github.com/users/noggynoggy/followers",
"following_url": "https://api.github.com/users/noggynoggy/following{/other_user}",
"gists_url": "https://api.github.com/users/noggynoggy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noggynoggy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noggynoggy/subscriptions",
"organizations_url": "https://api.github.com/users/noggynoggy/orgs",
"repos_url": "https://api.github.com/users/noggynoggy/repos",
"events_url": "https://api.github.com/users/noggynoggy/events{/privacy}",
"received_events_url": "https://api.github.com/users/noggynoggy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-07-29T15:01:04
| 2024-11-21T11:15:22
| 2024-11-21T11:15:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6054",
"html_url": "https://github.com/ollama/ollama/pull/6054",
"diff_url": "https://github.com/ollama/ollama/pull/6054.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6054.patch",
"merged_at": null
}
|
The API docs do not explain what all of the options listed [here](https://github.com/ollama/ollama/blob/0e4d653687f81db40622e287a846245b319f1fbe/docs/api.md?plain=1#L334-L362) do; some are explained in [the modelfile docs](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values), but the "passed through" ones are not.
This PR adds a reference to [a doc](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md#generation-flags) that explains them.
Relevant: #6045
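For illustration, a hedged example of how a passed-through option rides along in the `options` object of a request; the values are arbitrary.
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": {
    "temperature": 0.7,
    "mirostat": 2,
    "mirostat_tau": 5.0
  }
}'
```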
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6054/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6227
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6227/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6227/comments
|
https://api.github.com/repos/ollama/ollama/issues/6227/events
|
https://github.com/ollama/ollama/issues/6227
| 2,452,825,229
|
I_kwDOJ0Z1Ps6SMySN
| 6,227
|
ollama cannot start on ubuntu 22.04
|
{
"login": "garyyang85",
"id": 20335728,
"node_id": "MDQ6VXNlcjIwMzM1NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/20335728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garyyang85",
"html_url": "https://github.com/garyyang85",
"followers_url": "https://api.github.com/users/garyyang85/followers",
"following_url": "https://api.github.com/users/garyyang85/following{/other_user}",
"gists_url": "https://api.github.com/users/garyyang85/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garyyang85/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garyyang85/subscriptions",
"organizations_url": "https://api.github.com/users/garyyang85/orgs",
"repos_url": "https://api.github.com/users/garyyang85/repos",
"events_url": "https://api.github.com/users/garyyang85/events{/privacy}",
"received_events_url": "https://api.github.com/users/garyyang85/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-08-07T07:55:54
| 2024-08-11T12:42:35
| 2024-08-11T12:00:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This is my first time running Ollama. I followed the guide:
https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install
The service cannot start. Logs:
```
journalctl -u ollama --no-pager
Aug 07 15:35:48 i-2y1kobn5 systemd[1]: Started Ollama Service.
Aug 07 15:35:48 i-2y1kobn5 systemd[1]: ollama.service: Main process exited, code=dumped, status=11/SEGV
Aug 07 15:35:48 i-2y1kobn5 systemd[1]: ollama.service: Failed with result 'core-dump'.
Aug 07 15:35:51 i-2y1kobn5 systemd[1]: ollama.service: Scheduled restart job, restart counter is at 1.
Aug 07 15:35:51 i-2y1kobn5 systemd[1]: Stopped Ollama Service.
Aug 07 15:35:51 i-2y1kobn5 systemd[1]: Started Ollama Service.
Aug 07 15:35:51 i-2y1kobn5 systemd[1]: ollama.service: Main process exited, code=dumped, status=11/SEGV
Aug 07 15:35:51 i-2y1kobn5 systemd[1]: ollama.service: Failed with result 'core-dump'.
Aug 07 15:35:54 i-2y1kobn5 systemd[1]: ollama.service: Scheduled restart job, restart counter is at 2.
Aug 07 15:35:54 i-2y1kobn5 systemd[1]: Stopped Ollama Service.
Aug 07 15:35:54 i-2y1kobn5 systemd[1]: Started Ollama Service.
Aug 07 15:35:54 i-2y1kobn5 systemd[1]: ollama.service: Main process exited, code=dumped, status=11/SEGV
Aug 07 15:35:54 i-2y1kobn5 systemd[1]: ollama.service: Failed with result 'core-dump'.
Aug 07 15:35:57 i-2y1kobn5 systemd[1]: ollama.service: Scheduled restart job, restart counter is at 3.
Aug 07 15:35:57 i-2y1kobn5 systemd[1]: Stopped Ollama Service.
```
Running the command directly also returns an error:
```
ollama server
Segmentation fault (core dumped)
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
Should be the latest one, downloaded from here:
https://ollama.com/download/ollama-linux-amd64
`ollama --version` also reports an error: Segmentation fault (core dumped)
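A hedged debugging sketch for crashes like this (assumes systemd-coredump and gdb are installed; note the server subcommand is `ollama serve`):
```bash
OLLAMA_DEBUG=1 ollama serve    # run in the foreground with debug logging
coredumpctl list ollama        # locate the captured core dumps
coredumpctl gdb ollama         # open the most recent one for a backtrace
```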
|
{
"login": "garyyang85",
"id": 20335728,
"node_id": "MDQ6VXNlcjIwMzM1NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/20335728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garyyang85",
"html_url": "https://github.com/garyyang85",
"followers_url": "https://api.github.com/users/garyyang85/followers",
"following_url": "https://api.github.com/users/garyyang85/following{/other_user}",
"gists_url": "https://api.github.com/users/garyyang85/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garyyang85/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garyyang85/subscriptions",
"organizations_url": "https://api.github.com/users/garyyang85/orgs",
"repos_url": "https://api.github.com/users/garyyang85/repos",
"events_url": "https://api.github.com/users/garyyang85/events{/privacy}",
"received_events_url": "https://api.github.com/users/garyyang85/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6227/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3761
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3761/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3761/comments
|
https://api.github.com/repos/ollama/ollama/issues/3761/events
|
https://github.com/ollama/ollama/issues/3761
| 2,253,640,461
|
I_kwDOJ0Z1Ps6GU9MN
| 3,761
|
GPU not detected in Kubernetes.
|
{
"login": "dylanbstorey",
"id": 6005970,
"node_id": "MDQ6VXNlcjYwMDU5NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6005970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dylanbstorey",
"html_url": "https://github.com/dylanbstorey",
"followers_url": "https://api.github.com/users/dylanbstorey/followers",
"following_url": "https://api.github.com/users/dylanbstorey/following{/other_user}",
"gists_url": "https://api.github.com/users/dylanbstorey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dylanbstorey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dylanbstorey/subscriptions",
"organizations_url": "https://api.github.com/users/dylanbstorey/orgs",
"repos_url": "https://api.github.com/users/dylanbstorey/repos",
"events_url": "https://api.github.com/users/dylanbstorey/events{/privacy}",
"received_events_url": "https://api.github.com/users/dylanbstorey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 17
| 2024-04-19T18:28:29
| 2024-10-07T11:21:24
| 2024-05-08T12:31:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When deploying into Kubernetes, the container complains about being unable to load the cudart library (or maybe it's out of date).
Based on the documentation and the provided examples, I expect it to detect and utilize the GPU in the container.
Every test I can think of (which is limited) seems to indicate this should be working, but I'll bet I'm missing some nuance in the stack here - any advice would be appreciated.
Host Configuration :
```bash
uname -a
Linux overseer 6.5.0-28-generic #29~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 4 14:39:20 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
```
nvcc
```bash
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
```
Docker Run outputs :
```bash
docker run -d --gpus=all -v ollama:/root/.ollama -p 11435:11434 --name ollama ollama/ollama
docker logs ollama
time=2024-04-19T17:59:48.712Z level=INFO source=images.go:817 msg="total blobs: 0"
time=2024-04-19T17:59:48.712Z level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-04-19T17:59:48.712Z level=INFO source=routes.go:1143 msg="Listening on [::]:11434 (version 0.1.32)"
time=2024-04-19T17:59:48.712Z level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama4206527122/runners
time=2024-04-19T17:59:50.712Z level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60002 cpu]"
time=2024-04-19T17:59:50.712Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-19T17:59:50.712Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-19T17:59:50.713Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama4206527122/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-19T17:59:50.746Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-19T17:59:50.746Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-19T17:59:50.855Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
[GIN] 2024/04/19 - 18:00:35 | 404 | 154.523µs | 172.17.0.1 | POST "/api/generate"
```
Deployment Configuration:
```bash
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ollama
namespace: ollama
spec:
runtimeClassName: nvidia
selector:
matchLabels:
name: ollama
template:
metadata:
labels:
name: ollama
spec:
containers:
- name: ollama
image: ollama/ollama
resources:
limits:
nvidia.com/gpu: 1
tolerations:
- key: nvidia.com/gpu
operator: Exists
effect: NoSchedule
env:
- name: PATH
value: /usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: LD_LIBRARY_PATH
value: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
- name: NVIDIA_VISIBLE_DEVICES
value: all
- name: NVIDIA_DRIVER_CAPABILITIES
value: compute,utility
- name: OLLAMA_DEBUG
value: "1"
ports:
- name: http
containerPort: 11434
protocol: TCP
```
Deployment Logs:
```bash
│ Couldn't find '/root/.ollama/id_ed25519'. Generating new private key. │
│ Your new public key is: │
│ │
│ ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMp1XONYlspAEBzMEJNATgAMm39ctFUiN3XZxLwlzVMB │
│ │
│ time=2024-04-19T17:27:12.252Z level=INFO source=images.go:817 msg="total blobs: 0" │
│ time=2024-04-19T17:27:12.252Z level=INFO source=images.go:824 msg="total unused blobs removed: 0" │
│ time=2024-04-19T17:27:12.253Z level=INFO source=routes.go:1143 msg="Listening on :11434 (version 0.1.32)" │
│ time=2024-04-19T17:27:12.253Z level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama2307533246/runners │
│ time=2024-04-19T17:27:12.253Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz │
│ time=2024-04-19T17:27:12.253Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz │
│ time=2024-04-19T17:27:12.253Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz │
│ time=2024-04-19T17:27:12.253Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublas.so.11.gz │
│ time=2024-04-19T17:27:12.253Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublasLt.so.11.gz │
│ time=2024-04-19T17:27:12.253Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcudart.so.11.0.gz │
│ time=2024-04-19T17:27:12.253Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz │
│ time=2024-04-19T17:27:12.253Z level=DEBUG source=payload.go:160 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/deps.txt.gz │
│ time=2024-04-19T17:27:12.253Z level=DEBUG source=payload.go:160 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/ollama_llama_server.gz │
│ time=2024-04-19T17:27:14.217Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama2307533246/runners/cpu │
│ time=2024-04-19T17:27:14.217Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama2307533246/runners/cpu_avx │
│ time=2024-04-19T17:27:14.217Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama2307533246/runners/cpu_avx2 │
│ time=2024-04-19T17:27:14.217Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama2307533246/runners/cuda_v11 │
│ time=2024-04-19T17:27:14.217Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama2307533246/runners/rocm_v60002 │
│ time=2024-04-19T17:27:14.217Z level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cuda_v11 rocm_v60002 cpu cpu_avx cpu_avx2]" │
│ time=2024-04-19T17:27:14.217Z level=DEBUG source=payload.go:42 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY" │
│ time=2024-04-19T17:27:14.217Z level=INFO source=gpu.go:121 msg="Detecting GPU type" │
│ time=2024-04-19T17:27:14.217Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*" │
│ time=2024-04-19T17:27:14.217Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/tmp/ollama2307533246/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /u │
│ sr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* / │
│ usr/local/nvidia/lib/libcudart.so** /usr/local/nvidia/lib64/libcudart.so**]" │
│ time=2024-04-19T17:27:14.217Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2307533246/runners/cuda_v11/libcudart.so.11.0]" │
│ wiring cudart library functions in /tmp/ollama2307533246/runners/cuda_v11/libcudart.so.11.0 │
│ dlsym: cudaSetDevice │
│ dlsym: cudaDeviceSynchronize │
│ dlsym: cudaDeviceReset │
│ dlsym: cudaMemGetInfo │
│ dlsym: cudaGetDeviceCount │
│ dlsym: cudaDeviceGetAttribute │
│ dlsym: cudaDriverGetVersion │
│ cudaSetDevice err: 35 │
│ time=2024-04-19T17:27:14.218Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama2307533246/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama" │
│ time=2024-04-19T17:27:14.218Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so" │
│ time=2024-04-19T17:27:14.218Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* │
│ /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /usr/local/nvidia/lib/libnvidia-ml.so* /usr/local/nvidi │
│ a/lib64/libnvidia-ml.so*]" │
│ time=2024-04-19T17:27:14.218Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []" │
│ time=2024-04-19T17:27:14.218Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" │
│ time=2024-04-19T17:27:14.218Z level=DEBUG source=amd_linux.go:280 msg="amdgpu driver not detected /sys/module/amdgpu" │
│ time=2024-04-19T17:27:14.218Z level=INFO source=routes.go:1164 msg="no GPU detected"
```
Kubernetes Based nbody run :
```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: nbody-gpu-benchmark
namespace: default
spec:
restartPolicy: OnFailure
runtimeClassName: nvidia
containers:
- name: cuda-container
image: nvcr.io/nvidia/k8s/cuda-sample:nbody
args: ["nbody", "-gpu", "-benchmark"]
resources:
limits:
nvidia.com/gpu: 1
env:
- name: NVIDIA_VISIBLE_DEVICES
value: all
- name: NVIDIA_DRIVER_CAPABILITIES
value: all
EOF
```
nbody container logs
```
│ Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance. │
│ -fullscreen (run n-body simulation in fullscreen mode) │
│ -fp64 (use double precision floating point values for simulation) │
│ -hostmem (stores simulation data in host memory) │
│ -benchmark (run benchmark to measure performance) │
│ -numbodies=<N> (number of bodies (>= 1) to run in simulation) │
│ -device=<d> (where d=0,1,2.... for the CUDA device to use) │
│ -numdevices=<i> (where i=(number of CUDA devices > 0) to use for simulation) │
│ -compare (compares simulation results running once on the default GPU and once on the CPU) │
│ -cpu (run n-body simulation on the CPU) │
│ -tipsy=<file.bin> (load a tipsy model file for simulation) │
│ │
│ NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled. │
│ │
│ > Windowed mode │
│ > Simulation data stored in video memory │
│ > Single precision floating point simulation │
│ > 1 Devices used for simulation │
│ GPU Device 0: "Ampere" with compute capability 8.6 │
│ │
│ > Compute 8.6 CUDA device: [NVIDIA GeForce RTX 3060] │
│ 28672 bodies, total time for 10 iterations: 22.067 ms │
│ = 372.538 billion interactions per second │
│ = 7450.761 single-precision GFLOP/s at 20 flops per interaction
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
latest
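A hedged diagnostic sketch: CUDA error 35 (`cudaSetDevice err: 35`) generally means the driver visible inside the pod is older than the bundled CUDA runtime, or the driver libraries are not mounted at all. Checking what the pod actually sees, and where `runtimeClassName` sits in the manifest, narrows this down.
```bash
# What does the pod see? (deployment/namespace names taken from the manifest above)
kubectl -n ollama exec deploy/ollama -- nvidia-smi
kubectl -n ollama exec deploy/ollama -- ls /usr/local/nvidia/lib64
# Note: runtimeClassName belongs under the pod template spec
# (spec.template.spec), not directly under the Deployment spec, or the
# NVIDIA runtime hooks never run for the container.
```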
|
{
"login": "dylanbstorey",
"id": 6005970,
"node_id": "MDQ6VXNlcjYwMDU5NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6005970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dylanbstorey",
"html_url": "https://github.com/dylanbstorey",
"followers_url": "https://api.github.com/users/dylanbstorey/followers",
"following_url": "https://api.github.com/users/dylanbstorey/following{/other_user}",
"gists_url": "https://api.github.com/users/dylanbstorey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dylanbstorey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dylanbstorey/subscriptions",
"organizations_url": "https://api.github.com/users/dylanbstorey/orgs",
"repos_url": "https://api.github.com/users/dylanbstorey/repos",
"events_url": "https://api.github.com/users/dylanbstorey/events{/privacy}",
"received_events_url": "https://api.github.com/users/dylanbstorey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3761/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1203
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1203/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1203/comments
|
https://api.github.com/repos/ollama/ollama/issues/1203/events
|
https://github.com/ollama/ollama/issues/1203
| 2,001,581,043
|
I_kwDOJ0Z1Ps53TbPz
| 1,203
|
Generating context from aborted request
|
{
"login": "FairyTail2000",
"id": 22645621,
"node_id": "MDQ6VXNlcjIyNjQ1NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/22645621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FairyTail2000",
"html_url": "https://github.com/FairyTail2000",
"followers_url": "https://api.github.com/users/FairyTail2000/followers",
"following_url": "https://api.github.com/users/FairyTail2000/following{/other_user}",
"gists_url": "https://api.github.com/users/FairyTail2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FairyTail2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FairyTail2000/subscriptions",
"organizations_url": "https://api.github.com/users/FairyTail2000/orgs",
"repos_url": "https://api.github.com/users/FairyTail2000/repos",
"events_url": "https://api.github.com/users/FairyTail2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/FairyTail2000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2023-11-20T07:58:30
| 2024-11-22T07:07:10
| 2023-12-04T23:01:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
For my own frontend I noticed that it would be useful to have an endpoint where I can generate context from (optionally) previous context, the prompt typed by the user, and the model's answer up to the point it was interrupted.
This could create an experience similar to OpenAI's ChatGPT.
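For context, a hedged sketch of the existing round-trip on `/api/generate`: the final response object carries a `context` array that can be fed back into the next request; the values below are placeholders.
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Tell me a story.",
  "stream": false
}'
# ...the response includes "context": [1, 2, 3, ...]
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Continue where you left off.",
  "context": [1, 2, 3],
  "stream": false
}'
```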
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1203/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6807
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6807/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6807/comments
|
https://api.github.com/repos/ollama/ollama/issues/6807/events
|
https://github.com/ollama/ollama/issues/6807
| 2,526,592,624
|
I_kwDOJ0Z1Ps6WmL5w
| 6,807
|
Slow model load and cache ram does not free.
|
{
"login": "pisoiu",
"id": 51887464,
"node_id": "MDQ6VXNlcjUxODg3NDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/51887464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pisoiu",
"html_url": "https://github.com/pisoiu",
"followers_url": "https://api.github.com/users/pisoiu/followers",
"following_url": "https://api.github.com/users/pisoiu/following{/other_user}",
"gists_url": "https://api.github.com/users/pisoiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pisoiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pisoiu/subscriptions",
"organizations_url": "https://api.github.com/users/pisoiu/orgs",
"repos_url": "https://api.github.com/users/pisoiu/repos",
"events_url": "https://api.github.com/users/pisoiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/pisoiu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 15
| 2024-09-14T20:14:31
| 2024-11-05T23:24:10
| 2024-11-05T23:24:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi all. My system: AMD TR PRO 3975WX CPU, 512 GB DDR4 ECC RAM, 3x RTX A4000 GPUs (48 GB VRAM total), 4 TB NVMe Corsair MP600 Core XT, Ubuntu 22.04.1 LTS.
I'm not a specialist in Linux, so don't throw stones.
Problem 1: According to various tests, the transfer speed of DDR4 can reach up to 25 GB/s. According to benchmarks of my local NVMe disk, its read speed is around 6 GB/s. However, when I start 'ollama run llama3.1:70b' from the terminal, the system monitor shows constant disk activity during the model transfer, and the read speed tops out around 1.7 GB/s, no more. Why isn't the model loaded faster if both the disk and the RAM can do much more? The system isn't doing anything else. This isn't too problematic with 70b, but with 405b it is really annoying.
Problem 2: 48 GB of VRAM is enough to fit the :70b model. When I start 'ollama run llama3.1:70b', it is first loaded into RAM; in the system monitor window I see 'cache' jumping up. After the model is completely transferred to RAM, I see it pushed into the GPU's VRAM for inference. The 'memory' section of the system monitor indicates '7.3 GiB (1.5%) of 503.5 GiB, cache 44.6 GiB'. When I'm done with the model and send '/bye' to ollama, I can see the VRAM still filled for a few more minutes, then it is freed. But not the 'cache' in RAM: it stays at 44.6 GiB forever if I'm not doing anything else (I waited >30 min). This becomes problematic when I load a different model: it piles on top of the models already in cache memory and grows the cache further. Continuing to load different models progressively fills it to the top, and eventually data goes into swap. Old models are never removed from the cache even when newer ones need memory. Why?
Thank you.
LE: one detail which may or may not be important. Ollama is installed directly and I run it from the terminal prompt, and there is another installation in a Docker container, installed from open-webui with built-in Ollama support; that one serves inference over the network. Both behave the same with regard to cache memory.
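For context, the 'cache' figure in a Linux system monitor is the kernel page cache: file data kept in otherwise idle RAM and reclaimed automatically when applications need memory, so it is expected that it never shrinks on its own. A minimal sketch for checking both problems from the shell (the blob path is a placeholder; dropping caches is for benchmarking only, not something the kernel needs help with in normal use):
```bash
# Benchmarking only: flush the page cache so the next read is cold (needs root).
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'

# Measure sequential read speed of a model blob; a second, warm run is
# served from the page cache instead of the disk.
dd if=/usr/share/ollama/.ollama/models/blobs/<blob-sha256> of=/dev/null bs=1M status=progress

# "buff/cache" is the page cache; "available" is what applications can
# still get, page cache included.
free -h
```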
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.8
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6807/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1699
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1699/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1699/comments
|
https://api.github.com/repos/ollama/ollama/issues/1699/events
|
https://github.com/ollama/ollama/issues/1699
| 2,055,176,692
|
I_kwDOJ0Z1Ps56f4H0
| 1,699
|
Modelfile parameters not set during creation
|
{
"login": "tylertitsworth",
"id": 43555799,
"node_id": "MDQ6VXNlcjQzNTU1Nzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/43555799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tylertitsworth",
"html_url": "https://github.com/tylertitsworth",
"followers_url": "https://api.github.com/users/tylertitsworth/followers",
"following_url": "https://api.github.com/users/tylertitsworth/following{/other_user}",
"gists_url": "https://api.github.com/users/tylertitsworth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tylertitsworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tylertitsworth/subscriptions",
"organizations_url": "https://api.github.com/users/tylertitsworth/orgs",
"repos_url": "https://api.github.com/users/tylertitsworth/repos",
"events_url": "https://api.github.com/users/tylertitsworth/events{/privacy}",
"received_events_url": "https://api.github.com/users/tylertitsworth/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-12-24T17:59:31
| 2024-03-12T00:27:08
| 2024-03-12T00:27:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have a model like so: (also providing system details)
```Dockerfile
$ cat /etc/os-release | head -n 4
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
$ ollama -v
ollama version is 0.1.17
$ ollama show test --modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM test:latest
FROM /usr/share/ollama/.ollama/models/blobs/sha256:8f7e99455a86ad490dab19d335d8c2d1d044e2744b923d4e78a4d84fe1457738
TEMPLATE """### System:
{{ .System }}
### User:
{{ .Prompt }}
### Assistant:
"""
SYSTEM """
Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
---
Content: {context}
---
"""
PARAMETER repeat_penalty 1.3
PARAMETER temperature 0.4
PARAMETER top_k 20
PARAMETER top_p 0.65
```
My parameters shown in this output are the same as what I have set in my Modelfile.
However, when I query the API for the parameter values, I see different values for `repeat_penalty`, `temperature`, and `top_p`:
```bash
$ ollama show test --parameters
repeat_penalty 1
temperature 0
top_k 20
top_p 1
```
```json
$ curl http://localhost:11434/api/show -d '{"name": "test"}'
{
"modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this one, replace the FROM line with:\n# FROM test:latest\n\nFROM /usr/share/ollama/.ollama/models/blobs/sha256:8f7e99455a86ad490dab19d335d8c2d1d044e2744b923d4e78a4d84fe1457738\nTEMPLATE \"\"\"### System:\r\n{{ .System }}\r\n\r\n### User:\r\n{{ .Prompt }}\r\n\r\n### Assistant:\r\n\"\"\"\nSYSTEM \"\"\"\r\nGiven the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \r\nIf you don't know the answer, just say that you don't know. Don't try to make up an answer.\r\nALWAYS return a \"SOURCES\" part in your answer.\r\n---\r\nContent: {context}\r\n---\r\n\"\"\"\nPARAMETER repeat_penalty 1.3\nPARAMETER temperature 0.4\nPARAMETER top_k 21\nPARAMETER top_p 0.65",
"parameters": "top_p 1\nrepeat_penalty 1\ntemperature 0\ntop_k 20",
"template": "### System:\r\n{{ .System }}\r\n\r\n### User:\r\n{{ .Prompt }}\r\n\r\n### Assistant:\r\n",
"system": "\r\nGiven the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \r\nIf you don't know the answer, just say that you don't know. Don't try to make up an answer.\r\nALWAYS return a \"SOURCES\" part in your answer.\r\n---\r\nContent: {context}\r\n---\r\n",
"details":
{
"format": "gguf",
"family": "llama",
"families": ["llama"],
"parameter_size": "7B",
"quantization_level": "Q4_0",
},
}
```
Cleaning up that JSON using Python:
```python
>>> import requests
>>> defaults = dict(
... pair.split()
... for pair in requests.post(
... "http://localhost:11434/api/show",
... data='{"name": "test"}',
... timeout=5
... )
... .json()["parameters"]
... .split("\n")
... if pair.strip()
... )
>>> defaults
{'top_p': '1', 'repeat_penalty': '1', 'temperature': '0', 'top_k': '20'}
```
I then changed my `top_k` value to make sure it wasn't simply identical to the model default, and that modification did go through successfully.
If I change my input model from a GGUF model I quantized to `neural-chat:latest`, the problem is reproduced, just with some other default parameters:
```bash
$ ollama show test-neural-chat --parameters
repeat_penalty 1
stop <|im_start|>
stop <|im_end|>
temperature 0
top_k 20
top_p 1
num_ctx 4096
```
For my sources, I'm following: https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#parameter
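For reference, a possible per-request workaround sketch while this is open: the `options` field on `/api/generate` is documented, so the intended values can be supplied with each call (model name and values mirror the Modelfile above):
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "test",
  "prompt": "Hello",
  "options": {"temperature": 0.4, "repeat_penalty": 1.3, "top_k": 20, "top_p": 0.65}
}'
```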
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1699/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1699/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2317
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2317/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2317/comments
|
https://api.github.com/repos/ollama/ollama/issues/2317/events
|
https://github.com/ollama/ollama/pull/2317
| 2,113,911,318
|
PR_kwDOJ0Z1Ps5lxtxi
| 2,317
|
Add multimodel support to `ollama run` in noninteractive mode
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-02T02:39:22
| 2024-02-02T05:33:07
| 2024-02-02T05:33:06
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2317",
"html_url": "https://github.com/ollama/ollama/pull/2317",
"diff_url": "https://github.com/ollama/ollama/pull/2317.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2317.patch",
"merged_at": "2024-02-02T05:33:06"
}
|
Fixes https://github.com/ollama/ollama/issues/2295
```
% ollama run llava Describe this image: /Users/jmorgan/Desktop/old-tower.jpg
Added image '/Users/jmorgan/Desktop/old-tower.jpg'
The image depicts a vibrant cityscape. In the foreground, there's an iconic skyscraper, which is the CN Tower, a landmark of Toronto, Canada. The tower stands prominently against a clear blue sky. In the background, you can see a variety
of buildings, including what appears to be condominiums and commercial structures, all under a bright sunlight. There's a body of water visible in the lower right corner of the image, suggesting that this photo is taken from a vantage
point overlooking the city. The overall impression is of a bustling urban environment with a mix of architectural styles.
```

|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2317/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/485
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/485/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/485/comments
|
https://api.github.com/repos/ollama/ollama/issues/485/events
|
https://github.com/ollama/ollama/issues/485
| 1,886,350,976
|
I_kwDOJ0Z1Ps5wb26A
| 485
|
check subprocess id to see if server is running rather than timing out
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2023-09-07T17:57:03
| 2023-09-18T19:16:34
| 2023-09-18T19:16:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The timeout for a server to start running would need to be very long for larger models. It is better to check the process ID and then wait for the server to respond (with a really long timeout) rather than relying on the timeout by itself.
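A minimal shell sketch of the idea, assuming we hold the PID and port of the spawned server (both placeholders here): poll the endpoint indefinitely, but fail fast the moment the subprocess disappears.
```bash
PID=12345        # placeholder: pid of the spawned llama server process
PORT=11434       # placeholder: port the server was asked to listen on

# Wait for the HTTP server, but bail out immediately if the process died,
# instead of burning through a fixed startup timeout.
while ! curl -sf "http://127.0.0.1:${PORT}/" >/dev/null; do
  kill -0 "$PID" 2>/dev/null || { echo "server process exited" >&2; exit 1; }
  sleep 1
done
echo "server is up"
```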
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/485/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7071
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7071/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7071/comments
|
https://api.github.com/repos/ollama/ollama/issues/7071/events
|
https://github.com/ollama/ollama/pull/7071
| 2,560,381,790
|
PR_kwDOJ0Z1Ps59UNdW
| 7,071
|
llm: Don't add BOS/EOS for tokenize requests
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-01T23:29:53
| 2024-10-01T23:46:25
| 2024-10-01T23:46:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7071",
"html_url": "https://github.com/ollama/ollama/pull/7071",
"diff_url": "https://github.com/ollama/ollama/pull/7071.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7071.patch",
"merged_at": "2024-10-01T23:46:23"
}
|
This is consistent with what server.cpp currently does. It affects things like token processing counts for embedding requests.
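A quick way to observe the effect, assuming a current `/api/embed` endpoint (the model name is illustrative): the `prompt_eval_count` field of the response should now reflect the raw tokenization, without the extra BOS/EOS tokens.
```bash
curl http://localhost:11434/api/embed -d '{
  "model": "all-minilm",
  "input": "Why is the sky blue?"
}'
```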
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7071/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1005
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1005/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1005/comments
|
https://api.github.com/repos/ollama/ollama/issues/1005/events
|
https://github.com/ollama/ollama/issues/1005
| 1,977,548,240
|
I_kwDOJ0Z1Ps513v3Q
| 1,005
|
Improved context window size management
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 9
| 2023-11-04T23:13:47
| 2024-11-27T10:08:51
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Context window size is largely manual right now – it can be specified via `{"options": {"num_ctx": 32768}}` in the API or via `PARAMETER num_ctx 32768` in the Modelfile. Otherwise the default value is `2048` (some models in the [library](https://ollama.ai/) will use a larger context window size by default).
Context size should instead be determined dynamically at runtime based on the amount of memory available.
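For reference, a sketch of the two manual paths mentioned above (model name illustrative):
```bash
# Per-request override through the API:
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "options": {"num_ctx": 32768}
}'

# Or baked into a model via its Modelfile:
#   FROM llama2
#   PARAMETER num_ctx 32768
```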
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1005/reactions",
"total_count": 60,
"+1": 57,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1005/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/611
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/611/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/611/comments
|
https://api.github.com/repos/ollama/ollama/issues/611/events
|
https://github.com/ollama/ollama/pull/611
| 1,914,479,379
|
PR_kwDOJ0Z1Ps5bSIEV
| 611
|
fix error messages for unknown commands in the repl
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-27T00:33:10
| 2023-09-28T21:19:46
| 2023-09-28T21:19:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/611",
"html_url": "https://github.com/ollama/ollama/pull/611",
"diff_url": "https://github.com/ollama/ollama/pull/611.diff",
"patch_url": "https://github.com/ollama/ollama/pull/611.patch",
"merged_at": "2023-09-28T21:19:46"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/611/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5669
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5669/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5669/comments
|
https://api.github.com/repos/ollama/ollama/issues/5669/events
|
https://github.com/ollama/ollama/issues/5669
| 2,406,831,342
|
I_kwDOJ0Z1Ps6PdVTu
| 5,669
|
"error loading llama server" error="llama runner process has terminated: exit status 0xc0000135 "
|
{
"login": "lorenzodimauro97",
"id": 50343905,
"node_id": "MDQ6VXNlcjUwMzQzOTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/50343905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorenzodimauro97",
"html_url": "https://github.com/lorenzodimauro97",
"followers_url": "https://api.github.com/users/lorenzodimauro97/followers",
"following_url": "https://api.github.com/users/lorenzodimauro97/following{/other_user}",
"gists_url": "https://api.github.com/users/lorenzodimauro97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorenzodimauro97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorenzodimauro97/subscriptions",
"organizations_url": "https://api.github.com/users/lorenzodimauro97/orgs",
"repos_url": "https://api.github.com/users/lorenzodimauro97/repos",
"events_url": "https://api.github.com/users/lorenzodimauro97/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorenzodimauro97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-07-13T10:08:56
| 2024-07-15T09:56:18
| 2024-07-15T09:56:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Cannot load any model with Ollama 0.2.3; here are some of the logs:
time=2024-07-13T12:06:59.113+02:00 level=INFO source=sched.go:179 msg="one or more GPUs detected that are unable to accurately report free memory - disabling default concurrency"
time=2024-07-13T12:06:59.126+02:00 level=INFO source=sched.go:701 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\Administrator\.ollama\models\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa gpu=0 parallel=4 available=17028874240 required="6.2 GiB"
time=2024-07-13T12:06:59.126+02:00 level=INFO source=memory.go:309 msg="offload to rocm" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[15.9 GiB]" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-07-13T12:06:59.132+02:00 level=INFO source=server.go:383 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\rocm_v6.1\\ollama_llama_server.exe --model C:\\Users\\Administrator\\.ollama\\models\\blobs\\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 61134"
time=2024-07-13T12:06:59.149+02:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-13T12:06:59.149+02:00 level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-13T12:06:59.149+02:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
time=2024-07-13T12:07:01.181+02:00 level=ERROR source=sched.go:443 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000135 "
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.2.3
|
{
"login": "lorenzodimauro97",
"id": 50343905,
"node_id": "MDQ6VXNlcjUwMzQzOTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/50343905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorenzodimauro97",
"html_url": "https://github.com/lorenzodimauro97",
"followers_url": "https://api.github.com/users/lorenzodimauro97/followers",
"following_url": "https://api.github.com/users/lorenzodimauro97/following{/other_user}",
"gists_url": "https://api.github.com/users/lorenzodimauro97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorenzodimauro97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorenzodimauro97/subscriptions",
"organizations_url": "https://api.github.com/users/lorenzodimauro97/orgs",
"repos_url": "https://api.github.com/users/lorenzodimauro97/repos",
"events_url": "https://api.github.com/users/lorenzodimauro97/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorenzodimauro97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5669/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5904
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5904/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5904/comments
|
https://api.github.com/repos/ollama/ollama/issues/5904/events
|
https://github.com/ollama/ollama/issues/5904
| 2,426,848,795
|
I_kwDOJ0Z1Ps6QpsYb
| 5,904
|
llama runner process has terminated: signal: aborted (core dumped)
|
{
"login": "Dudu0831",
"id": 88758930,
"node_id": "MDQ6VXNlcjg4NzU4OTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/88758930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dudu0831",
"html_url": "https://github.com/Dudu0831",
"followers_url": "https://api.github.com/users/Dudu0831/followers",
"following_url": "https://api.github.com/users/Dudu0831/following{/other_user}",
"gists_url": "https://api.github.com/users/Dudu0831/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dudu0831/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dudu0831/subscriptions",
"organizations_url": "https://api.github.com/users/Dudu0831/orgs",
"repos_url": "https://api.github.com/users/Dudu0831/repos",
"events_url": "https://api.github.com/users/Dudu0831/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dudu0831/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6947643302,
"node_id": "LA_kwDOJ0Z1Ps8AAAABnhyfpg",
"url": "https://api.github.com/repos/ollama/ollama/labels/create",
"name": "create",
"color": "b60205",
"default": false,
"description": "Issues relating to ollama create"
}
] |
open
| false
| null |
[] | null | 6
| 2024-07-24T07:52:33
| 2024-11-06T01:01:00
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I successfully converted jina-embeddings-v2-base-zh to GGUF through llama.cpp and imported it into Ollama.
Here is my Modelfile
> root@buaa-KVM:~/1T/ollama/Jina-AI-embedding# cat Modelfile
> FROM /root/ggml-vocab-jina-v2-zh.gguf
> PARAMETER num_ctx 8192
When I access it using /api/embed, the log reports an error:
> time=2024-07-24T15:40:34.577+08:00 level=ERROR source=sched.go:443 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped)"
Below is my complete log:
7月 24 15:40:33 buaa-KVM ollama[458186]: time=2024-07-24T15:40:33.873+08:00 level=INFO source=sched.go:495 msg="updated VRAM based on existing loaded models" gpu=GPU-4f8ced6a-2dde-5e92-be03-8d21e26bd156 library=cuda total="23.6 GiB" available="16.9 GiB"
7月 24 15:40:33 buaa-KVM ollama[458186]: time=2024-07-24T15:40:33.873+08:00 level=INFO source=sched.go:495 msg="updated VRAM based on existing loaded models" gpu=GPU-36c16c0c-392d-ffc5-13ce-2fd6b9af0668 library=cuda total="23.6 GiB" available="23.3 GiB"
7月 24 15:40:33 buaa-KVM ollama[458186]: time=2024-07-24T15:40:33.874+08:00 level=INFO source=sched.go:701 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-65a4313f43b6f94a0a8693d70efe823792303a020601ab3d4cad54cf079296c6 gpu=GPU-36c16c0c-392d-ffc5-13ce-2fd6b9af0668 parallel=4 available=24965218304 required="1.1 GiB"
7月 24 15:40:33 buaa-KVM ollama[458186]: time=2024-07-24T15:40:33.874+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=13 layers.offload=13 layers.split="" memory.available="[23.3 GiB]" memory.required.full="1.1 GiB" memory.required.partial="1.1 GiB" memory.required.kv="96.0 MiB" memory.required.allocations="[1.1 GiB]" memory.weights.total="312.3 MiB" memory.weights.repeating="222.9 MiB" memory.weights.nonrepeating="89.4 MiB" memory.graph.full="192.0 MiB" memory.graph.partial="192.0 MiB"
7月 24 15:40:33 buaa-KVM ollama[458186]: time=2024-07-24T15:40:33.874+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama259438837/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-65a4313f43b6f94a0a8693d70efe823792303a020601ab3d4cad54cf079296c6 --ctx-size 32768 --batch-size 512 --embedding --log-disable --n-gpu-layers 13 --parallel 4 --port 35263"
7月 24 15:40:33 buaa-KVM ollama[458186]: time=2024-07-24T15:40:33.875+08:00 level=INFO source=sched.go:437 msg="loaded runners" count=2
7月 24 15:40:33 buaa-KVM ollama[458186]: time=2024-07-24T15:40:33.875+08:00 level=INFO source=server.go:583 msg="waiting for llama runner to start responding"
7月 24 15:40:33 buaa-KVM ollama[458186]: time=2024-07-24T15:40:33.875+08:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server error"
7月 24 15:40:33 buaa-KVM ollama[462914]: INFO [main] build info | build=1 commit="d94c6e0" tid="140502749204480" timestamp=1721806833
7月 24 15:40:33 buaa-KVM ollama[462914]: INFO [main] system info | n_threads=32 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="140502749204480" timestamp=1721806833 total_threads=32
7月 24 15:40:33 buaa-KVM ollama[462914]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="35263" tid="140502749204480" timestamp=1721806833
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: loaded meta data with 33 key-value pairs and 196 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-65a4313f43b6f94a0a8693d70efe823792303a020601ab3d4cad54cf079296c6 (version GGUF V3 (latest))
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 0: general.architecture str = jina-bert-v2
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 1: general.type str = model
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 2: general.name str = Jina Bert Implementation
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 3: general.organization str = Jinaai
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 4: general.size_label str = 160M
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 5: general.license str = apache-2.0
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 6: general.tags arr[str,6] = ["sentence-transformers", "feature-ex...
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 7: general.languages arr[str,2] = ["en", "zh"]
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 8: jina-bert-v2.block_count u32 = 12
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 9: jina-bert-v2.context_length u32 = 8192
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 10: jina-bert-v2.embedding_length u32 = 768
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 11: jina-bert-v2.feed_forward_length u32 = 3072
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 12: jina-bert-v2.attention.head_count u32 = 12
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 13: jina-bert-v2.attention.layer_norm_epsilon f32 = 0.000000
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 14: general.file_type u32 = 1
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 15: jina-bert-v2.attention.causal bool = false
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 16: jina-bert-v2.pooling_type u32 = 1
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 18: tokenizer.ggml.pre str = jina-v2-zh
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,61056] = ["<s>", "<pad>", "</s>", "<unk>", "<m...
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,61056] = [3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, ...
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,39382] = ["t h", "i n", "a n", "e r", "th e", ...
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 0
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 2
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 24: tokenizer.ggml.unknown_token_id u32 = 3
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 25: tokenizer.ggml.seperator_token_id u32 = 2
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 26: tokenizer.ggml.padding_token_id u32 = 1
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 27: tokenizer.ggml.cls_token_id u32 = 0
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 28: tokenizer.ggml.mask_token_id u32 = 4
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 29: tokenizer.ggml.token_type_count u32 = 2
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = true
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 31: tokenizer.ggml.add_eos_token bool = true
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - kv 32: general.quantization_version u32 = 2
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - type f32: 111 tensors
7月 24 15:40:33 buaa-KVM ollama[458186]: llama_model_loader: - type f16: 85 tensors
7月 24 15:40:33 buaa-KVM ollama[458186]: llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
7月 24 15:40:33 buaa-KVM ollama[458186]: GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/src/llama.cpp:5570: unicode_cpts_from_utf8(word).size() > 0
7月 24 15:40:34 buaa-KVM ollama[458186]: Could not attach to process. If your uid matches the uid of the target
7月 24 15:40:34 buaa-KVM ollama[458186]: process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
7月 24 15:40:34 buaa-KVM ollama[458186]: again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
7月 24 15:40:34 buaa-KVM ollama[458186]: ptrace: Operation not permitted.
7月 24 15:40:34 buaa-KVM ollama[458186]: No stack.
7月 24 15:40:34 buaa-KVM ollama[458186]: The program is not being run.
7月 24 15:40:34 buaa-KVM ollama[458186]: time=2024-07-24T15:40:34.326+08:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server not responding"
7月 24 15:40:34 buaa-KVM ollama[458186]: time=2024-07-24T15:40:34.577+08:00 level=ERROR source=sched.go:443 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped)"
7月 24 15:40:34 buaa-KVM ollama[458186]: [GIN] 2024/07/24 - 15:40:34 | 500 | 882.662819ms | 192.168.1.202 | POST "/api/embed"
7月 24 15:40:39 buaa-KVM ollama[458186]: time=2024-07-24T15:40:39.737+08:00 level=WARN source=sched.go:634 msg="gpu VRAM usage didn't recover within timeout" seconds=5.160248264 model=/usr/share/ollama/.ollama/models/blobs/sha256-65a4313f43b6f94a0a8693d70efe823792303a020601ab3d4cad54cf079296c6
7月 24 15:40:39 buaa-KVM ollama[458186]: time=2024-07-24T15:40:39.987+08:00 level=WARN source=sched.go:634 msg="gpu VRAM usage didn't recover within timeout" seconds=5.41003445 model=/usr/share/ollama/.ollama/models/blobs/sha256-65a4313f43b6f94a0a8693d70efe823792303a020601ab3d4cad54cf079296c6
7月 24 15:40:40 buaa-KVM ollama[458186]: time=2024-07-24T15:40:40.238+08:00 level=WARN source=sched.go:634 msg="gpu VRAM usage didn't recover within timeout" seconds=5.660559316 model=/usr/share/ollama/.ollama/models/blobs/sha256-65a4313f43b6f94a0a8693d70efe823792303a020601ab3d4cad54cf079296c6
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.8
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5904/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5670
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5670/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5670/comments
|
https://api.github.com/repos/ollama/ollama/issues/5670/events
|
https://github.com/ollama/ollama/issues/5670
| 2,406,851,962
|
I_kwDOJ0Z1Ps6PdaV6
| 5,670
|
The usage of VRAM has significantly increased
|
{
"login": "lingyezhixing",
"id": 144504450,
"node_id": "U_kgDOCJz2gg",
"avatar_url": "https://avatars.githubusercontent.com/u/144504450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lingyezhixing",
"html_url": "https://github.com/lingyezhixing",
"followers_url": "https://api.github.com/users/lingyezhixing/followers",
"following_url": "https://api.github.com/users/lingyezhixing/following{/other_user}",
"gists_url": "https://api.github.com/users/lingyezhixing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lingyezhixing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lingyezhixing/subscriptions",
"organizations_url": "https://api.github.com/users/lingyezhixing/orgs",
"repos_url": "https://api.github.com/users/lingyezhixing/repos",
"events_url": "https://api.github.com/users/lingyezhixing/events{/privacy}",
"received_events_url": "https://api.github.com/users/lingyezhixing/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 5
| 2024-07-13T11:15:40
| 2024-10-24T02:45:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
In previous versions, I set the context length of each of my models to the maximum value that could still be fully loaded into GPU memory. However, after the update, I found that parts of them were being partially offloaded to the CPU instead. I wonder what could be causing this. The following table shows some examples.
|NAME|SIZE|PROCESSOR|
| :-: | :-: | :-: |
|glm4:9b-chat-2K-q5_K_M|8.3 GB|10%/90% CPU/GPU|
|glm4:9b-chat-10K-q4_K_M|7.8 GB|7%/93% CPU/GPU|
|codegeex4:9b-all-10K-q4_K_M|7.8 GB|7%/93% CPU/GPU|
|qwen2:7b-instruct-19K-q5_K_M|8.3 GB|13%/87% CPU/GPU|
|internlm2:7b-chat-v2.5-8K-q5_K_M|7.7 GB|4%/96% CPU/GPU|
|llama3:8b-instruct-5K-q6_K|8.2 GB|10%/90% CPU/GPU|
My graphics card is a laptop 4060 with only 8 GB of VRAM. Interestingly, even before the update, none of these models actually utilized the full capacity of my GPU memory.
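For anyone reproducing this: the table above looks like `ollama ps` output, and a possible stop-gap, assuming the new memory estimate is merely conservative, is to force the offloaded layer count with the documented `num_gpu` option (value illustrative; this can run out of VRAM if the estimate was actually right):
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "glm4:9b-chat-2K-q5_K_M",
  "prompt": "hi",
  "options": {"num_gpu": 99}
}'
```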
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.2.3
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5670/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5670/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3334
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3334/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3334/comments
|
https://api.github.com/repos/ollama/ollama/issues/3334/events
|
https://github.com/ollama/ollama/issues/3334
| 2,205,083,185
|
I_kwDOJ0Z1Ps6DbuYx
| 3,334
|
Certificate expired
|
{
"login": "cxzx150133",
"id": 13826967,
"node_id": "MDQ6VXNlcjEzODI2OTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/13826967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cxzx150133",
"html_url": "https://github.com/cxzx150133",
"followers_url": "https://api.github.com/users/cxzx150133/followers",
"following_url": "https://api.github.com/users/cxzx150133/following{/other_user}",
"gists_url": "https://api.github.com/users/cxzx150133/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cxzx150133/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cxzx150133/subscriptions",
"organizations_url": "https://api.github.com/users/cxzx150133/orgs",
"repos_url": "https://api.github.com/users/cxzx150133/repos",
"events_url": "https://api.github.com/users/cxzx150133/events{/privacy}",
"received_events_url": "https://api.github.com/users/cxzx150133/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-03-25T07:30:01
| 2024-03-25T08:45:42
| 2024-03-25T08:45:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
$ docker exec -it ollama ollama pull qwen:7b
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen/manifests/7b": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2024-03-25T07:29:23Z is after 2024-03-25T07:17:47Z
### What did you expect to see?
no error
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "cxzx150133",
"id": 13826967,
"node_id": "MDQ6VXNlcjEzODI2OTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/13826967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cxzx150133",
"html_url": "https://github.com/cxzx150133",
"followers_url": "https://api.github.com/users/cxzx150133/followers",
"following_url": "https://api.github.com/users/cxzx150133/following{/other_user}",
"gists_url": "https://api.github.com/users/cxzx150133/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cxzx150133/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cxzx150133/subscriptions",
"organizations_url": "https://api.github.com/users/cxzx150133/orgs",
"repos_url": "https://api.github.com/users/cxzx150133/repos",
"events_url": "https://api.github.com/users/cxzx150133/events{/privacy}",
"received_events_url": "https://api.github.com/users/cxzx150133/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3334/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3334/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1154
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1154/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1154/comments
|
https://api.github.com/repos/ollama/ollama/issues/1154/events
|
https://github.com/ollama/ollama/issues/1154
| 1,997,530,054
|
I_kwDOJ0Z1Ps53D-PG
| 1,154
|
Cannot push models `FROM` library models
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2023-11-16T18:51:53
| 2023-11-16T21:33:31
| 2023-11-16T21:33:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Attempting to push models with `FROM <library-model>` fails with scope errors.
**Steps to reproduce:**
1. Create a Modelfile from a library model.
```
FROM llama2
SYSTEM """
You are Mario from super mario bros, acting as an assistant.
"""
```
`ollama create <namespace>/mario -f path/to/modelfile`
2. Push the model
`ollama push <namespace>/mario`
3. The push fails:
```
retrieving manifest
Error: max retries exceeded
```
**Root cause:**
Scopes on the tokens are not being set correctly for library models. It fails on the ollama.ai side with "Scope parameter set incorrectly" due to the scope being set to `:llama2:pull`.
This is the fix:
https://github.com/jmorganca/ollama/commit/bd86eab4261ca545c6ad6384da0bbcdcb4270e61
Once I make this change, I then see 405 errors on the first push (subsequent pushes of the same model succeed after the initial 405). I have not had time to track down the cause of these 405s yet.
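For illustration only, a sketch of how a docker-registry-style scope string is assembled; the helper name and defaulting logic here are hypothetical, not Ollama's actual code:
```python
def auth_scope(name: str, action: str, resource: str = "repository") -> str:
    """Assemble a docker-registry-style token scope: <resource>:<name>:<action>."""
    if not resource:
        # An empty resource field is what produces a malformed scope such as
        # ":llama2:pull" instead of "repository:library/llama2:pull".
        raise ValueError("resource type must not be empty")
    return f"{resource}:{name}:{action}"

print(auth_scope("library/llama2", "pull"))   # repository:library/llama2:pull
print(auth_scope("someuser/mario", "push"))   # repository:someuser/mario:push
```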
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1154/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6464
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6464/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6464/comments
|
https://api.github.com/repos/ollama/ollama/issues/6464/events
|
https://github.com/ollama/ollama/issues/6464
| 2,480,991,617
|
I_kwDOJ0Z1Ps6T4O2B
| 6,464
|
Error: unsupported content type: unknown
|
{
"login": "CorrectPath",
"id": 179119218,
"node_id": "U_kgDOCq0kcg",
"avatar_url": "https://avatars.githubusercontent.com/u/179119218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CorrectPath",
"html_url": "https://github.com/CorrectPath",
"followers_url": "https://api.github.com/users/CorrectPath/followers",
"following_url": "https://api.github.com/users/CorrectPath/following{/other_user}",
"gists_url": "https://api.github.com/users/CorrectPath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CorrectPath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CorrectPath/subscriptions",
"organizations_url": "https://api.github.com/users/CorrectPath/orgs",
"repos_url": "https://api.github.com/users/CorrectPath/repos",
"events_url": "https://api.github.com/users/CorrectPath/events{/privacy}",
"received_events_url": "https://api.github.com/users/CorrectPath/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-08-22T14:50:31
| 2024-08-28T20:38:33
| 2024-08-28T20:38:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This is the first time I have tried to create a model from a GGUF file, but it failed.

model.modelfile

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6464/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/649
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/649/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/649/comments
|
https://api.github.com/repos/ollama/ollama/issues/649/events
|
https://github.com/ollama/ollama/issues/649
| 1,919,731,310
|
I_kwDOJ0Z1Ps5ybMZu
| 649
|
Request: ensemble Llamas 🦙 (`llama2:13b-ensemble`)
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 6
| 2023-09-29T18:26:00
| 2023-12-04T20:04:02
| 2023-12-04T20:04:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
From Hugging Face's Open LLM leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
A 13b model ranked somewhat highly is [`yeontaek/llama-2-13B-ensemble-v5`](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__llama-2-13B-ensemble-v5).

I believe TheBloke exposes it here via GGUF: https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF
It would be cool to add it to the [llama2](https://ollama.ai/library/llama2) offerings as `13b-ensemble`, `13b-ensemble-q4_0`.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/649/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2033
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2033/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2033/comments
|
https://api.github.com/repos/ollama/ollama/issues/2033/events
|
https://github.com/ollama/ollama/issues/2033
| 2,086,722,446
|
I_kwDOJ0Z1Ps58YNuO
| 2,033
|
Add Vulkan runner
|
{
"login": "maxwell-kalin",
"id": 62115669,
"node_id": "MDQ6VXNlcjYyMTE1NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/62115669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxwell-kalin",
"html_url": "https://github.com/maxwell-kalin",
"followers_url": "https://api.github.com/users/maxwell-kalin/followers",
"following_url": "https://api.github.com/users/maxwell-kalin/following{/other_user}",
"gists_url": "https://api.github.com/users/maxwell-kalin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxwell-kalin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxwell-kalin/subscriptions",
"organizations_url": "https://api.github.com/users/maxwell-kalin/orgs",
"repos_url": "https://api.github.com/users/maxwell-kalin/repos",
"events_url": "https://api.github.com/users/maxwell-kalin/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxwell-kalin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677491450,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgJu-g",
"url": "https://api.github.com/repos/ollama/ollama/labels/intel",
"name": "intel",
"color": "226E5B",
"default": false,
"description": "issues relating to Intel GPUs"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 30
| 2024-01-17T18:15:00
| 2025-01-21T19:49:38
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/nomic-ai/llama.cpp
GPT4All runs Mistral and Mixtral q4 models over 10x faster on my 6600M GPU
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2033/reactions",
"total_count": 40,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 36,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2033/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2102
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2102/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2102/comments
|
https://api.github.com/repos/ollama/ollama/issues/2102/events
|
https://github.com/ollama/ollama/pull/2102
| 2,091,606,470
|
PR_kwDOJ0Z1Ps5kmhq_
| 2,102
|
fix: remove overwritten model layers
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-19T23:00:15
| 2024-01-22T17:37:50
| 2024-01-22T17:37:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2102",
"html_url": "https://github.com/ollama/ollama/pull/2102",
"diff_url": "https://github.com/ollama/ollama/pull/2102.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2102.patch",
"merged_at": "2024-01-22T17:37:49"
}
|
If create overwrites a manifest, first add the older manifest's layers to the delete map so they can be cleaned up.
resolves #2097
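For context, a minimal sketch of the cleanup idea in illustrative Python (the actual change is in Go; the names here are made up): the replaced manifest's layer digests go into the delete set, minus anything the new manifest still references:
```python
def layers_to_delete(old_layers: set[str], new_layers: set[str]) -> set[str]:
    """Digests only the old manifest referenced are safe to clean up."""
    return old_layers - new_layers

old = {"sha256:aaa", "sha256:bbb", "sha256:ccc"}
new = {"sha256:bbb", "sha256:ddd"}
print(sorted(layers_to_delete(old, new)))  # ['sha256:aaa', 'sha256:ccc']
```
A real implementation would additionally skip digests still referenced by other manifests on disk.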
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2102/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5815
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5815/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5815/comments
|
https://api.github.com/repos/ollama/ollama/issues/5815/events
|
https://github.com/ollama/ollama/pull/5815
| 2,420,963,614
|
PR_kwDOJ0Z1Ps51_b-4
| 5,815
|
Adjust windows ROCm discovery
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-20T16:23:10
| 2024-07-20T23:02:58
| 2024-07-20T23:02:55
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5815",
"html_url": "https://github.com/ollama/ollama/pull/5815",
"diff_url": "https://github.com/ollama/ollama/pull/5815.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5815.patch",
"merged_at": "2024-07-20T23:02:55"
}
|
The v5 HIP library returns unsupported GPUs which won't enumerate at inference time in the runner, so this makes sure we align discovery. The gfx906 cards are no longer supported, so we shouldn't compile with that GPU type as it won't enumerate at runtime.
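Conceptually the alignment looks like this (illustrative Python; the target list below is made up, not the shipped build matrix): discovery keeps only GPUs whose gfx target the runner was actually compiled for:
```python
RUNNER_GFX_TARGETS = {"gfx1030", "gfx1100", "gfx1101", "gfx1102"}  # hypothetical set

def usable_gpus(detected: list[dict]) -> list[dict]:
    """Drop GPUs the HIP library reports but the runner cannot enumerate."""
    return [gpu for gpu in detected if gpu["gfx"] in RUNNER_GFX_TARGETS]

detected = [
    {"name": "Radeon VII", "gfx": "gfx906"},  # unsupported, filtered out
    {"name": "RX 6800", "gfx": "gfx1030"},
]
print(usable_gpus(detected))  # only the RX 6800 survives discovery
```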
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5815/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4232
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4232/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4232/comments
|
https://api.github.com/repos/ollama/ollama/issues/4232/events
|
https://github.com/ollama/ollama/pull/4232
| 2,283,891,969
|
PR_kwDOJ0Z1Ps5uyn6j
| 4,232
|
Revert "fix golangci workflow not enable gofmt and goimports"
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-07T17:36:15
| 2024-05-09T08:45:07
| 2024-05-07T17:39:37
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4232",
"html_url": "https://github.com/ollama/ollama/pull/4232",
"diff_url": "https://github.com/ollama/ollama/pull/4232.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4232.patch",
"merged_at": "2024-05-07T17:39:37"
}
|
Reverts ollama/ollama#4190
gofmt is still a problem on Windows; see https://github.com/ollama/ollama/actions/runs/8989369091/job/24692319408?pr=4153
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4232/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4414
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4414/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4414/comments
|
https://api.github.com/repos/ollama/ollama/issues/4414/events
|
https://github.com/ollama/ollama/pull/4414
| 2,294,008,756
|
PR_kwDOJ0Z1Ps5vUtZD
| 4,414
|
update llama.cpp submodule to `614d3b9`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-13T23:23:19
| 2024-05-16T20:53:10
| 2024-05-16T20:53:10
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4414",
"html_url": "https://github.com/ollama/ollama/pull/4414",
"diff_url": "https://github.com/ollama/ollama/pull/4414.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4414.patch",
"merged_at": "2024-05-16T20:53:09"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4414/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4414/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2904
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2904/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2904/comments
|
https://api.github.com/repos/ollama/ollama/issues/2904/events
|
https://github.com/ollama/ollama/issues/2904
| 2,165,614,507
|
I_kwDOJ0Z1Ps6BFKer
| 2,904
|
cuMemCreate with gpu nvidia m2000
|
{
"login": "aymengazzah",
"id": 152094579,
"node_id": "U_kgDOCRDHcw",
"avatar_url": "https://avatars.githubusercontent.com/u/152094579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aymengazzah",
"html_url": "https://github.com/aymengazzah",
"followers_url": "https://api.github.com/users/aymengazzah/followers",
"following_url": "https://api.github.com/users/aymengazzah/following{/other_user}",
"gists_url": "https://api.github.com/users/aymengazzah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aymengazzah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aymengazzah/subscriptions",
"organizations_url": "https://api.github.com/users/aymengazzah/orgs",
"repos_url": "https://api.github.com/users/aymengazzah/repos",
"events_url": "https://api.github.com/users/aymengazzah/events{/privacy}",
"received_events_url": "https://api.github.com/users/aymengazzah/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-03T23:10:38
| 2024-03-05T20:25:02
| 2024-03-05T20:25:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
"Hi, is anyone else experiencing this error with the GPU? The GPU successfully passes through for video transcoding in another container app (Emby/Plex), but it shows an error for all ollama models."
### Error library
` level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama4235720163/cuda_v11/libext_server.so Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama4235720163/cuda_v11/libext_server.so: undefined symbol: cuMemCreate"`
### docker start
```
level=INFO source=images.go:710 msg="total blobs: 12"
level=INFO source=images.go:717 msg="total unused blobs removed: 0"
level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.27)"
level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu cpu_avx cuda_v11 rocm_v6 cpu_avx2 rocm_v5]"
level=INFO source=gpu.go:94 msg="Detecting GPU type"
level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.418.226.00]"
level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 5.2"
```
### Start request
```
level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 5.2"
level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 5.2"
level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama4235720163/cuda_v11/libext_server.so Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama4235720163/cuda_v11/libext_server.so: undefined symbol: cuMemCreate"
level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama4235720163/cpu_avx2/libext_server.so"
level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /root/.ollama/models/blobs/sha256:2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest))
```
### Check nvidia
```
root@srv-01:~$ sudo docker exec -it ollama bash
root@bc5e85c49508:/# nvidia-smi
Sun Mar 3 23:07:23 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.226.00 Driver Version: 418.226.00 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro M2000 On | 00000000:03:00.0 On | N/A |
| 56% 31C P8 8W / 75W | 1MiB / 4040MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
### environment
os: debian 10.13
docker: 25.0.3
cpu: E5-2698 v4
gpu: nvidia quadro m2000 4GB
### compose.yml
```
version: '3.8'
services:
ollama:
image: ollama/ollama:latest
container_name: ollama
restart: unless-stopped
user: "0:0"
userns_mode: host
volumes:
- ./:/root/.ollama
deploy:
resources:
reservations:
devices:
- driver: ${OLLAMA_GPU_DRIVER-nvidia}
count: ${OLLAMA_GPU_COUNT-1}
capabilities:
- gpu
```
|
{
"login": "aymengazzah",
"id": 152094579,
"node_id": "U_kgDOCRDHcw",
"avatar_url": "https://avatars.githubusercontent.com/u/152094579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aymengazzah",
"html_url": "https://github.com/aymengazzah",
"followers_url": "https://api.github.com/users/aymengazzah/followers",
"following_url": "https://api.github.com/users/aymengazzah/following{/other_user}",
"gists_url": "https://api.github.com/users/aymengazzah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aymengazzah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aymengazzah/subscriptions",
"organizations_url": "https://api.github.com/users/aymengazzah/orgs",
"repos_url": "https://api.github.com/users/aymengazzah/repos",
"events_url": "https://api.github.com/users/aymengazzah/events{/privacy}",
"received_events_url": "https://api.github.com/users/aymengazzah/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2904/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5802
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5802/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5802/comments
|
https://api.github.com/repos/ollama/ollama/issues/5802/events
|
https://github.com/ollama/ollama/pull/5802
| 2,420,416,791
|
PR_kwDOJ0Z1Ps519kW8
| 5,802
|
preserve last assistant message
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-20T00:50:59
| 2024-07-20T03:19:28
| 2024-07-20T03:19:26
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5802",
"html_url": "https://github.com/ollama/ollama/pull/5802",
"diff_url": "https://github.com/ollama/ollama/pull/5802.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5802.patch",
"merged_at": "2024-07-20T03:19:26"
}
|
Fixes https://github.com/ollama/ollama/issues/5775
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5802/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5802/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5976
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5976/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5976/comments
|
https://api.github.com/repos/ollama/ollama/issues/5976/events
|
https://github.com/ollama/ollama/issues/5976
| 2,431,751,366
|
I_kwDOJ0Z1Ps6Q8ZTG
| 5,976
|
Unnecessary quotes when calling a tool
|
{
"login": "napa3um",
"id": 665538,
"node_id": "MDQ6VXNlcjY2NTUzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/665538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/napa3um",
"html_url": "https://github.com/napa3um",
"followers_url": "https://api.github.com/users/napa3um/followers",
"following_url": "https://api.github.com/users/napa3um/following{/other_user}",
"gists_url": "https://api.github.com/users/napa3um/gists{/gist_id}",
"starred_url": "https://api.github.com/users/napa3um/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/napa3um/subscriptions",
"organizations_url": "https://api.github.com/users/napa3um/orgs",
"repos_url": "https://api.github.com/users/napa3um/repos",
"events_url": "https://api.github.com/users/napa3um/events{/privacy}",
"received_events_url": "https://api.github.com/users/napa3um/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-07-26T08:52:09
| 2024-07-26T08:52:09
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm using **mistral-nemo:12b-instruct-2407-q4_1**
I'm trying to reproduce this example - https://github.com/ollama/ollama-js/blob/main/examples/tools/tools.ts
```javascript
tools: [
{
type: 'function',
function: {
name: 'eval',
description: 'Evaluates an arithmetic expression in the form of a JS eval function. Used when a calculator is needed.',
parameters: {
type: 'object',
properties: {
expression: {
type: 'string',
description: 'Arithmetic expression',
}
},
required: ['expression'],
},
},
},
],
```
Everything works, but sometimes I get the following (in the actual output there is nothing between the backticks; a filler character is inserted below so the nested fences render):
```
The model didn't use the function. Its response was:
``⠀`
{
"jsonrpc": "2.0",
"method": "eval",
"params": {
"expression": "5234 / 6453 * 23456"
}
}
``⠀`
```
That is, sometimes the model wraps JSON in triple quotes, and Ollama interprets this as a text response rather than a tool call.
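A possible client-side workaround until this is handled upstream (my own sketch, not an official Ollama API): strip a surrounding code fence and try to parse JSON before treating the reply as plain text:
```python
import json
import re

# Matches a reply wholly wrapped in a fenced code block, with an optional language tag.
FENCE = re.compile(r"^\s*```[\w-]*\s*\n(.*)\n\s*```\s*$", re.DOTALL)

def maybe_tool_call(content: str):
    """Return parsed JSON if the reply is (possibly fenced) JSON, else None."""
    m = FENCE.match(content)
    if m:
        content = m.group(1)
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        return None

reply = '```\n{"jsonrpc": "2.0", "method": "eval", "params": {"expression": "1+2"}}\n```'
print(maybe_tool_call(reply))  # parsed dict instead of a text response
```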
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.0
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5976/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/5976/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5392
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5392/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5392/comments
|
https://api.github.com/repos/ollama/ollama/issues/5392/events
|
https://github.com/ollama/ollama/pull/5392
| 2,382,257,842
|
PR_kwDOJ0Z1Ps5z_MFu
| 5,392
|
add ppc64le to code issues 796
|
{
"login": "ALutz273",
"id": 72616997,
"node_id": "MDQ6VXNlcjcyNjE2OTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/72616997?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ALutz273",
"html_url": "https://github.com/ALutz273",
"followers_url": "https://api.github.com/users/ALutz273/followers",
"following_url": "https://api.github.com/users/ALutz273/following{/other_user}",
"gists_url": "https://api.github.com/users/ALutz273/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ALutz273/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ALutz273/subscriptions",
"organizations_url": "https://api.github.com/users/ALutz273/orgs",
"repos_url": "https://api.github.com/users/ALutz273/repos",
"events_url": "https://api.github.com/users/ALutz273/events{/privacy}",
"received_events_url": "https://api.github.com/users/ALutz273/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-30T13:34:26
| 2024-11-08T15:51:36
| 2024-11-08T15:51:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5392",
"html_url": "https://github.com/ollama/ollama/pull/5392",
"diff_url": "https://github.com/ollama/ollama/pull/5392.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5392.patch",
"merged_at": null
}
|
I tested it on a Power9 machine and the change worked. Unfortunately, I don't have a GPU (CUDA) in that machine yet.
|
{
"login": "ALutz273",
"id": 72616997,
"node_id": "MDQ6VXNlcjcyNjE2OTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/72616997?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ALutz273",
"html_url": "https://github.com/ALutz273",
"followers_url": "https://api.github.com/users/ALutz273/followers",
"following_url": "https://api.github.com/users/ALutz273/following{/other_user}",
"gists_url": "https://api.github.com/users/ALutz273/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ALutz273/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ALutz273/subscriptions",
"organizations_url": "https://api.github.com/users/ALutz273/orgs",
"repos_url": "https://api.github.com/users/ALutz273/repos",
"events_url": "https://api.github.com/users/ALutz273/events{/privacy}",
"received_events_url": "https://api.github.com/users/ALutz273/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5392/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7993
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7993/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7993/comments
|
https://api.github.com/repos/ollama/ollama/issues/7993/events
|
https://github.com/ollama/ollama/issues/7993
| 2,724,935,659
|
I_kwDOJ0Z1Ps6iazfr
| 7,993
|
Structured generation cannot handle self referencing (recursion)
|
{
"login": "CakeCrusher",
"id": 37946988,
"node_id": "MDQ6VXNlcjM3OTQ2OTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/37946988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CakeCrusher",
"html_url": "https://github.com/CakeCrusher",
"followers_url": "https://api.github.com/users/CakeCrusher/followers",
"following_url": "https://api.github.com/users/CakeCrusher/following{/other_user}",
"gists_url": "https://api.github.com/users/CakeCrusher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CakeCrusher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CakeCrusher/subscriptions",
"organizations_url": "https://api.github.com/users/CakeCrusher/orgs",
"repos_url": "https://api.github.com/users/CakeCrusher/repos",
"events_url": "https://api.github.com/users/CakeCrusher/events{/privacy}",
"received_events_url": "https://api.github.com/users/CakeCrusher/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-12-08T03:51:25
| 2025-01-29T17:57:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama structured generation cannot handle self-referencing recursion:
```py
import json
from pydantic import BaseModel, Field
from typing import Optional
class Dossier(BaseModel):
"""Build a profile for the user"""
name: str = Field(..., description="The name of the user")
age: int = Field(..., description="The age of the user")
friends: list["Dossier"] = []
print(json.dumps(Dossier.model_json_schema(), indent=2))
from ollama import chat
response = chat(
messages=[
{
'role': 'user',
'content': 'Hello my name is Tom I am 13 years old, my friend Bob is 14 years old'
}
],
model='llama3.1:8b-instruct-q2_K',
format=Dossier.model_json_schema()
)
dossier = Dossier.model_validate_json(response.message.content)
print(dossier.model_dump_json(indent=2))
```
Output:
```
{
"$defs": {
"Dossier": {
"description": "Build a profile for the user",
"properties": {
"name": {
"description": "The name of the user",
"title": "Name",
"type": "string"
},
"age": {
"description": "The age of the user",
"title": "Age",
"type": "integer"
},
"friends": {
"default": [],
"items": {
"$ref": "#/$defs/Dossier"
},
"title": "Friends",
"type": "array"
}
},
"required": [
"name",
"age"
],
"title": "Dossier",
"type": "object"
}
},
"$ref": "#/$defs/Dossier"
}
```
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[54], [line 25](vscode-notebook-cell:?execution_count=54&line=25)
[13](vscode-notebook-cell:?execution_count=54&line=13) from ollama import chat
[15](vscode-notebook-cell:?execution_count=54&line=15) response = chat(
[16](vscode-notebook-cell:?execution_count=54&line=16) messages=[
[17](vscode-notebook-cell:?execution_count=54&line=17) {
(...)
[23](vscode-notebook-cell:?execution_count=54&line=23) format=Dossier.model_json_schema()
[24](vscode-notebook-cell:?execution_count=54&line=24) )
---> [25](vscode-notebook-cell:?execution_count=54&line=25) dossier = Dossier.model_validate_json(response.message.content)
[26](vscode-notebook-cell:?execution_count=54&line=26) print(dossier.model_dump_json(indent=2))
File c:\Notes\ollama\.venv\lib\site-packages\pydantic\main.py:656, in BaseModel.model_validate_json(cls, json_data, strict, context)
[654](file:///C:/Notes/ollama/.venv/lib/site-packages/pydantic/main.py:654) # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
[655](file:///C:/Notes/ollama/.venv/lib/site-packages/pydantic/main.py:655) __tracebackhide__ = True
--> [656](file:///C:/Notes/ollama/.venv/lib/site-packages/pydantic/main.py:656) return cls.__pydantic_validator__.validate_json(json_data, strict=strict, context=context)
ValidationError: 1 validation error for Dossier
Invalid JSON: expected value at line 1 column 1 [type=json_invalid, input_value="So you're 13 and your fr...m outside of class too?", input_type=str]
For further information visit https://errors.pydantic.dev/2.10/v/json_invalid
```
When the schema is destructured by hand (the recursive field split out into its own non-recursive model), there is no problem:
```py
import json
from pydantic import BaseModel, Field
from typing import Optional
class DossierInner(BaseModel):
"""Build a profile for the user"""
name: str = Field(..., description="The name of the user")
age: int = Field(..., description="The age of the user")
class Dossier(BaseModel):
"""Build a profile for the user"""
name: str = Field(..., description="The name of the user")
age: int = Field(..., description="The age of the user")
friends: list[DossierInner] = []
print(json.dumps(Dossier.model_json_schema(), indent=2))
from ollama import chat
response = chat(
messages=[
{
'role': 'user',
'content': 'Hello my name is Tom I am 13 years old, my friend Bob is 14 years old'
}
],
model='llama3.1:8b-instruct-q2_K',
format=Dossier.model_json_schema()
)
dossier = Dossier.model_validate_json(response.message.content)
print(dossier.model_dump_json(indent=2))
```
Output:
```
{
"$defs": {
"DossierInner": {
"description": "Build a profile for the user",
"properties": {
"name": {
"description": "The name of the user",
"title": "Name",
"type": "string"
},
"age": {
"description": "The age of the user",
"title": "Age",
"type": "integer"
}
},
"required": [
"name",
"age"
],
"title": "DossierInner",
"type": "object"
}
},
"description": "Build a profile for the user",
"properties": {
"name": {
"description": "The name of the user",
"title": "Name",
"type": "string"
},
"age": {
"description": "The age of the user",
"title": "Age",
"type": "integer"
},
"friends": {
"default": [],
"items": {
"$ref": "#/$defs/DossierInner"
},
"title": "Friends",
"type": "array"
}
},
"required": [
"name",
"age"
],
"title": "Dossier",
"type": "object"
}
{
"name": "Tom",
"age": -1,
"friends": [
{
"name": "Bob",
"age": -2
}
]
}
```
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7993/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7993/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1092
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1092/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1092/comments
|
https://api.github.com/repos/ollama/ollama/issues/1092/events
|
https://github.com/ollama/ollama/issues/1092
| 1,989,063,350
|
I_kwDOJ0Z1Ps52jrK2
| 1,092
|
build failure: `APPLE_IDENTITY: unbound variable`
|
{
"login": "jpmcb",
"id": 23109390,
"node_id": "MDQ6VXNlcjIzMTA5Mzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/23109390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jpmcb",
"html_url": "https://github.com/jpmcb",
"followers_url": "https://api.github.com/users/jpmcb/followers",
"following_url": "https://api.github.com/users/jpmcb/following{/other_user}",
"gists_url": "https://api.github.com/users/jpmcb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jpmcb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpmcb/subscriptions",
"organizations_url": "https://api.github.com/users/jpmcb/orgs",
"repos_url": "https://api.github.com/users/jpmcb/repos",
"events_url": "https://api.github.com/users/jpmcb/events{/privacy}",
"received_events_url": "https://api.github.com/users/jpmcb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-11T18:04:33
| 2023-11-12T22:25:08
| 2023-11-12T22:25:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Attempting to build on the darwin platform using the `scripts/build_darwin.sh` script results in the following error:
```
./scripts/build_darwin.sh: line 17: APPLE_IDENTITY: unbound variable
```
This is after `go generate` (with `cmake` for the llama.cpp targets) and the `ollama` binary have completed building:
```
❯ ls -la dist
total 71664
drwxr-xr-x@ 3 jpmcb staff 96 Nov 11 10:52 .
drwxr-xr-x@ 28 jpmcb staff 896 Nov 11 10:46 ..
-rwxr-xr-x@ 1 jpmcb staff 36688338 Nov 11 10:52 ollama
```
There appear to be a few unbound variables (`APPLE_IDENTITY`, `APPLE_ID`, `APPLE_PASSWORD`, `APPLE_TEAM_ID`) which I'm guessing are used to sign and notarize the binaries so they run smoothly on Macs without Gatekeeper warnings.
I more or less just want a way to build and run locally without having to sign any binaries. `go generate ./... && go build -o dist/ollama-dev` would probably work fine, but I'm wondering if there's a more official way.
---
Some additional details on my system:
```
❯ system_profiler SPSoftwareDataType SPHardwareDataType
Software:
System Software Overview:
System Version: macOS 13.3 (22E252)
Kernel Version: Darwin 22.4.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Hardware:
Hardware Overview:
Model Name: MacBook Pro
Chip: Apple M2 Max
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 32 GB
System Firmware Version: 8422.100.650
OS Loader Version: 8422.100.650
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1092/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/8326
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8326/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8326/comments
|
https://api.github.com/repos/ollama/ollama/issues/8326/events
|
https://github.com/ollama/ollama/issues/8326
| 2,771,707,302
|
I_kwDOJ0Z1Ps6lNOWm
| 8,326
|
Error: pull model manifest: 400: The specified repository contains sharded GGUF. Ollama does not support this yet.
|
{
"login": "OnceCrazyer",
"id": 16172911,
"node_id": "MDQ6VXNlcjE2MTcyOTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/16172911?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OnceCrazyer",
"html_url": "https://github.com/OnceCrazyer",
"followers_url": "https://api.github.com/users/OnceCrazyer/followers",
"following_url": "https://api.github.com/users/OnceCrazyer/following{/other_user}",
"gists_url": "https://api.github.com/users/OnceCrazyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OnceCrazyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OnceCrazyer/subscriptions",
"organizations_url": "https://api.github.com/users/OnceCrazyer/orgs",
"repos_url": "https://api.github.com/users/OnceCrazyer/repos",
"events_url": "https://api.github.com/users/OnceCrazyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/OnceCrazyer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2025-01-07T01:17:03
| 2025-01-24T09:44:14
| 2025-01-24T09:44:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Error: pull model manifest: 400: The specified repository contains sharded GGUF. Ollama does not support this yet.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8326/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/532
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/532/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/532/comments
|
https://api.github.com/repos/ollama/ollama/issues/532/events
|
https://github.com/ollama/ollama/pull/532
| 1,897,719,368
|
PR_kwDOJ0Z1Ps5aZ0p7
| 532
|
remove `.First`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-15T05:12:25
| 2024-01-09T18:58:37
| 2024-01-09T18:58:37
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/532",
"html_url": "https://github.com/ollama/ollama/pull/532",
"diff_url": "https://github.com/ollama/ollama/pull/532.diff",
"patch_url": "https://github.com/ollama/ollama/pull/532.patch",
"merged_at": null
}
|
This change removes the need for `.First`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/532/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8453
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8453/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8453/comments
|
https://api.github.com/repos/ollama/ollama/issues/8453/events
|
https://github.com/ollama/ollama/issues/8453
| 2,792,105,255
|
I_kwDOJ0Z1Ps6mbCUn
| 8,453
|
support ReaderLM-v2
|
{
"login": "sunburst-yz",
"id": 37734140,
"node_id": "MDQ6VXNlcjM3NzM0MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/37734140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunburst-yz",
"html_url": "https://github.com/sunburst-yz",
"followers_url": "https://api.github.com/users/sunburst-yz/followers",
"following_url": "https://api.github.com/users/sunburst-yz/following{/other_user}",
"gists_url": "https://api.github.com/users/sunburst-yz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunburst-yz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunburst-yz/subscriptions",
"organizations_url": "https://api.github.com/users/sunburst-yz/orgs",
"repos_url": "https://api.github.com/users/sunburst-yz/repos",
"events_url": "https://api.github.com/users/sunburst-yz/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunburst-yz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 3
| 2025-01-16T09:02:14
| 2025-01-19T18:33:04
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/jinaai/ReaderLM-v2
ReaderLM-v2 is specialized for tasks involving HTML parsing, transformation, and text extraction.
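Assuming a GGUF conversion becomes importable, usage would presumably follow the standard chat flow. A hypothetical sketch with the `ollama` Python client; the `reader-lm-v2` model name below is a placeholder for a locally created model, not an official tag:
```python
from ollama import chat

# 'reader-lm-v2' is a hypothetical local name (e.g. created via
# `ollama create` from a converted checkpoint); no official tag exists yet.
html = '<html><body><h1>Title</h1><p>Hello <b>world</b>!</p></body></html>'

response = chat(
    model='reader-lm-v2',
    messages=[{
        'role': 'user',
        'content': f'Extract the main content of this HTML as Markdown:\n{html}',
    }],
)
print(response.message.content)
```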
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8453/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8453/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2281
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2281/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2281/comments
|
https://api.github.com/repos/ollama/ollama/issues/2281/events
|
https://github.com/ollama/ollama/issues/2281
| 2,108,424,779
|
I_kwDOJ0Z1Ps59rAJL
| 2,281
|
Support GPU runners with AVX2
|
{
"login": "hyjwei",
"id": 76876891,
"node_id": "MDQ6VXNlcjc2ODc2ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/76876891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyjwei",
"html_url": "https://github.com/hyjwei",
"followers_url": "https://api.github.com/users/hyjwei/followers",
"following_url": "https://api.github.com/users/hyjwei/following{/other_user}",
"gists_url": "https://api.github.com/users/hyjwei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hyjwei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hyjwei/subscriptions",
"organizations_url": "https://api.github.com/users/hyjwei/orgs",
"repos_url": "https://api.github.com/users/hyjwei/repos",
"events_url": "https://api.github.com/users/hyjwei/events{/privacy}",
"received_events_url": "https://api.github.com/users/hyjwei/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
},
{
"id": 7700262114,
"node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g",
"url": "https://api.github.com/repos/ollama/ollama/labels/build",
"name": "build",
"color": "006b75",
"default": false,
"description": "Issues relating to building ollama from source"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-01-30T17:47:16
| 2024-12-10T17:47:22
| 2024-12-10T17:47:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am running ollama on an i7-14700K, which supports AVX2 and AVX_VNNI, and a GeForce GTX 1060.
After reading #2205, I enabled `OLLAMA_DEBUG=1` to check whether ollama utilizes AVX2 on this CPU. But unlike that issue, I couldn't get ollama to use AVX2. The debug log shows:
```
time=2024-01-30T12:27:26.016-05:00 level=INFO source=/tmp/ollama/gpu/gpu.go:146 msg="CUDA Compute Capability detected: 6.1"
time=2024-01-30T12:27:26.016-05:00 level=INFO source=/tmp/ollama/gpu/cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama1660685050/cuda_v12/libext_server.so
time=2024-01-30T12:27:26.032-05:00 level=INFO source=/tmp/ollama/llm/dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1660685050/cuda_v12/libext_server.so"
time=2024-01-30T12:27:26.032-05:00 level=INFO source=/tmp/ollama/llm/dyn_ext_server.go:145 msg="Initializing llama server"
[1706635646] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
[1706635646] Performing pre-initialization of GPU
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: yes
ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce GTX 1060 3GB, compute capability 6.1, VMM: yes
```
Thus ollama does detect the GPU and also reports `CPU has AVX2`. However, when initializing the server, it shows `AVX2 = 0` as well as `AVX_VNNI = 0`.
I also followed the instructions [here](https://github.com/ollama/ollama/blob/main/docs/development.md), setting `OLLAMA_CUSTOM_CPU_DEFS="-DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_F16C=on -DLLAMA_FMA=on"`, to build the binary locally with AVX2 support. However, the result is the same as with the released binary, and I still get `AVX_VNNI = 0 | AVX2 = 0`. How can I make ollama use AVX2 on my CPU?
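As a sanity check that the hardware side is fine and the problem is in the build, the flags the CPU actually advertises can be read directly (a minimal sketch; Linux-only, since it parses `/proc/cpuinfo`):
```python
# Print whether the CPU advertises AVX / AVX2 / AVX-VNNI (Linux only).
flags = set()
with open('/proc/cpuinfo') as f:
    for line in f:
        if line.startswith('flags'):
            flags.update(line.split(':', 1)[1].split())
            break

for feature in ('avx', 'avx2', 'avx_vnni'):
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```
On an i7-14700K all three should print `yes`, which would confirm that the `AVX2 = 0` report comes from the runner build rather than the hardware.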
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2281/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1508
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1508/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1508/comments
|
https://api.github.com/repos/ollama/ollama/issues/1508/events
|
https://github.com/ollama/ollama/issues/1508
| 2,040,308,485
|
I_kwDOJ0Z1Ps55nKMF
| 1,508
|
Error: llama runner process has terminated on M2
|
{
"login": "milioe",
"id": 80537193,
"node_id": "MDQ6VXNlcjgwNTM3MTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/80537193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/milioe",
"html_url": "https://github.com/milioe",
"followers_url": "https://api.github.com/users/milioe/followers",
"following_url": "https://api.github.com/users/milioe/following{/other_user}",
"gists_url": "https://api.github.com/users/milioe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/milioe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/milioe/subscriptions",
"organizations_url": "https://api.github.com/users/milioe/orgs",
"repos_url": "https://api.github.com/users/milioe/repos",
"events_url": "https://api.github.com/users/milioe/events{/privacy}",
"received_events_url": "https://api.github.com/users/milioe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-12-13T19:07:19
| 2023-12-17T16:02:36
| 2023-12-14T04:29:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm currently running Ollama on a MacBook Air M2 (8 GB).
I first installed Ollama through `brew install ollama` and got `Error: llama runner process has terminated` after pulling and running `mistral:instruct` and `mistral:latest`.
After that, I uninstalled it using `brew uninstall ollama` and reinstalled it from the website. I ran the following commands:
* `ollama pull mistral:instruct`
* `ollama run mistral`
and I got the same error:
`Error: llama runner process has terminated`
I checked the logs using `cat ~/.ollama/logs/server.log` and I got this:
```
2023/12/13 13:01:18 llama.go:434: starting llama runner
2023/12/13 13:01:18 llama.go:492: waiting for llama runner to start responding
{"timestamp":1702494078,"level":"INFO","function":"main","line":2656,"message":"build info","build":417,"commit":"23b5e12"}
{"timestamp":1702494078,"level":"INFO","function":"main","line":2663,"message":"system info","n_threads":4,"n_threads_batch":-1,"total_threads":8,"system_info":"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | "}
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /Users/emiliosandoval/.ollama/models/blobs/sha256:c70fa74a8e81c3bd041cc2c30152fe6e251fdc915a3792147147a5c06bc4b309 (version GGUF V3 (latest))
llama_model_loader: - tensor 0: token_embd.weight q4_0 [ 4096, 32000, 1, 1 ]
llama_model_loader: - tensor 1: blk.0.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 2: blk.0.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 3: blk.0.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 4: blk.0.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 5: blk.0.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 6: blk.0.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 7: blk.0.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 8: blk.0.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 9: blk.0.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 10: blk.1.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 11: blk.1.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 12: blk.1.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 13: blk.1.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 14: blk.1.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 15: blk.1.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 16: blk.1.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 17: blk.1.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 18: blk.1.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 19: blk.2.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 20: blk.2.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 21: blk.2.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 22: blk.2.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 23: blk.2.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 24: blk.2.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 25: blk.2.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 26: blk.2.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 27: blk.2.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 28: blk.3.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 29: blk.3.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 30: blk.3.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 31: blk.3.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 32: blk.3.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 33: blk.3.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 34: blk.3.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 35: blk.3.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 36: blk.3.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 37: blk.4.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 38: blk.4.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 39: blk.4.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 40: blk.4.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 41: blk.4.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 42: blk.4.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 43: blk.4.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 44: blk.4.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 45: blk.4.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 46: blk.5.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 47: blk.5.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 48: blk.5.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 49: blk.5.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 50: blk.5.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 51: blk.5.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 52: blk.5.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 53: blk.5.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 54: blk.5.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 55: blk.6.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 56: blk.6.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 57: blk.6.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 58: blk.6.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 59: blk.6.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 60: blk.6.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 61: blk.6.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 62: blk.6.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 63: blk.6.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 64: blk.7.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 65: blk.7.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 66: blk.7.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 67: blk.7.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 68: blk.7.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 69: blk.7.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 70: blk.7.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 71: blk.7.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 72: blk.7.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 73: blk.8.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 74: blk.8.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 75: blk.8.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 76: blk.8.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 77: blk.8.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 78: blk.8.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 79: blk.8.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 80: blk.8.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 81: blk.8.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 82: blk.9.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 83: blk.9.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 84: blk.9.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 85: blk.9.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 86: blk.9.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 87: blk.9.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 88: blk.9.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 89: blk.9.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 90: blk.9.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 91: blk.10.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 92: blk.10.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 93: blk.10.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 94: blk.10.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 95: blk.10.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 96: blk.10.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 97: blk.10.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 98: blk.10.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 99: blk.10.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 100: blk.11.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 101: blk.11.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 102: blk.11.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 103: blk.11.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 104: blk.11.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 105: blk.11.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 106: blk.11.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 107: blk.11.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 108: blk.11.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 109: blk.12.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 110: blk.12.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 111: blk.12.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 112: blk.12.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 113: blk.12.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 114: blk.12.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 115: blk.12.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 116: blk.12.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 117: blk.12.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 118: blk.13.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 119: blk.13.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 120: blk.13.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 121: blk.13.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 122: blk.13.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 123: blk.13.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 124: blk.13.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 125: blk.13.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 126: blk.13.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 127: blk.14.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 128: blk.14.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 129: blk.14.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 130: blk.14.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 131: blk.14.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 132: blk.14.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 133: blk.14.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 134: blk.14.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 135: blk.14.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 136: blk.15.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 137: blk.15.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 138: blk.15.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 139: blk.15.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 140: blk.15.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 141: blk.15.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 142: blk.15.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 143: blk.15.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 144: blk.15.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 145: blk.16.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 146: blk.16.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 147: blk.16.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 148: blk.16.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 149: blk.16.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 150: blk.16.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 151: blk.16.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 152: blk.16.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 153: blk.16.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 154: blk.17.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 155: blk.17.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 156: blk.17.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 157: blk.17.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 158: blk.17.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 159: blk.17.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 160: blk.17.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 161: blk.17.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 162: blk.17.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 163: blk.18.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 164: blk.18.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 165: blk.18.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 166: blk.18.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 167: blk.18.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 168: blk.18.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 169: blk.18.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 170: blk.18.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 171: blk.18.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 172: blk.19.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 173: blk.19.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 174: blk.19.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 175: blk.19.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 176: blk.19.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 177: blk.19.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 178: blk.19.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 179: blk.19.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 180: blk.19.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 181: blk.20.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 182: blk.20.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 183: blk.20.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 184: blk.20.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 185: blk.20.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 186: blk.20.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 187: blk.20.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 188: blk.20.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 189: blk.20.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 190: blk.21.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 191: blk.21.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 192: blk.21.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 193: blk.21.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 194: blk.21.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 195: blk.21.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 196: blk.21.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 197: blk.21.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 198: blk.21.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 199: blk.22.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 200: blk.22.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 201: blk.22.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 202: blk.22.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 203: blk.22.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 204: blk.22.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 205: blk.22.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 206: blk.22.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 207: blk.22.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 208: blk.23.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 209: blk.23.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 210: blk.23.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 211: blk.23.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 212: blk.23.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 213: blk.23.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 214: blk.23.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 215: blk.23.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 216: blk.23.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 217: blk.24.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 218: blk.24.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 219: blk.24.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 220: blk.24.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 221: blk.24.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 222: blk.24.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 223: blk.24.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 224: blk.24.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 225: blk.24.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 226: blk.25.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 227: blk.25.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 228: blk.25.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 229: blk.25.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 230: blk.25.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 231: blk.25.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 232: blk.25.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 233: blk.25.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 234: blk.25.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 235: blk.26.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 236: blk.26.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 237: blk.26.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 238: blk.26.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 239: blk.26.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 240: blk.26.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 241: blk.26.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 242: blk.26.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 243: blk.26.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 244: blk.27.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 245: blk.27.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 246: blk.27.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 247: blk.27.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 248: blk.27.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 249: blk.27.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 250: blk.27.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 251: blk.27.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 252: blk.27.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 253: blk.28.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 254: blk.28.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 255: blk.28.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 256: blk.28.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 257: blk.28.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 258: blk.28.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 259: blk.28.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 260: blk.28.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 261: blk.28.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 262: blk.29.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 263: blk.29.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 264: blk.29.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 265: blk.29.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 266: blk.29.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 267: blk.29.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 268: blk.29.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 269: blk.29.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 270: blk.29.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 271: blk.30.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 272: blk.30.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 273: blk.30.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 274: blk.30.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 275: blk.30.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 276: blk.30.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 277: blk.30.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 278: blk.30.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 279: blk.30.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 280: blk.31.attn_q.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 281: blk.31.attn_k.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 282: blk.31.attn_v.weight q4_0 [ 4096, 1024, 1, 1 ]
llama_model_loader: - tensor 283: blk.31.attn_output.weight q4_0 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 284: blk.31.ffn_gate.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 285: blk.31.ffn_up.weight q4_0 [ 4096, 14336, 1, 1 ]
llama_model_loader: - tensor 286: blk.31.ffn_down.weight q4_0 [ 14336, 4096, 1, 1 ]
llama_model_loader: - tensor 287: blk.31.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 288: blk.31.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 289: output_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 290: output.weight q6_K [ 4096, 32000, 1, 1 ]
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = mostly Q4_0
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name = mistralai
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: mem required = 3917.97 MiB
..........
.....................
...............
..............
...........
.........
........
......
....
llama_new_context_with_model: n_ctx = 32768
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 4096.00 MiB
llama_build_graph: non-view tensors processed: 676/676
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2
ggml_metal_init: picking default device: Apple M2
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: loading '/var/folders/q5/yq5tdvb520z37pwzt2h5__2m0000gn/T/ollama1268607975/llama.cpp/gguf/build/metal/bin/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M2
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 5461.34 MiB
ggml_metal_init: maxTransferRate = built-in GPU
llama_new_context_with_model: compute buffer total size = 2139.07 MiB
llama_new_context_with_model: max tensor size = 102.54 MiB
ggml_metal_add_buffer: allocated 'data ' buffer, size = 3918.58 MiB, ( 3919.20 / 5461.34)
ggml_metal_add_buffer: allocated 'kv ' buffer, size = 4096.00 MiB, offs = 0
ggml_metal_add_buffer: allocated 'kv ' buffer, size = 0.03 MiB, offs = 4294950912, ( 8015.23 / 5461.34)ggml_metal_add_buffer: warning: current allocated size is greater than the recommended max working set size
ggml_metal_add_buffer: allocated 'alloc ' buffer, size = 2136.02 MiB, (10151.25 / 5461.34)ggml_metal_add_buffer: warning: current allocated size is greater than the recommended max working set size
ggml_metal_graph_compute: command buffer 0 failed with status 5
GGML_ASSERT: /Users/jmorgan/workspace/ollama/llm/llama.cpp/gguf/ggml-metal.m:1623: false
2023/12/13 13:01:19 llama.go:449: signal: abort trap
2023/12/13 13:01:19 llama.go:457: error starting llama runner: llama runner process has terminated
2023/12/13 13:01:19 llama.go:523: llama runner stopped successfully
[GIN] 2023/12/13 - 13:01:19 | 500 | 1.04335325s | 127.0.0.1 | POST "/api/generate"
```
I also installed it and ran the same commands on an M2 Pro and everything was fine. Is it because I'm using an M2 with 8 GB, or because I first installed it through `brew`?
I'd appreciate your help!
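For what it's worth, the log points at memory pressure rather than the install method: at the `n_ctx = 32768` shown above, the KV cache alone is 4096 MiB, and total Metal allocations (10151 MiB) exceed the 5461 MiB recommended working set on an 8 GB machine, which is consistent with the `command buffer 0 failed with status 5` abort. A minimal sketch of a workaround, assuming the `ollama` Python client, is to request a smaller context window:
```python
from ollama import chat

# Shrinking num_ctx reduces the KV cache (4096 MiB at 32768 tokens above),
# which may let mistral fit inside an 8 GB unified-memory budget.
response = chat(
    model='mistral:instruct',
    messages=[{'role': 'user', 'content': 'Hello!'}],
    options={'num_ctx': 2048},
)
print(response.message.content)
```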
|
{
"login": "milioe",
"id": 80537193,
"node_id": "MDQ6VXNlcjgwNTM3MTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/80537193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/milioe",
"html_url": "https://github.com/milioe",
"followers_url": "https://api.github.com/users/milioe/followers",
"following_url": "https://api.github.com/users/milioe/following{/other_user}",
"gists_url": "https://api.github.com/users/milioe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/milioe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/milioe/subscriptions",
"organizations_url": "https://api.github.com/users/milioe/orgs",
"repos_url": "https://api.github.com/users/milioe/repos",
"events_url": "https://api.github.com/users/milioe/events{/privacy}",
"received_events_url": "https://api.github.com/users/milioe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1508/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2385
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2385/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2385/comments
|
https://api.github.com/repos/ollama/ollama/issues/2385/events
|
https://github.com/ollama/ollama/issues/2385
| 2,122,637,940
|
I_kwDOJ0Z1Ps5-hOJ0
| 2,385
|
ollama breaks running qwen on ubuntu 20
|
{
"login": "cognitivetech",
"id": 55156785,
"node_id": "MDQ6VXNlcjU1MTU2Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/55156785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cognitivetech",
"html_url": "https://github.com/cognitivetech",
"followers_url": "https://api.github.com/users/cognitivetech/followers",
"following_url": "https://api.github.com/users/cognitivetech/following{/other_user}",
"gists_url": "https://api.github.com/users/cognitivetech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cognitivetech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cognitivetech/subscriptions",
"organizations_url": "https://api.github.com/users/cognitivetech/orgs",
"repos_url": "https://api.github.com/users/cognitivetech/repos",
"events_url": "https://api.github.com/users/cognitivetech/events{/privacy}",
"received_events_url": "https://api.github.com/users/cognitivetech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-07T09:59:57
| 2024-02-09T20:46:26
| 2024-02-09T20:46:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Whether I use the version included with `ollama pull qwen` or my own custom Modelfile with q8 quantization and the ChatML template, qwen causes ollama to get "stuck": it doesn't use the GPU for qwen, or for any other otherwise-working model after trying qwen, until a reboot.
see also: https://github.com/ollama/ollama/issues/1691
|
{
"login": "cognitivetech",
"id": 55156785,
"node_id": "MDQ6VXNlcjU1MTU2Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/55156785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cognitivetech",
"html_url": "https://github.com/cognitivetech",
"followers_url": "https://api.github.com/users/cognitivetech/followers",
"following_url": "https://api.github.com/users/cognitivetech/following{/other_user}",
"gists_url": "https://api.github.com/users/cognitivetech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cognitivetech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cognitivetech/subscriptions",
"organizations_url": "https://api.github.com/users/cognitivetech/orgs",
"repos_url": "https://api.github.com/users/cognitivetech/repos",
"events_url": "https://api.github.com/users/cognitivetech/events{/privacy}",
"received_events_url": "https://api.github.com/users/cognitivetech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2385/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2385/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/558
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/558/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/558/comments
|
https://api.github.com/repos/ollama/ollama/issues/558/events
|
https://github.com/ollama/ollama/pull/558
| 1,905,531,929
|
PR_kwDOJ0Z1Ps5a0Bxc
| 558
|
add dockerfile for building linux binaries
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-20T18:40:01
| 2023-09-22T19:20:13
| 2023-09-22T19:20:13
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/558",
"html_url": "https://github.com/ollama/ollama/pull/558",
"diff_url": "https://github.com/ollama/ollama/pull/558.diff",
"patch_url": "https://github.com/ollama/ollama/pull/558.patch",
"merged_at": "2023-09-22T19:20:13"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/558/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8040
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8040/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8040/comments
|
https://api.github.com/repos/ollama/ollama/issues/8040/events
|
https://github.com/ollama/ollama/issues/8040
| 2,731,895,942
|
I_kwDOJ0Z1Ps6i1WyG
| 8,040
|
Add API endpoint for Ollama server version and feature information
|
{
"login": "anxkhn",
"id": 83116240,
"node_id": "MDQ6VXNlcjgzMTE2MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/83116240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anxkhn",
"html_url": "https://github.com/anxkhn",
"followers_url": "https://api.github.com/users/anxkhn/followers",
"following_url": "https://api.github.com/users/anxkhn/following{/other_user}",
"gists_url": "https://api.github.com/users/anxkhn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anxkhn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anxkhn/subscriptions",
"organizations_url": "https://api.github.com/users/anxkhn/orgs",
"repos_url": "https://api.github.com/users/anxkhn/repos",
"events_url": "https://api.github.com/users/anxkhn/events{/privacy}",
"received_events_url": "https://api.github.com/users/anxkhn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-12-11T05:53:14
| 2024-12-29T19:33:45
| 2024-12-29T19:33:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**Description:**
Ollama is rapidly evolving, with new features and capabilities being added regularly. The recent introduction of structured outputs in version 0.5.0 is a prime example of this progress. As Ollama continues to grow, it's becoming increasingly important for clients to have a reliable way to determine the server's version and supported features.
Currently, there is no API endpoint to retrieve this information. This makes it difficult for clients to:
- Ensure compatibility with features like structured outputs, which are dependent on specific server versions.
- Optimize their requests based on the server's capabilities.
- Effectively troubleshoot issues that may arise due to version mismatches.
**Proposed solution:**
Introduce a new API endpoint, such as `/api/version`, that returns a JSON object containing:
- `version`: The Ollama server version (e.g., "v0.5.0").
- `features`: An array of supported features (e.g., ["structured_outputs_json", "structured_outputs_xyz"]).
- (Optional) `system_info`: Potentially some basic system information like OS or available hardware (if deemed useful).
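For illustration, a request against the proposed endpoint could look like this (a hypothetical sketch; the response shape and feature names below are assumptions based on the fields above, not current behavior):
```
# Hypothetical request/response for the proposed endpoint; the "features"
# array does not exist today and its values here are illustrative only.
curl http://localhost:11434/api/version
# {"version": "v0.5.0", "features": ["structured_outputs_json"]}
```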
**Benefits:**
- Improved client-server interaction.
- Easier debugging and troubleshooting.
- Better support for the growing ecosystem of Ollama clients and tools.
I'm happy to work on this issue. If you have any specific points you'd like to emphasize, feel free to share your thoughts.
@jmorganca Just let me know how you'd like to proceed!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8040/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4195
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4195/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4195/comments
|
https://api.github.com/repos/ollama/ollama/issues/4195/events
|
https://github.com/ollama/ollama/issues/4195
| 2,280,143,854
|
I_kwDOJ0Z1Ps6H6Dvu
| 4,195
|
How to download and run Ollama and Llama 3 in Docker? Can you give me the Dockerfile code for that?
|
{
"login": "sushantsk1",
"id": 83342285,
"node_id": "MDQ6VXNlcjgzMzQyMjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/83342285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushantsk1",
"html_url": "https://github.com/sushantsk1",
"followers_url": "https://api.github.com/users/sushantsk1/followers",
"following_url": "https://api.github.com/users/sushantsk1/following{/other_user}",
"gists_url": "https://api.github.com/users/sushantsk1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushantsk1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushantsk1/subscriptions",
"organizations_url": "https://api.github.com/users/sushantsk1/orgs",
"repos_url": "https://api.github.com/users/sushantsk1/repos",
"events_url": "https://api.github.com/users/sushantsk1/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushantsk1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-06T06:31:36
| 2024-05-06T23:42:30
| 2024-05-06T23:42:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I want to download and run Llama 3 using Ollama in Docker. Please help me and provide the Dockerfile code.
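For context, the approach documented for the official image uses two commands rather than a custom Dockerfile (a sketch; GPU flags omitted and only needed if the host is set up for them):
```
# Start the Ollama server in a container (CPU-only shown).
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# Pull and run Llama 3 inside that container.
docker exec -it ollama ollama run llama3
```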
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4195/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4195/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/345
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/345/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/345/comments
|
https://api.github.com/repos/ollama/ollama/issues/345/events
|
https://github.com/ollama/ollama/pull/345
| 1,850,306,315
|
PR_kwDOJ0Z1Ps5X6QTT
| 345
|
set non-zero error code on error
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-14T18:17:38
| 2023-08-16T16:20:29
| 2023-08-16T16:20:28
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/345",
"html_url": "https://github.com/ollama/ollama/pull/345",
"diff_url": "https://github.com/ollama/ollama/pull/345.diff",
"patch_url": "https://github.com/ollama/ollama/pull/345.patch",
"merged_at": "2023-08-16T16:20:28"
}
|
ollama should exit non-zero when operations fail
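A quick way to sanity-check the behavior (illustrative; the model name is made up):
```
# After this change, a failing operation should exit with a non-zero code.
ollama pull definitely-not-a-model; echo "exit code: $?"
```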
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/345/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4924
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4924/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4924/comments
|
https://api.github.com/repos/ollama/ollama/issues/4924/events
|
https://github.com/ollama/ollama/issues/4924
| 2,341,385,774
|
I_kwDOJ0Z1Ps6LjrYu
| 4,924
|
Dictionary learning and concept extraction for model tuning
|
{
"login": "IgorAlexey",
"id": 18470725,
"node_id": "MDQ6VXNlcjE4NDcwNzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/18470725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IgorAlexey",
"html_url": "https://github.com/IgorAlexey",
"followers_url": "https://api.github.com/users/IgorAlexey/followers",
"following_url": "https://api.github.com/users/IgorAlexey/following{/other_user}",
"gists_url": "https://api.github.com/users/IgorAlexey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IgorAlexey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IgorAlexey/subscriptions",
"organizations_url": "https://api.github.com/users/IgorAlexey/orgs",
"repos_url": "https://api.github.com/users/IgorAlexey/repos",
"events_url": "https://api.github.com/users/IgorAlexey/events{/privacy}",
"received_events_url": "https://api.github.com/users/IgorAlexey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-08T02:27:17
| 2024-06-08T02:27:17
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Described in Anthropic's [Mapping the Mind of a Large Language Model](https://www.anthropic.com/news/mapping-mind-language-model) and OpenAI's [Extracting Concepts from GPT-4](https://openai.com/index/extracting-concepts-from-gpt-4/).
Once we can identify the neurons associated with certain concepts in the publicly available model weights, we would benefit from the capability to change their values for a higher degree of control.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4924/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1550
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1550/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1550/comments
|
https://api.github.com/repos/ollama/ollama/issues/1550/events
|
https://github.com/ollama/ollama/issues/1550
| 2,044,206,570
|
I_kwDOJ0Z1Ps552B3q
| 1,550
|
Error: failed to start a llama runner
|
{
"login": "webmastermario",
"id": 121729061,
"node_id": "U_kgDOB0FwJQ",
"avatar_url": "https://avatars.githubusercontent.com/u/121729061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/webmastermario",
"html_url": "https://github.com/webmastermario",
"followers_url": "https://api.github.com/users/webmastermario/followers",
"following_url": "https://api.github.com/users/webmastermario/following{/other_user}",
"gists_url": "https://api.github.com/users/webmastermario/gists{/gist_id}",
"starred_url": "https://api.github.com/users/webmastermario/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/webmastermario/subscriptions",
"organizations_url": "https://api.github.com/users/webmastermario/orgs",
"repos_url": "https://api.github.com/users/webmastermario/repos",
"events_url": "https://api.github.com/users/webmastermario/events{/privacy}",
"received_events_url": "https://api.github.com/users/webmastermario/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2023-12-15T18:46:26
| 2024-02-01T23:17:34
| 2024-02-01T23:17:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I tried to install Ollama on my CentOS dedicated server. The installation seemed to work, but this is what happens when I run:
[root@213-227-129-200 ~]# ollama run llava
Error: failed to start a llama runner
What can I do?
-- Logs begin at Fri 2023-08-04 06:00:01 UTC, end at Fri 2023-12-15 18:45:42 UTC. --
Dec 15 15:33:10 213-227-129-200.cprapid.com systemd[1]: Started Ollama Service.
Dec 15 15:33:10 213-227-129-200.cprapid.com ollama[25174]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating ne
Dec 15 15:33:10 213-227-129-200.cprapid.com ollama[25174]: Your new public key is:
Dec 15 15:33:10 213-227-129-200.cprapid.com ollama[25174]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGFFKiLz3PKkUtFK+Bj3gAJK7csulDh
Dec 15 15:33:10 213-227-129-200.cprapid.com ollama[25174]: 2023/12/15 15:33:10 images.go:737: total blobs: 0
Dec 15 15:33:10 213-227-129-200.cprapid.com ollama[25174]: 2023/12/15 15:33:10 images.go:744: total unused blobs removed: 0
Dec 15 15:33:10 213-227-129-200.cprapid.com ollama[25174]: 2023/12/15 15:33:10 routes.go:871: Listening on 127.0.0.1:11434 (ve
Dec 15 15:33:10 213-227-129-200.cprapid.com ollama[25174]: 2023/12/15 15:33:10 routes.go:891: warning: gpu support may not be
Dec 15 15:34:10 213-227-129-200.cprapid.com ollama[25174]: [GIN] 2023/12/15 - 15:34:10 | 200 | 178.535µs | 127.0.0.1
Dec 15 15:34:10 213-227-129-200.cprapid.com ollama[25174]: [GIN] 2023/12/15 - 15:34:10 | 200 | 181.125375ms | 127.0.0.1
Dec 15 15:34:47 213-227-129-200.cprapid.com ollama[25174]: [GIN] 2023/12/15 - 15:34:47 | 200 | 44.934µs | 127.0.0.1
Dec 15 15:34:49 213-227-129-200.cprapid.com ollama[25174]: 2023/12/15 15:34:49 download.go:123: downloading 200765e12836 in 39
Dec 15 15:35:27 213-227-129-200.cprapid.com ollama[25174]: 2023/12/15 15:35:27 download.go:123: downloading 64c2234f0395 in 7
Dec 15 15:35:36 213-227-129-200.cprapid.com ollama[25174]: 2023/12/15 15:35:36 download.go:123: downloading d5ca8c59f62d in 1
Dec 15 15:35:39 213-227-129-200.cprapid.com ollama[25174]: 2023/12/15 15:35:39 download.go:123: downloading 6c58ad369ad0 in 1
Dec 15 15:35:42 213-227-129-200.cprapid.com ollama[25174]: 2023/12/15 15:35:42 download.go:123: downloading 805db971dc64 in 1
Dec 15 15:35:49 213-227-129-200.cprapid.com ollama[25174]: [GIN] 2023/12/15 - 15:35:49 | 200 | 1m1s | 127.0.0.1
Dec 15 15:37:09 213-227-129-200.cprapid.com ollama[25174]: [GIN] 2023/12/15 - 15:37:09 | 200 | 175.861µs | 127.0.0.1
Dec 15 15:37:09 213-227-129-200.cprapid.com ollama[25174]: [GIN] 2023/12/15 - 15:37:09 | 200 | 1.276939ms | 127.0.0.1
Dec 15 15:37:09 213-227-129-200.cprapid.com ollama[25174]: [GIN] 2023/12/15 - 15:37:09 | 200 | 831.562µs | 127.0.0.1
Dec 15 15:37:09 213-227-129-200.cprapid.com ollama[25174]: 2023/12/15 15:37:09 llama.go:403: skipping accelerated runner becau
Dec 15 15:37:09 213-227-129-200.cprapid.com ollama[25174]: 2023/12/15 15:37:09 llama.go:436: starting llama runner
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1550/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3729
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3729/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3729/comments
|
https://api.github.com/repos/ollama/ollama/issues/3729/events
|
https://github.com/ollama/ollama/issues/3729
| 2,250,055,473
|
I_kwDOJ0Z1Ps6GHR8x
| 3,729
|
Failed at CUDA 12.2 with GTX 1080 Ti
|
{
"login": "MissingTwins",
"id": 146804746,
"node_id": "U_kgDOCMAQCg",
"avatar_url": "https://avatars.githubusercontent.com/u/146804746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MissingTwins",
"html_url": "https://github.com/MissingTwins",
"followers_url": "https://api.github.com/users/MissingTwins/followers",
"following_url": "https://api.github.com/users/MissingTwins/following{/other_user}",
"gists_url": "https://api.github.com/users/MissingTwins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MissingTwins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MissingTwins/subscriptions",
"organizations_url": "https://api.github.com/users/MissingTwins/orgs",
"repos_url": "https://api.github.com/users/MissingTwins/repos",
"events_url": "https://api.github.com/users/MissingTwins/events{/privacy}",
"received_events_url": "https://api.github.com/users/MissingTwins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-18T08:20:14
| 2024-04-18T18:24:09
| 2024-04-18T18:24:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This is a freshly installed Ollama, but it failed at first launch (CUDA 12.2).
```
ben@amd:~/work/ollama$ curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
####################################################################################################################### 100.0%####################################################################################################################### 100.0%
>>> Installing ollama to /usr/local/bin...
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA GPU installed.
ben@amd:~/work/ollama$ ollama mistral
Error: unknown command "mistral" for "ollama"
ben@amd:~/work/ollama$ ollama mistral^C
ben@amd:~/work/ollama$ ollama run mistral
pulling manifest
pulling e8a35b5937a5... 100% ▕██████████████████████████████████████████████████████████████▏ 4.1 GB
pulling 43070e2d4e53... 100% ▕██████████████████████████████████████████████████████████████▏ 11 KB
pulling e6836092461f... 100% ▕██████████████████████████████████████████████████████████████▏ 42 B
pulling ed11eda7790d... 100% ▕██████████████████████████████████████████████████████████████▏ 30 B
pulling f9b1e3196ecf... 100% ▕██████████████████████████████████████████████████████████████▏ 483 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: llama runner process no longer running: 1
ben@amd:~/work/ollama$ ollama run mistral
Error: llama runner process no longer running: 1
ben@amd:~/work/ollama$ nvidia-smi
Thu Apr 18 15:59:20 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07 Driver Version: 535.161.07 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce GTX 1080 Ti On | 00000000:0A:00.0 Off | N/A |
| 0% 33C P8 19W / 275W | 4MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
ben@amd:~/work/ollama$ ollama -v
ollama version is 0.1.32
```
I have linked `libcublas.so.11 -> libcublas.so.12`, but it still failed. CUDA works well for other CUDA 11.x projects.
```
ben@amd:~/work/ollama$ ls -al /usr/local/cuda/lib64/libcudart*
lrwxrwxrwx 1 root root 15 Aug 16 2023 /usr/local/cuda/lib64/libcudart.so -> libcudart.so.12
lrwxrwxrwx 1 root root 21 Aug 16 2023 /usr/local/cuda/lib64/libcudart.so.12 -> libcudart.so.12.2.140
-rw-r--r-- 1 root root 683360 Aug 16 2023 /usr/local/cuda/lib64/libcudart.so.12.2.140
-rw-r--r-- 1 root root 1379326 Aug 16 2023 /usr/local/cuda/lib64/libcudart_static.a
ben@amd:~/work/ollama$ ls -al /usr/local/cuda/lib64/libcublas*
lrwxrwxrwx 1 root root 17 Aug 16 2023 /usr/local/cuda/lib64/libcublasLt.so -> libcublasLt.so.12
lrwxrwxrwx 1 root root 23 Aug 16 2023 /usr/local/cuda/lib64/libcublasLt.so.12 -> libcublasLt.so.12.2.5.6
-rw-r--r-- 1 root root 525843792 Aug 16 2023 /usr/local/cuda/lib64/libcublasLt.so.12.2.5.6
-rw-r--r-- 1 root root 770686098 Aug 16 2023 /usr/local/cuda/lib64/libcublasLt_static.a
lrwxrwxrwx 1 root root 15 Aug 16 2023 /usr/local/cuda/lib64/libcublas.so -> libcublas.so.12
lrwxrwxrwx 1 root root 15 Feb 15 19:55 /usr/local/cuda/lib64/libcublas.so.11 -> libcublas.so.12
lrwxrwxrwx 1 root root 21 Aug 16 2023 /usr/local/cuda/lib64/libcublas.so.12 -> libcublas.so.12.2.5.6
-rw-r--r-- 1 root root 106675248 Aug 16 2023 /usr/local/cuda/lib64/libcublas.so.12.2.5.6
-rw-r--r-- 1 root root 168600104 Aug 16 2023 /usr/local/cuda/lib64/libcublas_static.a
ben@amd:~/work/ollama$ ls -ld /usr/local/cuda*
lrwxrwxrwx 1 root root 22 Jan 20 23:13 /usr/local/cuda -> /etc/alternatives/cuda
lrwxrwxrwx 1 root root 25 Jan 20 23:13 /usr/local/cuda-12 -> /etc/alternatives/cuda-12
drwxr-xr-x 15 root root 4096 Jan 20 23:12 /usr/local/cuda-12.2
```
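If I read the error correctly, the symlink cannot work: the cuda_v11 runner requires the `libcublas.so.11` version tag (a versioned ELF symbol set), and a CUDA 12 library does not define that tag. One way to confirm (illustrative command; path assumed from the listing above):
```
# Show the version tags this library defines; a CUDA 12 build defines
# libcublas.so.12 tags only, so pointing libcublas.so.11 at it cannot satisfy
# the "version `libcublas.so.11' not found" check from the dynamic linker.
readelf -V /usr/local/cuda/lib64/libcublas.so.12.2.5.6 | grep -i 'libcublas'
```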
------------
<details>
<summary>Here are the logs</summary>
```
[ 83.120309] amd systemd[1]: Started Ollama Service.
[ 83.133078] amd ollama[2303]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
[ 83.134194] amd ollama[2303]: Your new public key is:
[ 83.134194] amd ollama[2303]: ssh-ed25519 Censored
[ 83.134456] amd ollama[2303]: time=2024-04-18T15:55:19.875+09:00 level=INFO source=images.go:817 msg="total blobs: 0"
[ 83.134513] amd ollama[2303]: time=2024-04-18T15:55:19.875+09:00 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
[ 83.134621] amd ollama[2303]: time=2024-04-18T15:55:19.875+09:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.1.32)"
[ 83.134956] amd ollama[2303]: time=2024-04-18T15:55:19.875+09:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama3603894721/runners
[ 86.003279] amd ollama[2303]: time=2024-04-18T15:55:22.744+09:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
[ 86.003279] amd ollama[2303]: time=2024-04-18T15:55:22.744+09:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
[ 86.003618] amd ollama[2303]: time=2024-04-18T15:55:22.744+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
[ 86.010035] amd ollama[2303]: time=2024-04-18T15:55:22.750+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3603894721/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.2.140]"
[ 86.070476] amd ollama[2303]: time=2024-04-18T15:55:22.811+09:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
[ 86.070514] amd ollama[2303]: time=2024-04-18T15:55:22.811+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[ 86.225488] amd ollama[2303]: time=2024-04-18T15:55:22.966+09:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 6.1"
[ 137.899701] amd ollama[2303]: [GIN] 2024/04/18 - 15:56:14 | 200 | 37.46µs | 127.0.0.1 | HEAD "/"
[ 137.900488] amd ollama[2303]: [GIN] 2024/04/18 - 15:56:14 | 404 | 116.077µs | 127.0.0.1 | POST "/api/show"
[ 140.789836] amd ollama[2303]: time=2024-04-18T15:56:17.530+09:00 level=INFO source=download.go:136 msg="downloading e8a35b5937a5 in 42 100 MB part(s)"
[ 185.253719] amd ollama[2303]: time=2024-04-18T15:57:01.994+09:00 level=INFO source=download.go:136 msg="downloading 43070e2d4e53 in 1 11 KB part(s)"
[ 187.174634] amd ollama[2303]: time=2024-04-18T15:57:03.915+09:00 level=INFO source=download.go:136 msg="downloading e6836092461f in 1 42 B part(s)"
[ 190.115108] amd ollama[2303]: time=2024-04-18T15:57:06.855+09:00 level=INFO source=download.go:136 msg="downloading ed11eda7790d in 1 30 B part(s)"
[ 192.046078] amd ollama[2303]: time=2024-04-18T15:57:08.786+09:00 level=INFO source=download.go:136 msg="downloading f9b1e3196ecf in 1 483 B part(s)"
[ 195.485759] amd ollama[2303]: [GIN] 2024/04/18 - 15:57:12 | 200 | 57.585406568s | 127.0.0.1 | POST "/api/pull"
[ 195.486886] amd ollama[2303]: [GIN] 2024/04/18 - 15:57:12 | 200 | 672.555µs | 127.0.0.1 | POST "/api/show"
[ 195.487559] amd ollama[2303]: [GIN] 2024/04/18 - 15:57:12 | 200 | 198.871µs | 127.0.0.1 | POST "/api/show"
[ 195.993394] amd ollama[2303]: time=2024-04-18T15:57:12.734+09:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
[ 195.993394] amd ollama[2303]: time=2024-04-18T15:57:12.734+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
[ 195.997472] amd ollama[2303]: time=2024-04-18T15:57:12.738+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3603894721/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.2.140]"
[ 195.998324] amd ollama[2303]: time=2024-04-18T15:57:12.739+09:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
[ 195.998324] amd ollama[2303]: time=2024-04-18T15:57:12.739+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[ 196.134773] amd ollama[2303]: time=2024-04-18T15:57:12.875+09:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 6.1"
[ 196.191449] amd ollama[2303]: time=2024-04-18T15:57:12.932+09:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
[ 196.191449] amd ollama[2303]: time=2024-04-18T15:57:12.932+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
[ 196.193140] amd ollama[2303]: time=2024-04-18T15:57:12.934+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3603894721/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.2.140]"
[ 196.193572] amd ollama[2303]: time=2024-04-18T15:57:12.934+09:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
[ 196.193572] amd ollama[2303]: time=2024-04-18T15:57:12.934+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[ 196.268651] amd ollama[2303]: time=2024-04-18T15:57:13.009+09:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 6.1"
[ 196.312505] amd ollama[2303]: time=2024-04-18T15:57:13.053+09:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33 required="4724.5 MiB" used="4724.5 MiB" available="11009.9 MiB" kv="256.0 MiB" fulloffload="164.0 MiB" partialoffload="181.0 MiB"
[ 196.312587] amd ollama[2303]: time=2024-04-18T15:57:13.053+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[ 196.312708] amd ollama[2303]: time=2024-04-18T15:57:13.053+09:00 level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama3603894721/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --port 42067"
[ 196.312960] amd ollama[2303]: time=2024-04-18T15:57:13.053+09:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
[ 196.319131] amd ollama[2303]: /tmp/ollama3603894721/runners/cuda_v11/ollama_llama_server: /usr/local/cuda/lib64/libcublas.so.11: version `libcublas.so.11' not found (required by /tmp/ollama3603894721/runners/cuda_v11/ollama_llama_server)
[ 196.363803] amd ollama[2303]: time=2024-04-18T15:57:13.104+09:00 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 1 "
[ 196.363845] amd ollama[2303]: [GIN] 2024/04/18 - 15:57:13 | 500 | 875.760611ms | 127.0.0.1 | POST "/api/chat"
[ 245.946877] amd ollama[2303]: [GIN] 2024/04/18 - 15:58:02 | 200 | 19.607µs | 127.0.0.1 | HEAD "/"
[ 245.947494] amd ollama[2303]: [GIN] 2024/04/18 - 15:58:02 | 200 | 375.091µs | 127.0.0.1 | POST "/api/show"
[ 245.948175] amd ollama[2303]: [GIN] 2024/04/18 - 15:58:02 | 200 | 289.03µs | 127.0.0.1 | POST "/api/show"
[ 246.447634] amd ollama[2303]: time=2024-04-18T15:58:03.188+09:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
[ 246.447634] amd ollama[2303]: time=2024-04-18T15:58:03.188+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
[ 246.450677] amd ollama[2303]: time=2024-04-18T15:58:03.191+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3603894721/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.2.140]"
[ 246.451539] amd ollama[2303]: time=2024-04-18T15:58:03.192+09:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
[ 246.451539] amd ollama[2303]: time=2024-04-18T15:58:03.192+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[ 246.595211] amd ollama[2303]: time=2024-04-18T15:58:03.336+09:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 6.1"
[ 246.650697] amd ollama[2303]: time=2024-04-18T15:58:03.391+09:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
[ 246.650697] amd ollama[2303]: time=2024-04-18T15:58:03.391+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
[ 246.652325] amd ollama[2303]: time=2024-04-18T15:58:03.393+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3603894721/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.2.140]"
[ 246.652764] amd ollama[2303]: time=2024-04-18T15:58:03.393+09:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
[ 246.652764] amd ollama[2303]: time=2024-04-18T15:58:03.393+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[ 246.731444] amd ollama[2303]: time=2024-04-18T15:58:03.472+09:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 6.1"
[ 246.781592] amd ollama[2303]: time=2024-04-18T15:58:03.522+09:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33 required="4724.5 MiB" used="4724.5 MiB" available="11009.9 MiB" kv="256.0 MiB" fulloffload="164.0 MiB" partialoffload="181.0 MiB"
[ 246.781684] amd ollama[2303]: time=2024-04-18T15:58:03.522+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[ 246.781800] amd ollama[2303]: time=2024-04-18T15:58:03.522+09:00 level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama3603894721/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --port 36135"
[ 246.782201] amd ollama[2303]: time=2024-04-18T15:58:03.523+09:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
[ 246.783896] amd ollama[2303]: /tmp/ollama3603894721/runners/cuda_v11/ollama_llama_server: /usr/local/cuda/lib64/libcublas.so.11: version `libcublas.so.11' not found (required by /tmp/ollama3603894721/runners/cuda_v11/ollama_llama_server)
[ 246.833186] amd ollama[2303]: time=2024-04-18T15:58:03.574+09:00 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 1 "
[ 246.833231] amd ollama[2303]: [GIN] 2024/04/18 - 15:58:03 | 500 | 884.761421ms | 127.0.0.1 | POST "/api/chat"
[ 437.978588] amd ollama[2303]: [GIN] 2024/04/18 - 16:01:14 | 200 | 20.769µs | 127.0.0.1 | HEAD "/"
[ 437.979267] amd ollama[2303]: [GIN] 2024/04/18 - 16:01:14 | 200 | 284.743µs | 127.0.0.1 | GET "/api/tags"
[ 456.418438] amd ollama[2303]: [GIN] 2024/04/18 - 16:01:33 | 200 | 28.283µs | 127.0.0.1 | HEAD "/"
[ 458.934304] amd ollama[2303]: time=2024-04-18T16:01:35.675+09:00 level=INFO source=download.go:136 msg="downloading 170370233dd5 in 42 100 MB part(s)"
[ 476.934522] amd ollama[2303]: time=2024-04-18T16:01:53.675+09:00 level=INFO source=download.go:251 msg="170370233dd5 part 8 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
[ 508.847472] amd ollama[2303]: time=2024-04-18T16:02:25.588+09:00 level=INFO source=download.go:136 msg="downloading 72d6f08a42f6 in 7 100 MB part(s)"
[ 518.832954] amd ollama[2303]: time=2024-04-18T16:02:35.573+09:00 level=INFO source=download.go:136 msg="downloading c43332387573 in 1 67 B part(s)"
[ 520.743339] amd ollama[2303]: time=2024-04-18T16:02:37.484+09:00 level=INFO source=download.go:136 msg="downloading 7c658f9561e5 in 1 564 B part(s)"
[ 524.572442] amd ollama[2303]: [GIN] 2024/04/18 - 16:02:41 | 200 | 1m8s | 127.0.0.1 | POST "/api/pull"
[ 551.538296] amd ollama[2303]: [GIN] 2024/04/18 - 16:03:08 | 200 | 26.851µs | 127.0.0.1 | HEAD "/"
[ 551.539134] amd ollama[2303]: [GIN] 2024/04/18 - 16:03:08 | 404 | 65.683µs | 127.0.0.1 | POST "/api/show"
[ 555.200514] amd ollama[2303]: [GIN] 2024/04/18 - 16:03:11 | 200 | 3.661717372s | 127.0.0.1 | POST "/api/pull"
[ 555.201466] amd ollama[2303]: [GIN] 2024/04/18 - 16:03:11 | 200 | 569.521µs | 127.0.0.1 | POST "/api/show"
[ 555.202271] amd ollama[2303]: [GIN] 2024/04/18 - 16:03:11 | 200 | 207.34µs | 127.0.0.1 | POST "/api/show"
[ 555.448202] amd ollama[2303]: time=2024-04-18T16:03:12.189+09:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
[ 555.448202] amd ollama[2303]: time=2024-04-18T16:03:12.189+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
[ 555.451303] amd ollama[2303]: time=2024-04-18T16:03:12.192+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3603894721/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.2.140]"
[ 555.452133] amd ollama[2303]: time=2024-04-18T16:03:12.193+09:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
[ 555.452133] amd ollama[2303]: time=2024-04-18T16:03:12.193+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[ 555.576880] amd ollama[2303]: time=2024-04-18T16:03:12.317+09:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 6.1"
[ 555.632608] amd ollama[2303]: time=2024-04-18T16:03:12.373+09:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
[ 555.632608] amd ollama[2303]: time=2024-04-18T16:03:12.373+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
[ 555.634189] amd ollama[2303]: time=2024-04-18T16:03:12.375+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3603894721/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.2.140]"
[ 555.634617] amd ollama[2303]: time=2024-04-18T16:03:12.375+09:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
[ 555.634617] amd ollama[2303]: time=2024-04-18T16:03:12.375+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[ 555.727459] amd ollama[2303]: time=2024-04-18T16:03:12.468+09:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 6.1"
[ 555.775085] amd ollama[2303]: time=2024-04-18T16:03:12.515+09:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33 required="5320.0 MiB" used="5320.0 MiB" available="11009.9 MiB" kv="256.0 MiB" fulloffload="164.0 MiB" partialoffload="181.0 MiB"
[ 555.775171] amd ollama[2303]: time=2024-04-18T16:03:12.516+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[ 555.775247] amd ollama[2303]: time=2024-04-18T16:03:12.516+09:00 level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama3603894721/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --mmproj /usr/share/ollama/.ollama/models/blobs/sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539 --port 35909"
[ 555.775609] amd ollama[2303]: time=2024-04-18T16:03:12.516+09:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
[ 555.777326] amd ollama[2303]: /tmp/ollama3603894721/runners/cuda_v11/ollama_llama_server: /usr/local/cuda/lib64/libcublas.so.11: version `libcublas.so.11' not found (required by /tmp/ollama3603894721/runners/cuda_v11/ollama_llama_server)
[ 555.826400] amd ollama[2303]: time=2024-04-18T16:03:12.567+09:00 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 1 "
[ 555.826454] amd ollama[2303]: [GIN] 2024/04/18 - 16:03:12 | 500 | 623.91401ms | 127.0.0.1 | POST "/api/chat"
[ 2734.691422] amd ollama[2303]: [GIN] 2024/04/18 - 16:39:31 | 200 | 42.189µs | 127.0.0.1 | GET "/api/version"
```
</details>
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "MissingTwins",
"id": 146804746,
"node_id": "U_kgDOCMAQCg",
"avatar_url": "https://avatars.githubusercontent.com/u/146804746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MissingTwins",
"html_url": "https://github.com/MissingTwins",
"followers_url": "https://api.github.com/users/MissingTwins/followers",
"following_url": "https://api.github.com/users/MissingTwins/following{/other_user}",
"gists_url": "https://api.github.com/users/MissingTwins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MissingTwins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MissingTwins/subscriptions",
"organizations_url": "https://api.github.com/users/MissingTwins/orgs",
"repos_url": "https://api.github.com/users/MissingTwins/repos",
"events_url": "https://api.github.com/users/MissingTwins/events{/privacy}",
"received_events_url": "https://api.github.com/users/MissingTwins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3729/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6324
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6324/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6324/comments
|
https://api.github.com/repos/ollama/ollama/issues/6324/events
|
https://github.com/ollama/ollama/pull/6324
| 2,461,538,236
|
PR_kwDOJ0Z1Ps54Iu7h
| 6,324
|
cmd: speed up gguf creates
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-12T17:25:42
| 2024-08-12T18:46:11
| 2024-08-12T18:46:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6324",
"html_url": "https://github.com/ollama/ollama/pull/6324",
"diff_url": "https://github.com/ollama/ollama/pull/6324.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6324.patch",
"merged_at": "2024-08-12T18:46:09"
}
| null |
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6324/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/683
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/683/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/683/comments
|
https://api.github.com/repos/ollama/ollama/issues/683/events
|
https://github.com/ollama/ollama/issues/683
| 1,922,909,871
|
I_kwDOJ0Z1Ps5ynUav
| 683
|
Uninstall
|
{
"login": "fakerybakery",
"id": 76186054,
"node_id": "MDQ6VXNlcjc2MTg2MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/76186054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fakerybakery",
"html_url": "https://github.com/fakerybakery",
"followers_url": "https://api.github.com/users/fakerybakery/followers",
"following_url": "https://api.github.com/users/fakerybakery/following{/other_user}",
"gists_url": "https://api.github.com/users/fakerybakery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fakerybakery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fakerybakery/subscriptions",
"organizations_url": "https://api.github.com/users/fakerybakery/orgs",
"repos_url": "https://api.github.com/users/fakerybakery/repos",
"events_url": "https://api.github.com/users/fakerybakery/events{/privacy}",
"received_events_url": "https://api.github.com/users/fakerybakery/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-02T22:47:10
| 2023-10-02T22:56:12
| 2023-10-02T22:56:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How can I uninstall this program?
|
{
"login": "fakerybakery",
"id": 76186054,
"node_id": "MDQ6VXNlcjc2MTg2MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/76186054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fakerybakery",
"html_url": "https://github.com/fakerybakery",
"followers_url": "https://api.github.com/users/fakerybakery/followers",
"following_url": "https://api.github.com/users/fakerybakery/following{/other_user}",
"gists_url": "https://api.github.com/users/fakerybakery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fakerybakery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fakerybakery/subscriptions",
"organizations_url": "https://api.github.com/users/fakerybakery/orgs",
"repos_url": "https://api.github.com/users/fakerybakery/repos",
"events_url": "https://api.github.com/users/fakerybakery/events{/privacy}",
"received_events_url": "https://api.github.com/users/fakerybakery/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/683/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3083
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3083/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3083/comments
|
https://api.github.com/repos/ollama/ollama/issues/3083/events
|
https://github.com/ollama/ollama/pull/3083
| 2,182,546,528
|
PR_kwDOJ0Z1Ps5pbZni
| 3,083
|
refactor readseeker
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-12T19:44:29
| 2024-03-16T19:08:57
| 2024-03-16T19:08:56
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3083",
"html_url": "https://github.com/ollama/ollama/pull/3083",
"diff_url": "https://github.com/ollama/ollama/pull/3083.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3083.patch",
"merged_at": "2024-03-16T19:08:56"
}
|
no functional change
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3083/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3073
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3073/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3073/comments
|
https://api.github.com/repos/ollama/ollama/issues/3073/events
|
https://github.com/ollama/ollama/pull/3073
| 2,181,001,953
|
PR_kwDOJ0Z1Ps5pWCJ9
| 3,073
|
chore: fix typo
|
{
"login": "racerole",
"id": 148756161,
"node_id": "U_kgDOCN3WwQ",
"avatar_url": "https://avatars.githubusercontent.com/u/148756161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/racerole",
"html_url": "https://github.com/racerole",
"followers_url": "https://api.github.com/users/racerole/followers",
"following_url": "https://api.github.com/users/racerole/following{/other_user}",
"gists_url": "https://api.github.com/users/racerole/gists{/gist_id}",
"starred_url": "https://api.github.com/users/racerole/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/racerole/subscriptions",
"organizations_url": "https://api.github.com/users/racerole/orgs",
"repos_url": "https://api.github.com/users/racerole/repos",
"events_url": "https://api.github.com/users/racerole/events{/privacy}",
"received_events_url": "https://api.github.com/users/racerole/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-12T08:22:16
| 2024-03-12T18:09:23
| 2024-03-12T18:09:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3073",
"html_url": "https://github.com/ollama/ollama/pull/3073",
"diff_url": "https://github.com/ollama/ollama/pull/3073.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3073.patch",
"merged_at": "2024-03-12T18:09:23"
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3073/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2955
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2955/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2955/comments
|
https://api.github.com/repos/ollama/ollama/issues/2955/events
|
https://github.com/ollama/ollama/issues/2955
| 2,171,833,240
|
I_kwDOJ0Z1Ps6Bc4uY
| 2,955
|
Is there guidance to run Ollama as a background "Daemon" on MacOS pre-login?
|
{
"login": "dukekautington3rd",
"id": 33333503,
"node_id": "MDQ6VXNlcjMzMzMzNTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/33333503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dukekautington3rd",
"html_url": "https://github.com/dukekautington3rd",
"followers_url": "https://api.github.com/users/dukekautington3rd/followers",
"following_url": "https://api.github.com/users/dukekautington3rd/following{/other_user}",
"gists_url": "https://api.github.com/users/dukekautington3rd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dukekautington3rd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dukekautington3rd/subscriptions",
"organizations_url": "https://api.github.com/users/dukekautington3rd/orgs",
"repos_url": "https://api.github.com/users/dukekautington3rd/repos",
"events_url": "https://api.github.com/users/dukekautington3rd/events{/privacy}",
"received_events_url": "https://api.github.com/users/dukekautington3rd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 10
| 2024-03-06T15:49:17
| 2024-12-04T05:02:40
| 2024-03-06T23:08:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I would really like Ollama to run as a service on my Mac or at least set the appropriate listening variable before it starts.
Today I have to `launchctl setenv OLLAMA_HOST 0.0.0.0:8080` and restart Ollama any time there is a reboot.
And I must be logged in in order for Ollama to serve the LLM.
I've tried setting the variable automatically in many ways:
- /etc/launchd.conf
- /etc/rc.common
- ~/.zprofile # only works after a terminal is opened
- plist file in /Library/LaunchAgents # only works at login
- plist file in /Library/LaunchDaemons # can't get this to work
- the above plist files calling launchctl, an Automator app, or a script file
My first milestone was to get the env `OLLAMA_HOST 0.0.0.0:8080` set before Ollama launches automatically; then get Ollama to launch and run continuously in the background pre-login.
I've been spinning my wheels on this for a while now. It just seems like something that should be easy.
Before someone suggests it, I know I could easily run this on Linux. I have an extensive k8s deployment that runs most of my things, but I want to leverage the macOS NPU (maybe I have a false sense of its value here) for performance.
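For reference, here is a minimal sketch of the LaunchDaemons route; the label, the binary path, and the `launchctl bootstrap` step are assumptions about a typical install, not a verified recipe:
```
# Hypothetical sketch: a system-wide LaunchDaemon so ollama serves pre-login.
# Assumes the CLI is at /usr/local/bin/ollama; adjust for your install.
sudo tee /Library/LaunchDaemons/com.example.ollama.plist >/dev/null <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.ollama</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/ollama</string>
    <string>serve</string>
  </array>
  <key>EnvironmentVariables</key>
  <dict>
    <key>OLLAMA_HOST</key><string>0.0.0.0:8080</string>
  </dict>
  <key>RunAtLoad</key><true/>
  <key>KeepAlive</key><true/>
</dict>
</plist>
EOF
sudo chown root:wheel /Library/LaunchDaemons/com.example.ollama.plist
sudo chmod 644 /Library/LaunchDaemons/com.example.ollama.plist
# Newer macOS; older releases use `sudo launchctl load -w <plist>` instead.
sudo launchctl bootstrap system /Library/LaunchDaemons/com.example.ollama.plist
```
Note that a system daemon runs as root unless a `UserName` key is added, so pulled models would land in root's home directory rather than yours.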
|
{
"login": "dukekautington3rd",
"id": 33333503,
"node_id": "MDQ6VXNlcjMzMzMzNTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/33333503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dukekautington3rd",
"html_url": "https://github.com/dukekautington3rd",
"followers_url": "https://api.github.com/users/dukekautington3rd/followers",
"following_url": "https://api.github.com/users/dukekautington3rd/following{/other_user}",
"gists_url": "https://api.github.com/users/dukekautington3rd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dukekautington3rd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dukekautington3rd/subscriptions",
"organizations_url": "https://api.github.com/users/dukekautington3rd/orgs",
"repos_url": "https://api.github.com/users/dukekautington3rd/repos",
"events_url": "https://api.github.com/users/dukekautington3rd/events{/privacy}",
"received_events_url": "https://api.github.com/users/dukekautington3rd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2955/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8294
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8294/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8294/comments
|
https://api.github.com/repos/ollama/ollama/issues/8294/events
|
https://github.com/ollama/ollama/issues/8294
| 2,767,643,050
|
I_kwDOJ0Z1Ps6k9uGq
| 8,294
|
Ollama should avoid calling hallucinated tools
|
{
"login": "ehsavoie",
"id": 73053,
"node_id": "MDQ6VXNlcjczMDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/73053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehsavoie",
"html_url": "https://github.com/ehsavoie",
"followers_url": "https://api.github.com/users/ehsavoie/followers",
"following_url": "https://api.github.com/users/ehsavoie/following{/other_user}",
"gists_url": "https://api.github.com/users/ehsavoie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ehsavoie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehsavoie/subscriptions",
"organizations_url": "https://api.github.com/users/ehsavoie/orgs",
"repos_url": "https://api.github.com/users/ehsavoie/repos",
"events_url": "https://api.github.com/users/ehsavoie/events{/privacy}",
"received_events_url": "https://api.github.com/users/ehsavoie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 9
| 2025-01-03T14:13:36
| 2025-01-08T17:51:32
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Sometimes the model hallucinates and calls a tool that doesn't exist on the client. In my opinion, since Ollama has the list of callable tools, it should check that a requested tool is in that list before passing the tool call back to the client.
This is also described here:
https://github.com/langchain4j/langchain4j/issues/1052
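Until this is handled server-side, a client can filter the returned tool calls against the tools it declared before dispatching anything. A minimal shell sketch against the `/api/chat` endpoint; the model name, tool schema, and jq filter are illustrative, not taken from this issue:
```
# Rough client-side guard: keep only tool calls whose name was actually declared.
# Assumes a local server on the default port and jq installed.
DECLARED='["get_current_weather"]'
curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "stream": false,
  "messages": [{"role": "user", "content": "What is the weather in Toronto?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }]
}' | jq --argjson declared "$DECLARED" \
  '.message.tool_calls // [] | map(select(.function.name as $n | $declared | index($n)))'
```
Anything the filter drops is a hallucinated call and can be reported back to the model as a tool error instead of being executed.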
### OS
Linux, Docker
### GPU
Other
### CPU
Intel
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8294/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8294/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3490
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3490/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3490/comments
|
https://api.github.com/repos/ollama/ollama/issues/3490/events
|
https://github.com/ollama/ollama/pull/3490
| 2,225,662,964
|
PR_kwDOJ0Z1Ps5rt1q7
| 3,490
|
CI missing archive
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-04T14:23:46
| 2024-04-04T14:24:27
| 2024-04-04T14:24:24
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3490",
"html_url": "https://github.com/ollama/ollama/pull/3490",
"diff_url": "https://github.com/ollama/ollama/pull/3490.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3490.patch",
"merged_at": "2024-04-04T14:24:24"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3490/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5924
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5924/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5924/comments
|
https://api.github.com/repos/ollama/ollama/issues/5924/events
|
https://github.com/ollama/ollama/pull/5924
| 2,428,343,864
|
PR_kwDOJ0Z1Ps52YdiK
| 5,924
|
llm(llama): pass rope factors
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-24T19:42:31
| 2024-07-24T20:06:00
| 2024-07-24T20:05:59
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5924",
"html_url": "https://github.com/ollama/ollama/pull/5924",
"diff_url": "https://github.com/ollama/ollama/pull/5924.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5924.patch",
"merged_at": "2024-07-24T20:05:59"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5924/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7635
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7635/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7635/comments
|
https://api.github.com/repos/ollama/ollama/issues/7635/events
|
https://github.com/ollama/ollama/pull/7635
| 2,653,082,010
|
PR_kwDOJ0Z1Ps6BrJO5
| 7,635
|
CI: give windows lint more time
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-12T19:12:05
| 2024-11-12T19:22:42
| 2024-11-12T19:22:39
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7635",
"html_url": "https://github.com/ollama/ollama/pull/7635",
"diff_url": "https://github.com/ollama/ollama/pull/7635.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7635.patch",
"merged_at": "2024-11-12T19:22:39"
}
|
It looks like 8 minutes isn't quite enough and we're seeing sporadic timeouts
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7635/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3419
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3419/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3419/comments
|
https://api.github.com/repos/ollama/ollama/issues/3419/events
|
https://github.com/ollama/ollama/issues/3419
| 2,216,639,921
|
I_kwDOJ0Z1Ps6EHz2x
| 3,419
|
Ollama local discovery
|
{
"login": "rakyll",
"id": 108380,
"node_id": "MDQ6VXNlcjEwODM4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/108380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rakyll",
"html_url": "https://github.com/rakyll",
"followers_url": "https://api.github.com/users/rakyll/followers",
"following_url": "https://api.github.com/users/rakyll/following{/other_user}",
"gists_url": "https://api.github.com/users/rakyll/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rakyll/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rakyll/subscriptions",
"organizations_url": "https://api.github.com/users/rakyll/orgs",
"repos_url": "https://api.github.com/users/rakyll/repos",
"events_url": "https://api.github.com/users/rakyll/events{/privacy}",
"received_events_url": "https://api.github.com/users/rakyll/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-30T19:38:48
| 2024-05-15T00:43:41
| 2024-05-15T00:43:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
It's a common wish among LLM tool builders to rely on a local model rather than a hosted one to save costs. Currently, there is no official way to discover whether a local Ollama server is running.
### How should we solve this?
Provide a mechanism for programmatically discovering the Ollama server endpoint. It could be a subcommand of the ollama CLI:
```
$ ollama info
{ "status": "running", "api_endpoint": "...." }
```
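Until a subcommand like this exists, a crude probe of the default endpoint covers the common case; the sketch below assumes the default `OLLAMA_HOST` and will miss servers bound elsewhere:
```
# Crude discovery probe: the server answers on its root path ("Ollama is running").
# Only checks the default host/port; a custom OLLAMA_HOST defeats this.
if curl -fsS --max-time 1 http://127.0.0.1:11434/ >/dev/null 2>&1; then
  echo '{ "status": "running", "api_endpoint": "http://127.0.0.1:11434" }'
else
  echo '{ "status": "not running" }'
fi
```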
### What is the impact of not solving this?
This would make it possible to fall back to a local model when Ollama is available and running. It would also widen adoption of Ollama, since cost and latency are both critical concerns for LLM applications.
### Anything else?
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3419/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3230
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3230/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3230/comments
|
https://api.github.com/repos/ollama/ollama/issues/3230/events
|
https://github.com/ollama/ollama/issues/3230
| 2,193,623,988
|
I_kwDOJ0Z1Ps6CwAu0
| 3,230
|
GPU does not run with Ollama
|
{
"login": "DerLehrer",
"id": 90964131,
"node_id": "MDQ6VXNlcjkwOTY0MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/90964131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DerLehrer",
"html_url": "https://github.com/DerLehrer",
"followers_url": "https://api.github.com/users/DerLehrer/followers",
"following_url": "https://api.github.com/users/DerLehrer/following{/other_user}",
"gists_url": "https://api.github.com/users/DerLehrer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DerLehrer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DerLehrer/subscriptions",
"organizations_url": "https://api.github.com/users/DerLehrer/orgs",
"repos_url": "https://api.github.com/users/DerLehrer/repos",
"events_url": "https://api.github.com/users/DerLehrer/events{/privacy}",
"received_events_url": "https://api.github.com/users/DerLehrer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-03-18T23:03:34
| 2024-04-15T22:47:17
| 2024-04-15T22:47:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi everyone,
I am running a Windows 10 computer with a GTX 950, an Intel(R) Core(TM) i5-3475S, and 32 GB of RAM.
I downloaded the new Windows version of Ollama along with the llama2-uncensored and tinyllama models.
Good: Everything works.
Bad: Ollama only makes use of the CPU and ignores the GPU.
As far as I can tell, Ollama should support my graphics card and the CPU supports AVX.
From the server log:
```
time=2024-03-18T23:06:15.263+01:00 level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-18T23:06:15.263+01:00 level=INFO source=gpu.go:191 msg="Searching for GPU management library nvml.dll"
time=2024-03-18T23:06:15.297+01:00 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [c:\\Windows\\System32\\nvml.dll C:\\Windows\\system32\\nvml.dll]"
time=2024-03-18T23:06:15.377+01:00 level=INFO source=gpu.go:82 msg="Nvidia GPU detected"
time=2024-03-18T23:06:15.377+01:00 level=INFO source=cpu_common.go:15 msg="CPU has AVX"
time=2024-03-18T23:06:15.395+01:00 level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 5.2"
time=2024-03-18T23:06:15.396+01:00 level=INFO source=cpu_common.go:15 msg="CPU has AVX"
time=2024-03-18T23:06:15.396+01:00 level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 5.2"
time=2024-03-18T23:06:15.396+01:00 level=INFO source=cpu_common.go:15 msg="CPU has AVX"
time=2024-03-18T23:06:15.396+01:00 level=INFO source=assets.go:63 msg="Updating PATH to ...
time=2024-03-18T23:06:15.472+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce GTX 950, compute capability 5.2, VMM: yes
```
I would be glad if someone could tell me what to do to activate GPU usage.
Thanks
Chris
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3230/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4895
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4895/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4895/comments
|
https://api.github.com/repos/ollama/ollama/issues/4895/events
|
https://github.com/ollama/ollama/issues/4895
| 2,339,544,174
|
I_kwDOJ0Z1Ps6Lcpxu
| 4,895
|
Add "use_mmap" to environment variable
|
{
"login": "sisi399",
"id": 50093165,
"node_id": "MDQ6VXNlcjUwMDkzMTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50093165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sisi399",
"html_url": "https://github.com/sisi399",
"followers_url": "https://api.github.com/users/sisi399/followers",
"following_url": "https://api.github.com/users/sisi399/following{/other_user}",
"gists_url": "https://api.github.com/users/sisi399/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sisi399/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sisi399/subscriptions",
"organizations_url": "https://api.github.com/users/sisi399/orgs",
"repos_url": "https://api.github.com/users/sisi399/repos",
"events_url": "https://api.github.com/users/sisi399/events{/privacy}",
"received_events_url": "https://api.github.com/users/sisi399/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-06-07T04:05:44
| 2024-10-26T06:30:54
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I recently discovered the potential benefits of the --no-mmap option, particularly for systems such as PCs or laptops with only 8 GB of system RAM and a GPU with 6 GB or more of VRAM that can hold an entire model.
Loading models with mmap can render the use of 8B models nearly impossible, as it can cause RAM usage to spike to 99% and remain there, often leading to complete freezing of the PC and requiring a hard reset.
Disabling mmap allows users to load 8B models while still having half of the RAM available for other tasks. The only drawback is a slightly longer initial model load time (around 5-10 seconds in my case), which I believe is a worthwhile trade-off. Quick benchmarks even suggest that --no-mmap might be slightly faster for generating tokens.
Now, here's the rationale for adding an environment variable.
Many frontends/UIs use Ollama, but a significant portion of them lack a toggle for disabling mmap. By introducing an environment variable, I could set this globally and ensure that any frontend loads models with mmap disabled.
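In the meantime, the closest workaround is per-request: the API's `options` map accepts `use_mmap`, so frontends that expose options can already disable it. A quick sketch (model name illustrative; assumes the runner honors the flag):
```
# Per-request workaround until a global environment variable exists:
# disable mmap through the options map of a generate call.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": { "use_mmap": false }
}'
```
The catch is exactly the one described above: this only helps when the frontend lets you set options, which is why a global switch would still be valuable.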
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4895/reactions",
"total_count": 12,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 12,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4895/timeline
| null | null | false
|