Dataset schema (Hugging Face dataset-viewer column summary; one row per GitHub issue):

- url: string (length 51–54)
- repository_url: string (1 class)
- labels_url: string (length 65–68)
- comments_url: string (length 60–63)
- events_url: string (length 58–61)
- html_url: string (length 39–44)
- id: int64 (1.78B–2.82B)
- node_id: string (length 18–19)
- number: int64 (1–8.69k)
- title: string (length 1–382)
- user: dict
- labels: list (length 0–5)
- state: string (2 classes)
- locked: bool (1 class)
- assignee: dict
- assignees: list (length 0–2)
- milestone: null
- comments: int64 (0–323)
- created_at: timestamp[s]
- updated_at: timestamp[s]
- closed_at: timestamp[s]
- author_association: string (4 classes)
- sub_issues_summary: dict
- active_lock_reason: null
- draft: bool (2 classes)
- pull_request: dict
- body: string (length 2–118k, nullable)
- closed_by: dict
- reactions: dict
- timeline_url: string (length 60–63)
- performed_via_github_app: null
- state_reason: string (4 classes)
- is_pull_request: bool (2 classes)
https://api.github.com/repos/ollama/ollama/issues/718
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/718/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/718/comments
|
https://api.github.com/repos/ollama/ollama/issues/718/events
|
https://github.com/ollama/ollama/pull/718
| 1,930,409,238
|
PR_kwDOJ0Z1Ps5cH0SF
| 718
|
not found error before pulling model
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-06T15:17:05
| 2023-10-06T20:06:21
| 2023-10-06T20:06:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/718",
"html_url": "https://github.com/ollama/ollama/pull/718",
"diff_url": "https://github.com/ollama/ollama/pull/718.diff",
"patch_url": "https://github.com/ollama/ollama/pull/718.patch",
"merged_at": "2023-10-06T20:06:20"
}
|
When attempting to run a model through the API before pulling it, a cryptic "no such file or directory" error was returned along with the file path.
This change improves the error to suggest pulling the model first, as the CLI already does automatically.
```
curl -X 'POST' -d '{"prompt":"hello", "model": "mistral"}' 'http://127.0.0.1:11434/api/generate'
{"error":"model 'mistral' not found, try pulling it first"}
```
resolves #715
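A minimal sketch of the mapping this PR describes (hypothetical Python, not Ollama's actual Go server code): translate the store's low-level "no such file" error into the actionable message shown in the curl output above.

```python
import errno


def model_lookup_error(model: str, exc: OSError) -> str:
    """Hypothetical sketch of the error mapping this PR describes:
    turn a raw 'no such file or directory' from the model store into
    a message that tells the caller what to do next."""
    if exc.errno == errno.ENOENT:
        return f"model '{model}' not found, try pulling it first"
    # Anything other than a missing file is unexpected; re-raise it.
    raise exc
```

The handler can then return this string as the `error` field of the JSON response instead of the raw filesystem path.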
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/718/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3744
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3744/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3744/comments
|
https://api.github.com/repos/ollama/ollama/issues/3744/events
|
https://github.com/ollama/ollama/issues/3744
| 2,251,941,390
|
I_kwDOJ0Z1Ps6GOeYO
| 3,744
|
Download the models with alternative tools
|
{
"login": "pepo-ec",
"id": 1961172,
"node_id": "MDQ6VXNlcjE5NjExNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1961172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepo-ec",
"html_url": "https://github.com/pepo-ec",
"followers_url": "https://api.github.com/users/pepo-ec/followers",
"following_url": "https://api.github.com/users/pepo-ec/following{/other_user}",
"gists_url": "https://api.github.com/users/pepo-ec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pepo-ec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pepo-ec/subscriptions",
"organizations_url": "https://api.github.com/users/pepo-ec/orgs",
"repos_url": "https://api.github.com/users/pepo-ec/repos",
"events_url": "https://api.github.com/users/pepo-ec/events{/privacy}",
"received_events_url": "https://api.github.com/users/pepo-ec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 5
| 2024-04-19T02:20:16
| 2024-11-30T15:12:12
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How can I download models with other tools like wget/curl and then import them into a local Ollama server?
When I download a model **it takes up all the available bandwidth**, and I want to be able to control the bandwidth so that the download takes longer but does not leave my LAN without connectivity.
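Rate limiting is already built into the suggested tools (e.g. `wget --limit-rate=1m` or `curl --limit-rate 1M`), and the pacing idea behind those flags is simple. A minimal sketch in Python, illustrative only and not Ollama code:

```python
import time
from typing import Iterable, Iterator


def throttled(chunks: Iterable[bytes], limit_bps: float) -> Iterator[bytes]:
    """Pace an iterator of byte chunks so the average throughput stays
    at or below limit_bps bytes per second -- the same idea behind the
    --limit-rate flags of wget and curl."""
    start = time.monotonic()
    sent = 0
    for chunk in chunks:
        sent += len(chunk)
        # Earliest time at which `sent` bytes are allowed to have passed.
        target = start + sent / limit_bps
        now = time.monotonic()
        if target > now:
            time.sleep(target - now)
        yield chunk
```

Wrapping a download's chunk iterator this way trades download time for leftover LAN bandwidth, which is exactly the trade-off the issue asks for.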
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3744/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3744/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8085
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8085/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8085/comments
|
https://api.github.com/repos/ollama/ollama/issues/8085/events
|
https://github.com/ollama/ollama/issues/8085
| 2,737,963,986
|
I_kwDOJ0Z1Ps6jMgPS
| 8,085
|
ollama: /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.25 not found - Kylin Linux libstdc++ version incompatible with official builds
|
{
"login": "ouber23",
"id": 7042434,
"node_id": "MDQ6VXNlcjcwNDI0MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7042434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ouber23",
"html_url": "https://github.com/ouber23",
"followers_url": "https://api.github.com/users/ouber23/followers",
"following_url": "https://api.github.com/users/ouber23/following{/other_user}",
"gists_url": "https://api.github.com/users/ouber23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ouber23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ouber23/subscriptions",
"organizations_url": "https://api.github.com/users/ouber23/orgs",
"repos_url": "https://api.github.com/users/ouber23/repos",
"events_url": "https://api.github.com/users/ouber23/events{/privacy}",
"received_events_url": "https://api.github.com/users/ouber23/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-12-13T09:43:47
| 2025-01-06T19:19:08
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When running ollama, the following error occurred:
ollama: /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.25 not found
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
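This error means the system's libstdc++ is older than the one the official binaries were built against (`GLIBCXX_3.4.25` corresponds to GCC 8). One way to see which version tags a given libstdc++ exports, equivalent to `strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX`, is to scan the binary for the tags. A small illustrative helper, not part of Ollama:

```python
import re


def glibcxx_tags(lib_bytes: bytes) -> list:
    """Extract the GLIBCXX_* version tags embedded in a libstdc++
    binary, e.g. from open('/usr/lib64/libstdc++.so.6', 'rb').read().
    If GLIBCXX_3.4.25 is absent, the library predates GCC 8."""
    tags = set(re.findall(rb"GLIBCXX_3(?:\.\d+)+", lib_bytes))
    return sorted(tag.decode() for tag in tags)
```

If the required tag is missing, the options are typically a newer libstdc++ from the distro or building Ollama against the system toolchain.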
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8085/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2574
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2574/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2574/comments
|
https://api.github.com/repos/ollama/ollama/issues/2574/events
|
https://github.com/ollama/ollama/issues/2574
| 2,140,986,066
|
I_kwDOJ0Z1Ps5_nNrS
| 2,574
|
OLLAMA_MODELS Directory
|
{
"login": "shersoni610",
"id": 57876250,
"node_id": "MDQ6VXNlcjU3ODc2MjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/57876250?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shersoni610",
"html_url": "https://github.com/shersoni610",
"followers_url": "https://api.github.com/users/shersoni610/followers",
"following_url": "https://api.github.com/users/shersoni610/following{/other_user}",
"gists_url": "https://api.github.com/users/shersoni610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shersoni610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shersoni610/subscriptions",
"organizations_url": "https://api.github.com/users/shersoni610/orgs",
"repos_url": "https://api.github.com/users/shersoni610/repos",
"events_url": "https://api.github.com/users/shersoni610/events{/privacy}",
"received_events_url": "https://api.github.com/users/shersoni610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 10
| 2024-02-18T13:17:30
| 2025-01-26T19:08:53
| 2024-03-14T00:19:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I am running Ollama on a Linux machine (zsh shell). I set the environment variable OLLAMA_MODELS to point to an external hard drive:
export OLLAMA_MODELS=/home/akbar/Disk2/Models/Ollama/models
However, the models are still stored in the /usr/share/ollama/.ollama folder. I wish to store all the models on an external drive to save the
limited space on the SSD.
Can someone help?
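On Linux installs, a likely cause is that Ollama runs as a systemd service under its own user, so an `export` in an interactive zsh session never reaches the server process; the variable has to be set in the service's own environment. A sketch of a drop-in override, assuming the service is named `ollama.service` (the model path below is the asker's):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# (created with: sudo systemctl edit ollama.service)
[Service]
Environment="OLLAMA_MODELS=/home/akbar/Disk2/Models/Ollama/models"
```

After saving, run `sudo systemctl daemon-reload` and `sudo systemctl restart ollama`, and make sure the directory is readable and writable by the user the service runs as.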
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2574/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2574/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5571
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5571/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5571/comments
|
https://api.github.com/repos/ollama/ollama/issues/5571/events
|
https://github.com/ollama/ollama/issues/5571
| 2,398,158,082
|
I_kwDOJ0Z1Ps6O8P0C
| 5,571
|
`CUDA error: unspecified launch failure` on inference on Nvidia V100 GPUs
|
{
"login": "louisbrulenaudet",
"id": 35007448,
"node_id": "MDQ6VXNlcjM1MDA3NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/35007448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louisbrulenaudet",
"html_url": "https://github.com/louisbrulenaudet",
"followers_url": "https://api.github.com/users/louisbrulenaudet/followers",
"following_url": "https://api.github.com/users/louisbrulenaudet/following{/other_user}",
"gists_url": "https://api.github.com/users/louisbrulenaudet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/louisbrulenaudet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louisbrulenaudet/subscriptions",
"organizations_url": "https://api.github.com/users/louisbrulenaudet/orgs",
"repos_url": "https://api.github.com/users/louisbrulenaudet/repos",
"events_url": "https://api.github.com/users/louisbrulenaudet/events{/privacy}",
"received_events_url": "https://api.github.com/users/louisbrulenaudet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-07-09T13:00:25
| 2024-07-10T20:17:14
| 2024-07-10T20:17:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi everyone,
Users of older versions of Ollama have no problems, but with the new version, an error appears during inference. This seems to be linked to an error during the process of copying data between host and device ([cudaMemcpyAsync](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY.html#group__CUDART__MEMORY_1g85073372f776b4c4d5f89f7124b7bf79)).
I don't know whether the fix belongs on Ollama's side or comes directly from llama.cpp, but here is the error message:
```
2024-07-09 14:50:08,792 - logger - INFO - {'command': 'serve'}
2024/07/09 14:50:08 routes.go:1033: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/app/cfvr/lbrulenaudet/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-09T14:50:08.836+02:00 level=INFO source=images.go:751 msg="total blobs: 4"
time=2024-07-09T14:50:08.838+02:00 level=INFO source=images.go:758 msg="total unused blobs removed: 0"
time=2024-07-09T14:50:08.839+02:00 level=INFO source=routes.go:1080 msg="Listening on 127.0.0.1:11434 (version 0.2.1)"
time=2024-07-09T14:50:08.841+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3554105619/runners
time=2024-07-09T14:50:12.694+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60101]"
time=2024-07-09T14:50:12.694+02:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-09T14:50:12.704+02:00 level=INFO source=gpu.go:534 msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03 error="symbol lookup for cuCtxCreate_v3 failed: /usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03: undefined symbol: cuCtxCreate_v3"
time=2024-07-09T14:50:12.706+02:00 level=INFO source=gpu.go:534 msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.340.108 error="symbol lookup for cuDeviceGetUuid failed: /usr/lib/x86_64-linux-gnu/libcuda.so.340.108: undefined symbol: cuDeviceGetUuid"
time=2024-07-09T14:50:13.021+02:00 level=INFO source=types.go:103 msg="inference compute" id=GPU-600ee5b9-f172-c5e8-0e92-334d49fd4276 library=cuda compute=7.0 driver=0.0 name="" total="31.7 GiB" available="31.4 GiB"
time=2024-07-09T14:52:24.970+02:00 level=INFO source=sched.go:738 msg="new model will fit in available VRAM in single GPU, loading" model=/app/cfvr/lbrulenaudet/.ollama/models/blobs/sha256-3de21719a8ffb4f6acc4b636d4ca38d882e0d0aa9a5d417106f985e0e0a4a735 gpu=GPU-600ee5b9-f172-c5e8-0e92-334d49fd4276 parallel=4 available=33765720064 required="13.9 GiB"
time=2024-07-09T14:52:24.971+02:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=28 layers.offload=28 layers.split="" memory.available="[31.4 GiB]" memory.required.full="13.9 GiB" memory.required.partial="13.9 GiB" memory.required.kv="2.1 GiB" memory.required.allocations="[13.9 GiB]" memory.weights.total="12.8 GiB" memory.weights.repeating="12.7 GiB" memory.weights.nonrepeating="164.1 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="391.4 MiB"
time=2024-07-09T14:52:24.972+02:00 level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama3554105619/runners/cuda_v11/ollama_llama_server --model /app/cfvr/lbrulenaudet/.ollama/models/blobs/sha256-3de21719a8ffb4f6acc4b636d4ca38d882e0d0aa9a5d417106f985e0e0a4a735 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 28 --parallel 4 --port 37081"
time=2024-07-09T14:52:24.973+02:00 level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-09T14:52:24.973+02:00 level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-09T14:52:24.974+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
llama_model_loader: loaded meta data with 42 key-value pairs and 377 tensors from /app/cfvr/lbrulenaudet/.ollama/models/blobs/sha256-3de21719a8ffb4f6acc4b636d4ca38d882e0d0aa9a5d417106f985e0e0a4a735 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = deepseek2
llama_model_loader: - kv 1: general.name str = DeepSeek-Coder-V2-Lite-Instruct
llama_model_loader: - kv 2: deepseek2.block_count u32 = 27
llama_model_loader: - kv 3: deepseek2.context_length u32 = 163840
llama_model_loader: - kv 4: deepseek2.embedding_length u32 = 2048
llama_model_loader: - kv 5: deepseek2.feed_forward_length u32 = 10944
llama_model_loader: - kv 6: deepseek2.attention.head_count u32 = 16
llama_model_loader: - kv 7: deepseek2.attention.head_count_kv u32 = 16
llama_model_loader: - kv 8: deepseek2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 9: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: deepseek2.expert_used_count u32 = 6
llama_model_loader: - kv 11: general.file_type u32 = 17
llama_model_loader: - kv 12: deepseek2.leading_dense_block_count u32 = 1
llama_model_loader: - kv 13: deepseek2.vocab_size u32 = 102400
llama_model_loader: - kv 14: deepseek2.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 15: deepseek2.attention.key_length u32 = 192
llama_model_loader: - kv 16: deepseek2.attention.value_length u32 = 128
llama_model_loader: - kv 17: deepseek2.expert_feed_forward_length u32 = 1408
llama_model_loader: - kv 18: deepseek2.expert_count u32 = 64
llama_model_loader: - kv 19: deepseek2.expert_shared_count u32 = 2
llama_model_loader: - kv 20: deepseek2.expert_weights_scale f32 = 1.000000
llama_model_loader: - kv 21: deepseek2.rope.dimension_count u32 = 64
llama_model_loader: - kv 22: deepseek2.rope.scaling.type str = yarn
llama_model_loader: - kv 23: deepseek2.rope.scaling.factor f32 = 40.000000
llama_model_loader: - kv 24: deepseek2.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 25: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.070700
llama_model_loader: - kv 26: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 27: tokenizer.ggml.pre str = deepseek-llm
llama_model_loader: - kv 28: tokenizer.ggml.tokens arr[str,102400] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 29: tokenizer.ggml.token_type arr[i32,102400] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 30: tokenizer.ggml.merges arr[str,99757] = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv 31: tokenizer.ggml.bos_token_id u32 = 100000
llama_model_loader: - kv 32: tokenizer.ggml.eos_token_id u32 = 100001
llama_model_loader: - kv 33: tokenizer.ggml.padding_token_id u32 = 100001
llama_model_loader: - kv 34: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 35: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 36: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 37: general.quantization_version u32 = 2
llama_model_loader: - kv 38: quantize.imatrix.file str = /models/DeepSeek-Coder-V2-Lite-Instru...
llama_model_loader: - kv 39: quantize.imatrix.dataset str = /training_data/calibration_datav3.txt
llama_model_loader: - kv 40: quantize.imatrix.entries_count i32 = 293
llama_model_loader: - kv 41: quantize.imatrix.chunks_count i32 = 139
llama_model_loader: - type f32: 108 tensors
llama_model_loader: - type q5_1: 14 tensors
llama_model_loader: - type q8_0: 13 tensors
llama_model_loader: - type q5_K: 229 tensors
llama_model_loader: - type q6_K: 13 tensors
time=2024-07-09T14:52:25.227+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 2400
llm_load_vocab: token to piece cache size = 0.6661 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = deepseek2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 102400
llm_load_print_meta: n_merges = 99757
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 163840
llm_load_print_meta: n_embd = 2048
llm_load_print_meta: n_layer = 27
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 192
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 3072
llm_load_print_meta: n_embd_v_gqa = 2048
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 10944
llm_load_print_meta: n_expert = 64
llm_load_print_meta: n_expert_used = 6
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = yarn
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 16B
llm_load_print_meta: model ftype = Q5_K - Medium
llm_load_print_meta: model params = 15.71 B
llm_load_print_meta: model size = 11.03 GiB (6.03 BPW)
llm_load_print_meta: general.name = DeepSeek-Coder-V2-Lite-Instruct
llm_load_print_meta: BOS token = 100000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 126 'Ä'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead = 1
llm_load_print_meta: n_lora_q = 0
llm_load_print_meta: n_lora_kv = 512
llm_load_print_meta: n_ff_exp = 1408
llm_load_print_meta: n_expert_shared = 2
llm_load_print_meta: expert_weights_scale = 1.0
llm_load_print_meta: rope_yarn_log_mul = 0.0707
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: Tesla V100-PCIE-32GB, compute capability 7.0, VMM: yes
llm_load_tensors: ggml ctx size = 0.32 MiB
time=2024-07-09T14:52:26.684+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 27 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 28/28 layers to GPU
llm_load_tensors: CPU buffer size = 137.50 MiB
llm_load_tensors: CUDA0 buffer size = 11160.99 MiB
time=2024-07-09T14:52:28.291+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
time=2024-07-09T14:52:32.266+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server not responding"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 0.025
llama_kv_cache_init: CUDA0 KV buffer size = 2160.00 MiB
llama_new_context_with_model: KV self size = 2160.00 MiB, K (f16): 1296.00 MiB, V (f16): 864.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 1.59 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 296.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 20.01 MiB
llama_new_context_with_model: graph nodes = 1924
llama_new_context_with_model: graph splits = 2
time=2024-07-09T14:52:32.970+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
time=2024-07-09T14:52:34.481+02:00 level=INFO source=server.go:609 msg="llama runner started in 9.51 seconds"
CUDA error: unspecified launch failure
current device: 0, in function ggml_cuda_mul_mat_id at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2010
cudaMemcpyAsync(ids_host.data(), ids_dev, ggml_nbytes(ids), cudaMemcpyDeviceToHost, stream)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:100: !"CUDA error"
```
This is the output of nvidia-smi:
`NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2`
Thank you in advance for your reply, and I look forward to hearing from you.
Yours sincerely
Louis Brulé Naudet
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
2.0.1
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5571/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3215
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3215/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3215/comments
|
https://api.github.com/repos/ollama/ollama/issues/3215/events
|
https://github.com/ollama/ollama/issues/3215
| 2,191,457,641
|
I_kwDOJ0Z1Ps6Cnv1p
| 3,215
|
Access Denied Using LocalTunnel or Ngrok
|
{
"login": "Sonali-Behera-TRT",
"id": 131662185,
"node_id": "U_kgDOB9kBaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/131662185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sonali-Behera-TRT",
"html_url": "https://github.com/Sonali-Behera-TRT",
"followers_url": "https://api.github.com/users/Sonali-Behera-TRT/followers",
"following_url": "https://api.github.com/users/Sonali-Behera-TRT/following{/other_user}",
"gists_url": "https://api.github.com/users/Sonali-Behera-TRT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sonali-Behera-TRT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sonali-Behera-TRT/subscriptions",
"organizations_url": "https://api.github.com/users/Sonali-Behera-TRT/orgs",
"repos_url": "https://api.github.com/users/Sonali-Behera-TRT/repos",
"events_url": "https://api.github.com/users/Sonali-Behera-TRT/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sonali-Behera-TRT/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 11
| 2024-03-18T07:46:21
| 2024-12-12T02:52:17
| 2024-03-18T09:18:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am unable to access my Ollama server locally using LocalTunnel or Ngrok. When attempting to access the server through the provided URL, I receive a `403 Forbidden` error message.
I am using Ollama on Colab/Kaggle to utilize free GPU access. Ollama operates within a containerized environment on Colab/Kaggle, making it impossible to access its endpoint directly from the notebook. Thus, I must tunnel the local Ollama server address to the internet for external access. The steps outlined below were functioning flawlessly until three days ago when they started throwing an error: `Access to silent-lamps-hang.loca.lt was denied. You don't have authorization to view this page. HTTP ERROR 403`. I am unable to troubleshoot this issue. Subsequently, I attempted the same steps locally without reinstalling Ollama, as I had previously downloaded it. I executed only steps `3` and `4` from the "Steps to reproduce" section, which worked without any issues. However, upon executing all the steps as listed below, the aforementioned error resurfaced.
I suspect that the error may be due to a new version replacing the old one on my system.
I tried different tunneling methods, including LocalTunnel and Ngrok. Both gave the same error, although they work fine for tunneling anything other than Ollama.
Any assistance in resolving this matter would be greatly appreciated!
### What did you expect to see?
I expect to access the Ollama server interface at the given URL after establishing the tunnel. It should display `Ollama is running` similar to the default endpoint `http://localhost:11434`.
### Steps to reproduce
In Kaggle/Colab Notebook:
1. Start a fresh notebook.
2. Copy and paste the following commands into separate cells:
- `!curl https://ollama.ai/install.sh | sh`
- `!ollama`
- `!curl https://ipv4.icanhazip.com/` (This retrieves your IP address)
- `!ollama serve & npx localtunnel -p 11434` (Starts Ollama server and creates a tunnel)
3. Run all the commands in the cells.
4. Access Instructions:
- The output from step 4 will provide a URL. Open this URL in a new browser tab.
- In the new browser tab, locate the "Tunnel Password" field (specific wording might vary depending on the tunneling tool).
- Copy the IP address obtained in step 3 and paste it into the "Tunnel Password" field.
Locally (Using Terminal):
1. Open your terminal application.
2. Execute the following commands one by one:
- `curl https://ollama.ai/install.sh | sh`
- `ollama`
- `curl https://ipv4.icanhazip.com/` (This retrieves your IP address)
- `ollama serve & npx localtunnel -p 11434` (Starts Ollama server and creates a tunnel)
3. Access Instructions:
- Similar to the Colab steps, the output from step 4 will provide a URL. Open this URL in a new browser tab.
- Locate the "Tunnel Password" field in the new browser tab and paste the IP address from step 3 into it.
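Once the tunnel is up, the endpoint can be probed from any machine. Below is a minimal Python sketch (my own helper, not part of Ollama) that returns the response body on success, or the HTTP status code when access is denied:

```python
import urllib.error
import urllib.request


def check_endpoint(url: str, timeout: float = 5.0) -> str:
    """Fetch url and return the body text, an HTTP status, or an error note."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as e:
        return f"HTTP {e.code}"  # e.g. "HTTP 403" for the access-denied case
    except OSError as e:
        return f"unreachable: {e}"
```

A healthy tunnel should return `Ollama is running`, the same text served by the default endpoint `http://localhost:11434`; the failing case reported here would return `HTTP 403`.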
### Are there any recent changes that introduced the issue?
This access issue started after I downloaded and installed the latest version of Ollama (v0.1.29). Previously, with an earlier version, I was able to access the server successfully using LocalTunnel or Ngrok.
### OS
Linux
### Architecture
x86
### Platform
_No response_
### Ollama version
0.1.29
### GPU
Nvidia
### GPU info
1. In Kaggle, I am using GPU T4X2. Details are below
```
Mon Mar 18 07:39:04 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 56C P8 10W / 70W | 0MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 Tesla T4 Off | 00000000:00:05.0 Off | 0 |
| N/A 57C P8 10W / 70W | 0MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
2. In Colab, I am using the T4 GPU. Below are the details
```
Mon Mar 18 07:42:13 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 54C P8 9W / 70W | 0MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
3. For local environment, I do not use any GPU. Only CPU is used.
### CPU
Intel
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3215/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/3215/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5698
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5698/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5698/comments
|
https://api.github.com/repos/ollama/ollama/issues/5698/events
|
https://github.com/ollama/ollama/issues/5698
| 2,408,174,807
|
I_kwDOJ0Z1Ps6PidTX
| 5,698
|
add support MiniCPM-Llama3-V-2_5
|
{
"login": "LDLINGLINGLING",
"id": 47373076,
"node_id": "MDQ6VXNlcjQ3MzczMDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/47373076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LDLINGLINGLING",
"html_url": "https://github.com/LDLINGLINGLING",
"followers_url": "https://api.github.com/users/LDLINGLINGLING/followers",
"following_url": "https://api.github.com/users/LDLINGLINGLING/following{/other_user}",
"gists_url": "https://api.github.com/users/LDLINGLINGLING/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LDLINGLINGLING/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LDLINGLINGLING/subscriptions",
"organizations_url": "https://api.github.com/users/LDLINGLINGLING/orgs",
"repos_url": "https://api.github.com/users/LDLINGLINGLING/repos",
"events_url": "https://api.github.com/users/LDLINGLINGLING/events{/privacy}",
"received_events_url": "https://api.github.com/users/LDLINGLINGLING/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-15T08:37:13
| 2024-08-28T21:48:08
| 2024-08-28T21:48:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This model is the most powerful multi-modal model I have tried so far. It has a large number of users. However, it is not currently supported by ollama.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5698/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5698/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6345
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6345/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6345/comments
|
https://api.github.com/repos/ollama/ollama/issues/6345/events
|
https://github.com/ollama/ollama/pull/6345
| 2,464,171,418
|
PR_kwDOJ0Z1Ps54Rvqv
| 6,345
|
Update openai.md to remove extra checkbox for vision
|
{
"login": "pamelafox",
"id": 297042,
"node_id": "MDQ6VXNlcjI5NzA0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/297042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pamelafox",
"html_url": "https://github.com/pamelafox",
"followers_url": "https://api.github.com/users/pamelafox/followers",
"following_url": "https://api.github.com/users/pamelafox/following{/other_user}",
"gists_url": "https://api.github.com/users/pamelafox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pamelafox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pamelafox/subscriptions",
"organizations_url": "https://api.github.com/users/pamelafox/orgs",
"repos_url": "https://api.github.com/users/pamelafox/repos",
"events_url": "https://api.github.com/users/pamelafox/events{/privacy}",
"received_events_url": "https://api.github.com/users/pamelafox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-13T20:33:50
| 2024-08-13T20:36:05
| 2024-08-13T20:36:05
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6345",
"html_url": "https://github.com/ollama/ollama/pull/6345",
"diff_url": "https://github.com/ollama/ollama/pull/6345.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6345.patch",
"merged_at": "2024-08-13T20:36:05"
}
|
The list has Vision twice: once checked, once unchecked. I'm removing the second one, optimistically, but I haven't verified that a vision model works yet. So maybe the first one should be removed instead?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6345/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/571
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/571/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/571/comments
|
https://api.github.com/repos/ollama/ollama/issues/571/events
|
https://github.com/ollama/ollama/pull/571
| 1,907,992,694
|
PR_kwDOJ0Z1Ps5a8Yee
| 571
|
update dockerfile.cuda
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-22T00:59:12
| 2023-09-22T19:34:42
| 2023-09-22T19:34:42
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/571",
"html_url": "https://github.com/ollama/ollama/pull/571",
"diff_url": "https://github.com/ollama/ollama/pull/571.diff",
"patch_url": "https://github.com/ollama/ollama/pull/571.patch",
"merged_at": "2023-09-22T19:34:42"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/571/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7327
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7327/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7327/comments
|
https://api.github.com/repos/ollama/ollama/issues/7327/events
|
https://github.com/ollama/ollama/issues/7327
| 2,606,888,973
|
I_kwDOJ0Z1Ps6bYfgN
| 7,327
|
ollama create Error: open config.json: file does not exist
|
{
"login": "dragoncdj",
"id": 132640267,
"node_id": "U_kgDOB-fuCw",
"avatar_url": "https://avatars.githubusercontent.com/u/132640267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dragoncdj",
"html_url": "https://github.com/dragoncdj",
"followers_url": "https://api.github.com/users/dragoncdj/followers",
"following_url": "https://api.github.com/users/dragoncdj/following{/other_user}",
"gists_url": "https://api.github.com/users/dragoncdj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dragoncdj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dragoncdj/subscriptions",
"organizations_url": "https://api.github.com/users/dragoncdj/orgs",
"repos_url": "https://api.github.com/users/dragoncdj/repos",
"events_url": "https://api.github.com/users/dragoncdj/events{/privacy}",
"received_events_url": "https://api.github.com/users/dragoncdj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-23T01:31:29
| 2024-11-13T22:20:49
| 2024-11-13T22:20:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I use the create command
>ollama create mymodel2 -f D:\AI\qwen7\Modelfile
but it returns
Error: open config.json: file does not exist
This is my Modelfile
```
FROM .\export\pytorch_model.bin
PARAMETER stop <|eot|>
PARAMETER top_p 0.9
PARAMETER temperature 1.0
```
However, there is a config.json file in the same folder as the pytorch_model.bin file.
I don't know what is causing this or how to solve it.
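One way to narrow this down is to verify, before running `ollama create`, that the files the importer looks for are actually present next to the weights. A minimal sketch (a hypothetical helper of mine, not part of ollama):

```python
from pathlib import Path


def missing_model_files(model_dir: str, required=("config.json",)) -> list:
    """Return the names of required files that are absent from model_dir."""
    d = Path(model_dir)
    return [name for name in required if not (d / name).is_file()]
```

An empty list means the files are in place, which would point the problem at the path used in `FROM` rather than at the files themselves.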


### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama version
0.3.13
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7327/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2332
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2332/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2332/comments
|
https://api.github.com/repos/ollama/ollama/issues/2332/events
|
https://github.com/ollama/ollama/issues/2332
| 2,115,350,139
|
I_kwDOJ0Z1Ps5-Fa57
| 2,332
|
using a legacy x86_64 cpu and GTX 1050 Ti?
|
{
"login": "truatpasteurdotfr",
"id": 8300215,
"node_id": "MDQ6VXNlcjgzMDAyMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8300215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/truatpasteurdotfr",
"html_url": "https://github.com/truatpasteurdotfr",
"followers_url": "https://api.github.com/users/truatpasteurdotfr/followers",
"following_url": "https://api.github.com/users/truatpasteurdotfr/following{/other_user}",
"gists_url": "https://api.github.com/users/truatpasteurdotfr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/truatpasteurdotfr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/truatpasteurdotfr/subscriptions",
"organizations_url": "https://api.github.com/users/truatpasteurdotfr/orgs",
"repos_url": "https://api.github.com/users/truatpasteurdotfr/repos",
"events_url": "https://api.github.com/users/truatpasteurdotfr/events{/privacy}",
"received_events_url": "https://api.github.com/users/truatpasteurdotfr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2024-02-02T16:57:31
| 2024-02-03T16:31:51
| 2024-02-03T16:31:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I have an old machine I would try to play with:
```
$ lscpu
...
Model name: Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
...
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf eagerfpu pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm rsb_ctxsw tpr_shadow vnmi flexpriority dtherm
```
No AVX, but the gpu card is still supported (CC=6.1)
```
$ /c7/shared/cuda/12.1.1_530.30.02/samples/bin/x86_64/linux/release/deviceQuery
...
Device 0: "NVIDIA GeForce GTX 1050 Ti"
CUDA Driver Version / Runtime Version 12.2 / 12.1
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 4038 MBytes (4234674176 bytes)
(006) Multiprocessors, (128) CUDA Cores/MP: 768 CUDA Cores
...
```
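Whether a CPU advertises AVX/AVX2 can be read straight from the lscpu flags line above; here is a small standalone sketch of that check (my own helper, not ollama's actual detection code):

```python
def vector_extensions(flags: str) -> dict:
    """Report which x86 vector extensions appear in a CPU flags string."""
    present = set(flags.split())
    return {ext: ext in present for ext in ("sse4_1", "avx", "avx2")}
```

For the E5410 flags above this reports sse4_1 but neither avx nor avx2, matching the "CPU does not have vector extensions" log line further down.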
I have rebuilt ollama with CUDA support, but it is not using the GPU (although the GPU is properly detected):
```
[tru@mafalda ollama]$ ./ollama --version
Warning: could not connect to a running Ollama instance
Warning: client version is 0.1.23-0-g09a6f76
[tru@mafalda ollama]$ ./ollama serve
time=2024-02-02T17:27:46.581+01:00 level=INFO source=images.go:860 msg="total blobs: 16"
time=2024-02-02T17:27:46.583+01:00 level=INFO source=images.go:867 msg="total unused blobs removed: 0"
time=2024-02-02T17:27:46.585+01:00 level=INFO source=routes.go:995 msg="Listening on 127.0.0.1:11434 (version 0.1.23-0-g09a6f76)"
time=2024-02-02T17:27:46.585+01:00 level=INFO source=payload_common.go:106 msg="Extracting dynamic libraries..."
time=2024-02-02T17:27:58.309+01:00 level=INFO source=payload_common.go:145 msg="Dynamic LLM libraries [cpu cuda_v1_530 cpu_avx2 cpu_avx]"
time=2024-02-02T17:27:58.310+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-02T17:27:58.310+01:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-02T17:27:58.318+01:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [/usr/lib64/libnvidia-ml.so.535.129.03]"
time=2024-02-02T17:27:58.331+01:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-02-02T17:27:58.332+01:00 level=INFO source=cpu_common.go:18 msg="CPU does not have vector extensions"
time=2024-02-02T17:27:58.332+01:00 level=WARN source=gpu.go:128 msg="CPU does not have AVX or AVX2, disabling GPU support."
time=2024-02-02T17:27:58.332+01:00 level=INFO source=routes.go:1018 msg="no GPU detected"
[GIN] 2024/02/02 - 17:27:59 | 200 | 100.887µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/02/02 - 17:27:59 | 200 | 1.543664ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/02/02 - 17:27:59 | 200 | 1.425633ms | 127.0.0.1 | POST "/api/show"
time=2024-02-02T17:28:01.622+01:00 level=INFO source=cpu_common.go:18 msg="CPU does not have vector extensions"
time=2024-02-02T17:28:01.622+01:00 level=WARN source=gpu.go:128 msg="CPU does not have AVX or AVX2, disabling GPU support."
time=2024-02-02T17:28:01.622+01:00 level=INFO source=cpu_common.go:18 msg="CPU does not have vector extensions"
time=2024-02-02T17:28:01.622+01:00 level=WARN source=gpu.go:128 msg="CPU does not have AVX or AVX2, disabling GPU support."
time=2024-02-02T17:28:01.622+01:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
loading library /tmp/ollama2276873866/cpu/libext_server.so
...
```
The fallback to CPU works as expected and I can run it fine, albeit slowly:
```
[tru@mafalda ~]$ ollama run stablelm2 <<< ' why is the sky blue? '
The color of the sky depends on several ....
```
Why is AVX/AVX2 required to enable the GPU part?
Thanks
Tru
|
{
"login": "truatpasteurdotfr",
"id": 8300215,
"node_id": "MDQ6VXNlcjgzMDAyMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8300215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/truatpasteurdotfr",
"html_url": "https://github.com/truatpasteurdotfr",
"followers_url": "https://api.github.com/users/truatpasteurdotfr/followers",
"following_url": "https://api.github.com/users/truatpasteurdotfr/following{/other_user}",
"gists_url": "https://api.github.com/users/truatpasteurdotfr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/truatpasteurdotfr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/truatpasteurdotfr/subscriptions",
"organizations_url": "https://api.github.com/users/truatpasteurdotfr/orgs",
"repos_url": "https://api.github.com/users/truatpasteurdotfr/repos",
"events_url": "https://api.github.com/users/truatpasteurdotfr/events{/privacy}",
"received_events_url": "https://api.github.com/users/truatpasteurdotfr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2332/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4561
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4561/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4561/comments
|
https://api.github.com/repos/ollama/ollama/issues/4561/events
|
https://github.com/ollama/ollama/issues/4561
| 2,308,666,586
|
I_kwDOJ0Z1Ps6Jm3Ta
| 4,561
|
Is llava license correct (possibly should be Llama2 not Apache)?
|
{
"login": "asmith26",
"id": 6988036,
"node_id": "MDQ6VXNlcjY5ODgwMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6988036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asmith26",
"html_url": "https://github.com/asmith26",
"followers_url": "https://api.github.com/users/asmith26/followers",
"following_url": "https://api.github.com/users/asmith26/following{/other_user}",
"gists_url": "https://api.github.com/users/asmith26/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asmith26/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asmith26/subscriptions",
"organizations_url": "https://api.github.com/users/asmith26/orgs",
"repos_url": "https://api.github.com/users/asmith26/repos",
"events_url": "https://api.github.com/users/asmith26/events{/privacy}",
"received_events_url": "https://api.github.com/users/asmith26/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-05-21T16:21:28
| 2024-11-17T19:04:44
| 2024-11-17T19:04:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Looking at the llava ollama page, it lists the license as Apache: https://ollama.com/library/llava

Looking at the link to huggingface, it implies it's possibly Llama 2: https://huggingface.co/liuhaotian/llava-v1.5-7b#license
Not sure if @haotian-liu might be able to help clarify things?
Thanks!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4561/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/83
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/83/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/83/comments
|
https://api.github.com/repos/ollama/ollama/issues/83/events
|
https://github.com/ollama/ollama/pull/83
| 1,805,796,907
|
PR_kwDOJ0Z1Ps5Vkd13
| 83
|
fix multibyte responses
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-15T01:30:51
| 2023-07-15T03:14:38
| 2023-07-15T03:12:12
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/83",
"html_url": "https://github.com/ollama/ollama/pull/83",
"diff_url": "https://github.com/ollama/ollama/pull/83.diff",
"patch_url": "https://github.com/ollama/ollama/pull/83.patch",
"merged_at": "2023-07-15T03:12:12"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/83/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/83/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4907
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4907/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4907/comments
|
https://api.github.com/repos/ollama/ollama/issues/4907/events
|
https://github.com/ollama/ollama/issues/4907
| 2,340,585,613
|
I_kwDOJ0Z1Ps6LgoCN
| 4,907
|
Cannot run qwen2 7B, 1.5b
|
{
"login": "SAXN-SYNX",
"id": 59173145,
"node_id": "MDQ6VXNlcjU5MTczMTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/59173145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SAXN-SYNX",
"html_url": "https://github.com/SAXN-SYNX",
"followers_url": "https://api.github.com/users/SAXN-SYNX/followers",
"following_url": "https://api.github.com/users/SAXN-SYNX/following{/other_user}",
"gists_url": "https://api.github.com/users/SAXN-SYNX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SAXN-SYNX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SAXN-SYNX/subscriptions",
"organizations_url": "https://api.github.com/users/SAXN-SYNX/orgs",
"repos_url": "https://api.github.com/users/SAXN-SYNX/repos",
"events_url": "https://api.github.com/users/SAXN-SYNX/events{/privacy}",
"received_events_url": "https://api.github.com/users/SAXN-SYNX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-06-07T14:20:11
| 2024-06-09T14:06:03
| 2024-06-07T22:57:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Shows error while running it.
```
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-7B-Instruct
llama_model_loader: - kv 2: qwen2.block_count u32 = 28
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 8: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 12: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 17: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 19: tokenizer.chat_template str = {% for message in messages %}{% if lo...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_0: 197 tensors
llama_model_loader: - type q6_K: 1 tensors
llama_model_load: error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'qwen2'
llama_load_model_from_file: exception loading model
terminate called after throwing an instance of 'std::runtime_error'
what(): error loading model vocabulary: unknown pre-tokenizer type: 'qwen2'
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.34
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4907/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4907/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3509
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3509/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3509/comments
|
https://api.github.com/repos/ollama/ollama/issues/3509/events
|
https://github.com/ollama/ollama/issues/3509
| 2,229,050,488
|
I_kwDOJ0Z1Ps6E3Jx4
| 3,509
|
Can Ollama use both CPU and GPU for inference?
|
{
"login": "OPDEV001",
"id": 120762872,
"node_id": "U_kgDOBzKx-A",
"avatar_url": "https://avatars.githubusercontent.com/u/120762872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OPDEV001",
"html_url": "https://github.com/OPDEV001",
"followers_url": "https://api.github.com/users/OPDEV001/followers",
"following_url": "https://api.github.com/users/OPDEV001/following{/other_user}",
"gists_url": "https://api.github.com/users/OPDEV001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OPDEV001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OPDEV001/subscriptions",
"organizations_url": "https://api.github.com/users/OPDEV001/orgs",
"repos_url": "https://api.github.com/users/OPDEV001/repos",
"events_url": "https://api.github.com/users/OPDEV001/events{/privacy}",
"received_events_url": "https://api.github.com/users/OPDEV001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-04-06T03:20:18
| 2024-04-12T21:53:18
| 2024-04-12T21:53:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
May I know whether Ollama supports mixing CPU and GPU together when running on Windows? I know my hardware is not powerful enough for Ollama on its own, but I would still like to use what the GPU can contribute.
I checked the parameter information at the link below, but I still cannot mix CPU and GPU; most of the load stays on the CPU.
https://github.com/ollama/ollama/blob/main/docs/modelfile.md
If I put the whole load on the GPU, it reports "Out of VRAM", :) you know it.
I am guessing that, if this is supported, we could specify that the GPU takes part of the load and the CPU takes the rest?
Thanks,
### How should we solve this?
Please see content.
### What is the impact of not solving this?
If not, all load on GPU will crash.
### Anything else?
Thanks for all
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3509/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/378
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/378/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/378/comments
|
https://api.github.com/repos/ollama/ollama/issues/378/events
|
https://github.com/ollama/ollama/pull/378
| 1,856,053,749
|
PR_kwDOJ0Z1Ps5YNr72
| 378
|
copy metadata from source
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-18T04:56:04
| 2023-08-18T20:49:10
| 2023-08-18T20:49:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/378",
"html_url": "https://github.com/ollama/ollama/pull/378",
"diff_url": "https://github.com/ollama/ollama/pull/378.diff",
"patch_url": "https://github.com/ollama/ollama/pull/378.patch",
"merged_at": "2023-08-18T20:49:09"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/378/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7516
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7516/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7516/comments
|
https://api.github.com/repos/ollama/ollama/issues/7516/events
|
https://github.com/ollama/ollama/pull/7516
| 2,636,184,712
|
PR_kwDOJ0Z1Ps6A90R2
| 7,516
|
Update README.md
|
{
"login": "rapidarchitect",
"id": 126218667,
"node_id": "U_kgDOB4Xxqw",
"avatar_url": "https://avatars.githubusercontent.com/u/126218667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rapidarchitect",
"html_url": "https://github.com/rapidarchitect",
"followers_url": "https://api.github.com/users/rapidarchitect/followers",
"following_url": "https://api.github.com/users/rapidarchitect/following{/other_user}",
"gists_url": "https://api.github.com/users/rapidarchitect/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rapidarchitect/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rapidarchitect/subscriptions",
"organizations_url": "https://api.github.com/users/rapidarchitect/orgs",
"repos_url": "https://api.github.com/users/rapidarchitect/repos",
"events_url": "https://api.github.com/users/rapidarchitect/events{/privacy}",
"received_events_url": "https://api.github.com/users/rapidarchitect/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-05T18:33:16
| 2024-11-05T23:07:26
| 2024-11-05T23:07:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7516",
"html_url": "https://github.com/ollama/ollama/pull/7516",
"diff_url": "https://github.com/ollama/ollama/pull/7516.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7516.patch",
"merged_at": "2024-11-05T23:07:26"
}
|
Added Reddit Rate below hexabot: Ollama-powered Reddit search and analysis, with Streamlit for the interface
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7516/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/324
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/324/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/324/comments
|
https://api.github.com/repos/ollama/ollama/issues/324/events
|
https://github.com/ollama/ollama/pull/324
| 1,846,012,991
|
PR_kwDOJ0Z1Ps5XrwE1
| 324
|
Generate private/public keypair for use w/ auth
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-10T23:24:30
| 2023-08-11T22:28:28
| 2023-08-11T17:58:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/324",
"html_url": "https://github.com/ollama/ollama/pull/324",
"diff_url": "https://github.com/ollama/ollama/pull/324.diff",
"patch_url": "https://github.com/ollama/ollama/pull/324.patch",
"merged_at": "2023-08-11T17:58:23"
}
|
This change automatically creates a new OpenSSH compatible ed25519 key pair in your `~/.ollama` directory. The public key can be uploaded to Ollama and can be subsequently used to authenticate.
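As a rough illustration (not the PR's actual Go code), the generated key pair is equivalent to what OpenSSH tooling produces; assuming a standard `ssh-keygen` is available, a comparable pair can be created like this:

```shell
# Create an OpenSSH-compatible ed25519 key pair in a scratch directory
# (illustrative only; Ollama itself writes its keys under ~/.ollama).
set -e
dir="$(mktemp -d)"
ssh-keygen -t ed25519 -f "$dir/id_ed25519" -N "" -q
# The .pub file is the part that would be uploaded for authentication.
cat "$dir/id_ed25519.pub"
```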
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/324/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7317
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7317/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7317/comments
|
https://api.github.com/repos/ollama/ollama/issues/7317/events
|
https://github.com/ollama/ollama/issues/7317
| 2,605,703,293
|
I_kwDOJ0Z1Ps6bT-B9
| 7,317
|
ollama won't start as a service, will start using 'serve'?
|
{
"login": "MikeB2019x",
"id": 49003263,
"node_id": "MDQ6VXNlcjQ5MDAzMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/49003263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MikeB2019x",
"html_url": "https://github.com/MikeB2019x",
"followers_url": "https://api.github.com/users/MikeB2019x/followers",
"following_url": "https://api.github.com/users/MikeB2019x/following{/other_user}",
"gists_url": "https://api.github.com/users/MikeB2019x/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MikeB2019x/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MikeB2019x/subscriptions",
"organizations_url": "https://api.github.com/users/MikeB2019x/orgs",
"repos_url": "https://api.github.com/users/MikeB2019x/repos",
"events_url": "https://api.github.com/users/MikeB2019x/events{/privacy}",
"received_events_url": "https://api.github.com/users/MikeB2019x/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-10-22T14:54:17
| 2024-10-23T17:14:07
| 2024-10-23T17:14:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am trying to run ollama after a manual install on an Ubuntu VM with no internet connectivity. There is no GPU at the moment.
I am able to run ollama successfully from the CLI with:
```
ollama serve
```
When I try to run ollama as a service with:
```
sudo systemctl daemon-reload
sudo systemctl enable ollama
sudo systemctl start ollama
```
ollama fails to start. The only error shown by `systemctl --failed` is:
```
UNIT LOAD ACTIVE SUB DESCRIPTION
● zfs-import-scan.service loaded failed failed Import ZFS pools by device scanning
```
I assumed that if `ollama serve` worked then the service would as well, because both would be using the same services.
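When a unit fails silently like this, the unit's own journal usually carries more detail than `systemctl --failed`; a minimal debugging sketch, assuming the standard systemd tooling on Ubuntu:

```shell
# Show the ollama unit's current state and last exit reason.
systemctl status ollama --no-pager
# Dump the unit's most recent journal entries (the actual error usually lives here).
journalctl -u ollama --no-pager -n 50
# Confirm which unit file systemd actually loaded.
systemctl cat ollama
```

Note the `zfs-import-scan.service` failure reported above is unrelated to ollama; the ollama-specific failure should appear in `journalctl -u ollama`.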
### OS
Linux
### GPU
_No response_
### CPU
AMD
### Ollama version
0.3.6
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7317/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6857
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6857/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6857/comments
|
https://api.github.com/repos/ollama/ollama/issues/6857/events
|
https://github.com/ollama/ollama/issues/6857
| 2,533,694,266
|
I_kwDOJ0Z1Ps6XBRs6
| 6,857
|
Issues getting rocm support to compile on Gentoo
|
{
"login": "Roger-Roger-debug",
"id": 29002762,
"node_id": "MDQ6VXNlcjI5MDAyNzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/29002762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Roger-Roger-debug",
"html_url": "https://github.com/Roger-Roger-debug",
"followers_url": "https://api.github.com/users/Roger-Roger-debug/followers",
"following_url": "https://api.github.com/users/Roger-Roger-debug/following{/other_user}",
"gists_url": "https://api.github.com/users/Roger-Roger-debug/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Roger-Roger-debug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Roger-Roger-debug/subscriptions",
"organizations_url": "https://api.github.com/users/Roger-Roger-debug/orgs",
"repos_url": "https://api.github.com/users/Roger-Roger-debug/repos",
"events_url": "https://api.github.com/users/Roger-Roger-debug/events{/privacy}",
"received_events_url": "https://api.github.com/users/Roger-Roger-debug/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 19
| 2024-09-18T13:05:40
| 2024-12-10T17:47:24
| 2024-12-10T17:47:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm trying to get the project to compile on Gentoo but am running into some issues as Gentoo uses different paths.
On Gentoo, rocm libraries get installed into /usr/lib64, hip-clang lives somewhere else, and I'm sure there are some other differences as well.
As suggested in the wiki, I set the following environment variables to point the build script at the right paths: `ROCM_PATH=/usr/lib64 CLBlast_DIR=/usr/lib64/cmake/CLBlast`. This got me a bit further, but compilation still failed because the compiler paths were wrong.
I edited gen_linux.sh and changed the cmake definition for rocm
```
CMAKE_DEFS="${COMMON_CMAKE_DEFS} ${CMAKE_DEFS} -DGGML_HIPBLAS=on
-DGGML_CUDA_NO_PEER_COPY=on -DCMAKE_C_COMPILER=$ROCM_PATH/llvm/bin/clang
-DCMAKE_CXX_COMPILER=$ROCM_PATH/llvm/bin/clang++ -DAMDGPU_TARGETS=$(amdGPUs) -DGPU_TARGETS=$(amdGPUs)"
```
to
```
CMAKE_DEFS="${COMMON_CMAKE_DEFS} ${CMAKE_DEFS} -DGGML_HIPBLAS=on
-DGGML_CUDA_NO_PEER_COPY=on -DCMAKE_C_COMPILER=$(hipconfig -l)/clang
-DCMAKE_CXX_COMPILER=$(hipconfig -l)/clang++ -DAMDGPU_TARGETS=$(amdGPUs) -DGPU_TARGETS=$(amdGPUs)"
```
([this](https://github.com/ggerganov/llama.cpp/blob/8962422b1c6f9b8b15f5aeaea42600bcc2d44177/docs/build.md#hipblas) seems to be how llama.cpp sets its `HIPCXX` path, and it points to the correct path for me). This got me one step further again, but this time it complained about not finding some cmake files. Looking at the llama.cpp documentation again, it sets `HIP_PATH` for compilation as well (though to a path that is wrong for me), so I modified the build function to export
```
export HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -p)"
```
before compilation.
After that, the project compiles correctly, but trying to load any model crashes ollama. The `ollama serve` process reports
```
rocBLAS error: Tensile solution found, but exception thrown for { a_type: "f16_r", b_type: "f16_r", c_type: "f16_r", d_type: "f16_r", compute_type: "f16_r", transA: 'T', transB: 'N', M: 32, N: 2, K: 256, alpha: 1, row_stride_a: 1, col_stride_a: 1024, row_stride_b: 1, col_stride_b: 2048, row_stride_c: 1, col_stride_c: 32, row_stride_d: 1, col_stride_d: 32, beta: 0, batch_count: 8, strided_batch: false, stride_a: 32768, stride_b: 4096, stride_c: 64, stride_d: 64, atomics_mode: atomics_allowed }
Alpha value -0.0281982 doesn't match that set in problem: 1
This message will be only be displayed once, unless the ROCBLAS_VERBOSE_TENSILE_ERROR environment variable is set.
CUDA error: CUBLAS_STATUS_INTERNAL_ERROR
current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /home/roger/Git/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:1890
hipblasGemmBatchedEx(ctx.cublas_handle(), HIPBLAS_OP_T, HIPBLAS_OP_N, ne01, ne11, ne10, alpha, (const void **) (ptrs_src.get() + 0*ne23), HIPBLAS_R_16F, nb01/nb00, (const void **) (ptrs_src.get() + 1*ne23), HIPBLAS_R_16F, nb11/nb10, beta, ( void **) (ptrs_dst.get() + 0*ne23), cu_data_type, ne01, ne23, cu_compute_type, HIPBLAS_GEMM_DEFAULT)
/home/roger/Git/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:102: CUDA error
time=2024-09-18T14:50:34.883+02:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
time=2024-09-18T14:50:36.936+02:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR\n current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /home/roger/Git/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:1890\n hipblasGemmBatchedEx(ctx.cublas_handle(), HIPBLAS_OP_T, HIPBLAS_OP_N, ne01, ne11, ne10, alpha, (const void **) (ptrs_src.get() + 0*ne23), HIPBLAS_R_16F, nb01/nb00, (const void **) (ptrs_src.get() + 1*ne23), HIPBLAS_R_16F, nb11/nb10, beta, ( void **) (ptrs_dst.get() + 0*ne23), cu_data_type, ne01, ne23, cu_compute_type, HIPBLAS_GEMM_DEFAULT)\n/home/roger/Git/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:102: CUDA error"
```
the `ollama run` process crashes with
```
Error: llama runner process has terminated: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR
current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /home/roger/Git/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:1890
hipblasGemmBatchedEx(ctx.cublas_handle(), HIPBLAS_OP_T, HIPBLAS_OP_N, ne01, ne11, ne10, alpha, (const void **) (ptrs_src.get() + 0*ne23), HIPBLAS_R_16F, nb01/nb00, (const void **) (ptrs_src.get() + 1*ne23), HIPBLAS_R_16F, nb11/nb10, beta, ( void **) (ptrs_dst.get() + 0*ne23), cu_data_type, ne01, ne23, cu_compute_type, HIPBLAS_GEMM_DEFAULT)
/home/roger/Git/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:102: CUDA error
```
I can't make any sense of these errors and don't know what else to try.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
git head
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6857/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3920
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3920/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3920/comments
|
https://api.github.com/repos/ollama/ollama/issues/3920/events
|
https://github.com/ollama/ollama/pull/3920
| 2,264,330,323
|
PR_kwDOJ0Z1Ps5tw_4f
| 3,920
|
Reload model if `num_gpu` changes
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-25T19:20:28
| 2024-04-25T23:02:41
| 2024-04-25T23:02:40
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3920",
"html_url": "https://github.com/ollama/ollama/pull/3920",
"diff_url": "https://github.com/ollama/ollama/pull/3920.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3920.patch",
"merged_at": "2024-04-25T23:02:40"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3920/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5577
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5577/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5577/comments
|
https://api.github.com/repos/ollama/ollama/issues/5577/events
|
https://github.com/ollama/ollama/issues/5577
| 2,398,786,428
|
I_kwDOJ0Z1Ps6O-pN8
| 5,577
|
Pulling model in docker-compose command
|
{
"login": "aditya6767",
"id": 77670575,
"node_id": "MDQ6VXNlcjc3NjcwNTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/77670575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aditya6767",
"html_url": "https://github.com/aditya6767",
"followers_url": "https://api.github.com/users/aditya6767/followers",
"following_url": "https://api.github.com/users/aditya6767/following{/other_user}",
"gists_url": "https://api.github.com/users/aditya6767/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aditya6767/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aditya6767/subscriptions",
"organizations_url": "https://api.github.com/users/aditya6767/orgs",
"repos_url": "https://api.github.com/users/aditya6767/repos",
"events_url": "https://api.github.com/users/aditya6767/events{/privacy}",
"received_events_url": "https://api.github.com/users/aditya6767/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-09T17:36:27
| 2024-11-06T12:31:21
| 2024-11-06T12:31:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
- [ ] Be able to run `ollama pull llama2` in the docker-compose command
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5577/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8687
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8687/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8687/comments
|
https://api.github.com/repos/ollama/ollama/issues/8687/events
|
https://github.com/ollama/ollama/issues/8687
| 2,820,126,992
|
I_kwDOJ0Z1Ps6oF7kQ
| 8,687
|
Issue with Ollama Model Download: Restarts Automatically or Throws an Error.
|
{
"login": "baraich",
"id": 146362414,
"node_id": "U_kgDOCLlQLg",
"avatar_url": "https://avatars.githubusercontent.com/u/146362414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baraich",
"html_url": "https://github.com/baraich",
"followers_url": "https://api.github.com/users/baraich/followers",
"following_url": "https://api.github.com/users/baraich/following{/other_user}",
"gists_url": "https://api.github.com/users/baraich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baraich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baraich/subscriptions",
"organizations_url": "https://api.github.com/users/baraich/orgs",
"repos_url": "https://api.github.com/users/baraich/repos",
"events_url": "https://api.github.com/users/baraich/events{/privacy}",
"received_events_url": "https://api.github.com/users/baraich/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2025-01-30T07:43:14
| 2025-01-30T08:50:40
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
## Setting the Context
While downloading a model with the `ollama pull` command, the download starts normally. However, the process then automatically restarts and begins downloading again from 0%.
I have seen other issues related to downloading, and I believe this problem is caused mainly by inactivity monitoring (https://github.com/ollama/ollama/pull/1916).
## Examples
> A case when the commands threw error.

> A case when the downloading restarts
https://github.com/user-attachments/assets/8ffd5d31-a879-4570-9839-576f2d3a0bf7
## Remarks
Downloading is now purely luck-based; I have been trying to download this model for the last 2 hours, and most of the time I see the restart around 34MB, 186MB, and 276MB. I don't think these megabyte numbers should matter, but I am sharing them anyway.
Also, I did manage to download this model once, but when I ran `ollama list` after pulling it, the list was empty. I don't know why that happened, and since I am not able to download the model again I can't verify.
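For what it's worth, the behavior I'd expect is resume-from-offset rather than restart-from-zero. A minimal sketch of that logic (hypothetical helper using HTTP Range semantics, not ollama's actual downloader; `fetch_range` is an assumed callback):

```python
def resume_download(fetch_range, total_size: int, chunk: int = 4) -> bytes:
    """Download total_size bytes, resuming from the last received offset
    after each failure instead of restarting from zero.

    fetch_range(start, end) returns the bytes for [start, end)
    or raises IOError on a dropped connection.
    """
    data = bytearray()
    while len(data) < total_size:
        start = len(data)  # resume point: bytes already received
        try:
            data += fetch_range(start, min(start + chunk, total_size))
        except IOError:
            continue  # retry from the same offset, not from 0%
    return bytes(data)
```

With a server that supports Range requests, a transient failure only costs the in-flight chunk, never the whole download.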
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8687/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7869
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7869/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7869/comments
|
https://api.github.com/repos/ollama/ollama/issues/7869/events
|
https://github.com/ollama/ollama/issues/7869
| 2,701,093,707
|
I_kwDOJ0Z1Ps6g_2tL
| 7,869
|
Installation not working on Fedora 41 Linux
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 5
| 2024-11-28T07:18:31
| 2024-11-28T08:02:40
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
curl -fsSL https://ollama.com/install.sh | sh
> Installing ollama to /usr/local
> [sudo] password for bns:
> >>> Downloading Linux amd64 bundle
> ######################################################################## 100.0%
> >>> Creating ollama user...
> >>> Adding ollama user to render group...
> >>> Adding ollama user to video group...
> >>> Adding current user to ollama group...
> >>> Creating ollama systemd service...
> >>> Enabling and starting ollama service...
> Created symlink '/etc/systemd/system/default.target.wants/ollama.service' → '/etc/systemd/system/ollama.service'.
> >>> Installing NVIDIA repository...
> Unknown argument "--add-repo" for command "config-manager". Add "--help" for more information about the arguments.
```
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7869/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7869/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/208
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/208/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/208/comments
|
https://api.github.com/repos/ollama/ollama/issues/208/events
|
https://github.com/ollama/ollama/pull/208
| 1,820,587,068
|
PR_kwDOJ0Z1Ps5WWNTq
| 208
|
github issue templates
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-25T15:22:35
| 2023-08-04T14:06:41
| 2023-07-25T15:25:39
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/208",
"html_url": "https://github.com/ollama/ollama/pull/208",
"diff_url": "https://github.com/ollama/ollama/pull/208.diff",
"patch_url": "https://github.com/ollama/ollama/pull/208.patch",
"merged_at": null
}
|
resolves #182
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/208/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5926
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5926/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5926/comments
|
https://api.github.com/repos/ollama/ollama/issues/5926/events
|
https://github.com/ollama/ollama/pull/5926
| 2,428,401,602
|
PR_kwDOJ0Z1Ps52Yp_X
| 5,926
|
Prevent loading too large models on windows
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-24T20:19:25
| 2024-08-12T16:08:31
| 2024-08-11T18:30:20
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5926",
"html_url": "https://github.com/ollama/ollama/pull/5926",
"diff_url": "https://github.com/ollama/ollama/pull/5926.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5926.patch",
"merged_at": "2024-08-11T18:30:20"
}
|
We already block memory exhaustion on Linux, and we should apply the same check on Windows as well.
We can't apply the same logic to macOS, as it uses fully dynamic swap space and has no concept of free swap space.
Fixes #5882
Fixes #4955
Fixes #5958
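Conceptually, the check amounts to refusing a load that can't fit in free physical memory plus free swap (a sketch of the idea only, not the actual Go code in this PR; names are illustrative):

```python
def can_load(model_bytes: int, free_ram: int, free_swap: int) -> bool:
    """Linux/Windows-style guard: block loading a model larger than
    free RAM plus free swap. macOS is skipped upstream because its
    swap is fully dynamic, so "free swap" is not a meaningful number."""
    return model_bytes <= free_ram + free_swap
```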
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5926/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4054
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4054/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4054/comments
|
https://api.github.com/repos/ollama/ollama/issues/4054/events
|
https://github.com/ollama/ollama/issues/4054
| 2,271,742,386
|
I_kwDOJ0Z1Ps6HaAmy
| 4,054
|
llama-3-chinese-8b-instruct model infinite loop generate & cannot stop
|
{
"login": "gavinliu",
"id": 3281741,
"node_id": "MDQ6VXNlcjMyODE3NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3281741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gavinliu",
"html_url": "https://github.com/gavinliu",
"followers_url": "https://api.github.com/users/gavinliu/followers",
"following_url": "https://api.github.com/users/gavinliu/following{/other_user}",
"gists_url": "https://api.github.com/users/gavinliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gavinliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gavinliu/subscriptions",
"organizations_url": "https://api.github.com/users/gavinliu/orgs",
"repos_url": "https://api.github.com/users/gavinliu/repos",
"events_url": "https://api.github.com/users/gavinliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/gavinliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-30T15:01:33
| 2024-05-24T00:33:08
| 2024-05-24T00:33:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hey, I found an issue where generation loops infinitely and cannot be stopped when deploying a [Chinese fine-tuned llama3 model](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-gguf).
How can I solve this problem?
Modelfile:
```Modelfile
FROM /llama-3-chinese-8b-instruct/ggml-model-q8_0.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
SYSTEM """"""
PARAMETER num_keep 24
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
PARAMETER stop assistant
```
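My understanding is that each `PARAMETER stop` should cut generation at the earliest matching sequence; a minimal sketch of that behavior (hypothetical helper for illustration, not ollama's implementation):

```python
def truncate_at_stop(text: str, stop_sequences: list[str]) -> str:
    """Cut text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)  # keep only text before the first stop
    return text[:cut]
```

If the model never emits `<|eot_id|>` (or emits it as plain text rather than the special token), none of these stops fire and generation runs forever, which matches the symptom above.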
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "gavinliu",
"id": 3281741,
"node_id": "MDQ6VXNlcjMyODE3NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3281741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gavinliu",
"html_url": "https://github.com/gavinliu",
"followers_url": "https://api.github.com/users/gavinliu/followers",
"following_url": "https://api.github.com/users/gavinliu/following{/other_user}",
"gists_url": "https://api.github.com/users/gavinliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gavinliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gavinliu/subscriptions",
"organizations_url": "https://api.github.com/users/gavinliu/orgs",
"repos_url": "https://api.github.com/users/gavinliu/repos",
"events_url": "https://api.github.com/users/gavinliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/gavinliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4054/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3595
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3595/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3595/comments
|
https://api.github.com/repos/ollama/ollama/issues/3595/events
|
https://github.com/ollama/ollama/pull/3595
| 2,237,694,351
|
PR_kwDOJ0Z1Ps5sW9eo
| 3,595
|
Added MindsDB information
|
{
"login": "chandrevdw31",
"id": 32901682,
"node_id": "MDQ6VXNlcjMyOTAxNjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/32901682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chandrevdw31",
"html_url": "https://github.com/chandrevdw31",
"followers_url": "https://api.github.com/users/chandrevdw31/followers",
"following_url": "https://api.github.com/users/chandrevdw31/following{/other_user}",
"gists_url": "https://api.github.com/users/chandrevdw31/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chandrevdw31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chandrevdw31/subscriptions",
"organizations_url": "https://api.github.com/users/chandrevdw31/orgs",
"repos_url": "https://api.github.com/users/chandrevdw31/repos",
"events_url": "https://api.github.com/users/chandrevdw31/events{/privacy}",
"received_events_url": "https://api.github.com/users/chandrevdw31/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-11T13:10:33
| 2024-04-15T22:35:30
| 2024-04-15T22:35:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3595",
"html_url": "https://github.com/ollama/ollama/pull/3595",
"diff_url": "https://github.com/ollama/ollama/pull/3595.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3595.patch",
"merged_at": "2024-04-15T22:35:30"
}
|
Added more details about MindsDB so that Ollama users know they can connect their Ollama models to nearly 200 data platforms, including databases, vector stores, and applications.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3595/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7096
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7096/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7096/comments
|
https://api.github.com/repos/ollama/ollama/issues/7096/events
|
https://github.com/ollama/ollama/pull/7096
| 2,565,166,694
|
PR_kwDOJ0Z1Ps59j2bp
| 7,096
|
Add G1 to list of integrations
|
{
"login": "hidden1nin",
"id": 8339670,
"node_id": "MDQ6VXNlcjgzMzk2NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8339670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hidden1nin",
"html_url": "https://github.com/hidden1nin",
"followers_url": "https://api.github.com/users/hidden1nin/followers",
"following_url": "https://api.github.com/users/hidden1nin/following{/other_user}",
"gists_url": "https://api.github.com/users/hidden1nin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hidden1nin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hidden1nin/subscriptions",
"organizations_url": "https://api.github.com/users/hidden1nin/orgs",
"repos_url": "https://api.github.com/users/hidden1nin/repos",
"events_url": "https://api.github.com/users/hidden1nin/events{/privacy}",
"received_events_url": "https://api.github.com/users/hidden1nin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-03T23:49:55
| 2024-10-05T18:57:53
| 2024-10-05T18:57:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7096",
"html_url": "https://github.com/ollama/ollama/pull/7096",
"diff_url": "https://github.com/ollama/ollama/pull/7096.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7096.patch",
"merged_at": "2024-10-05T18:57:53"
}
|
I added g1 to the list of integrations in the README file. Hopefully this brings the project more attention.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7096/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3038
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3038/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3038/comments
|
https://api.github.com/repos/ollama/ollama/issues/3038/events
|
https://github.com/ollama/ollama/issues/3038
| 2,177,610,819
|
I_kwDOJ0Z1Ps6By7RD
| 3,038
|
Log says "Nvidia GPU detected" and then "no GPU detected"
|
{
"login": "jimstevens2001",
"id": 250203,
"node_id": "MDQ6VXNlcjI1MDIwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/250203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimstevens2001",
"html_url": "https://github.com/jimstevens2001",
"followers_url": "https://api.github.com/users/jimstevens2001/followers",
"following_url": "https://api.github.com/users/jimstevens2001/following{/other_user}",
"gists_url": "https://api.github.com/users/jimstevens2001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimstevens2001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimstevens2001/subscriptions",
"organizations_url": "https://api.github.com/users/jimstevens2001/orgs",
"repos_url": "https://api.github.com/users/jimstevens2001/repos",
"events_url": "https://api.github.com/users/jimstevens2001/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimstevens2001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-10T08:57:57
| 2024-03-10T12:43:27
| 2024-03-10T12:33:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am running a fresh install of Ollama inside of an Ubuntu 22.04 VM running an Nvidia RTX 4090 via pci passthrough (installed with "curl -fsSL https://ollama.com/install.sh | sh"). I have verified that nvidia-smi works as expected and a pytorch program can detect the GPU, but when I run Ollama, it uses the CPU to execute. Note that I have an almost identical setup (except on the host rather than in a guest) running a version of Ollama from late December with "ollama run mixtral:8x7b-instruct-v0.1-q2_K" and it uses the GPU properly.
Here is the log output that shows the inconsistent messages "Nvidia GPU detected" and then "no GPU detected"...
Mar 10 07:14:14 hinton systemd[1]: Started Ollama Service.
Mar 10 07:14:14 hinton ollama[998]: time=2024-03-10T07:14:14.622Z level=INFO source=images.go:710 msg="total blobs: 0"
Mar 10 07:14:14 hinton ollama[998]: time=2024-03-10T07:14:14.622Z level=INFO source=images.go:717 msg="total unused blobs removed: 0"
Mar 10 07:14:14 hinton ollama[998]: time=2024-03-10T07:14:14.622Z level=INFO source=routes.go:1021 msg="Listening on 127.0.0.1:11434 (version 0.1>
Mar 10 07:14:14 hinton ollama[998]: time=2024-03-10T07:14:14.622Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
Mar 10 07:14:16 hinton ollama[998]: time=2024-03-10T07:14:16.159Z level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu_avx2 ro>
Mar 10 07:14:16 hinton ollama[998]: time=2024-03-10T07:14:16.159Z level=INFO source=gpu.go:94 msg="Detecting GPU type"
Mar 10 07:14:16 hinton ollama[998]: time=2024-03-10T07:14:16.159Z level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidi>
Mar 10 07:14:16 hinton ollama[998]: time=2024-03-10T07:14:16.160Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/lib/x86_64-li>
Mar 10 07:14:16 hinton ollama[998]: time=2024-03-10T07:14:16.165Z level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
Mar 10 07:14:16 hinton ollama[998]: time=2024-03-10T07:14:16.165Z level=INFO source=cpu_common.go:18 msg="CPU does not have vector extensions"
Mar 10 07:14:16 hinton ollama[998]: time=2024-03-10T07:14:16.165Z level=WARN source=gpu.go:128 msg="CPU does not have AVX or AVX2, disabling GPU >
Mar 10 07:14:16 hinton ollama[998]: time=2024-03-10T07:14:16.165Z level=INFO source=routes.go:1044 msg="no GPU detected"
|
{
"login": "jimstevens2001",
"id": 250203,
"node_id": "MDQ6VXNlcjI1MDIwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/250203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimstevens2001",
"html_url": "https://github.com/jimstevens2001",
"followers_url": "https://api.github.com/users/jimstevens2001/followers",
"following_url": "https://api.github.com/users/jimstevens2001/following{/other_user}",
"gists_url": "https://api.github.com/users/jimstevens2001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimstevens2001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimstevens2001/subscriptions",
"organizations_url": "https://api.github.com/users/jimstevens2001/orgs",
"repos_url": "https://api.github.com/users/jimstevens2001/repos",
"events_url": "https://api.github.com/users/jimstevens2001/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimstevens2001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3038/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5185
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5185/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5185/comments
|
https://api.github.com/repos/ollama/ollama/issues/5185/events
|
https://github.com/ollama/ollama/issues/5185
| 2,364,595,820
|
I_kwDOJ0Z1Ps6M8N5s
| 5,185
|
Florence vision model
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 6
| 2024-06-20T14:25:47
| 2024-09-03T16:58:13
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/microsoft/Florence-2-large/tree/main uses PyTorch
https://huggingface.co/spaces/SixOpen/Florence-2-large-ft
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5185/reactions",
"total_count": 26,
"+1": 26,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5185/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3216
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3216/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3216/comments
|
https://api.github.com/repos/ollama/ollama/issues/3216/events
|
https://github.com/ollama/ollama/issues/3216
| 2,191,563,226
|
I_kwDOJ0Z1Ps6CoJna
| 3,216
|
baichuan-inc/Baichuan2-13B-Chat not supported. Can it be supported later
|
{
"login": "wangshuai67",
"id": 13214849,
"node_id": "MDQ6VXNlcjEzMjE0ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/13214849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangshuai67",
"html_url": "https://github.com/wangshuai67",
"followers_url": "https://api.github.com/users/wangshuai67/followers",
"following_url": "https://api.github.com/users/wangshuai67/following{/other_user}",
"gists_url": "https://api.github.com/users/wangshuai67/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangshuai67/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangshuai67/subscriptions",
"organizations_url": "https://api.github.com/users/wangshuai67/orgs",
"repos_url": "https://api.github.com/users/wangshuai67/repos",
"events_url": "https://api.github.com/users/wangshuai67/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangshuai67/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 1
| 2024-03-18T08:46:09
| 2024-03-22T03:58:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat not supported
### How should we solve this?
baichuan-inc/Baichuan2-13B-Chat is not supported. Can it be supported later?
### What is the impact of not solving this?
_No response_
### Anything else?
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3216/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2289
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2289/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2289/comments
|
https://api.github.com/repos/ollama/ollama/issues/2289/events
|
https://github.com/ollama/ollama/pull/2289
| 2,110,603,109
|
PR_kwDOJ0Z1Ps5lmRzw
| 2,289
|
fix: preserve last system message from modelfile
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-31T17:22:03
| 2024-02-01T02:45:02
| 2024-02-01T02:45:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2289",
"html_url": "https://github.com/ollama/ollama/pull/2289",
"diff_url": "https://github.com/ollama/ollama/pull/2289.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2289.patch",
"merged_at": "2024-02-01T02:45:01"
}
|
When truncating messages to fit in the context window, the system message from the modelfile was not carried over. This change preserves the modelfile system message in the case of truncation.
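The intended behavior can be illustrated with a small sketch (a hypothetical helper, not Ollama's actual Go implementation): when trimming to the most recent messages, a leading system message is pinned rather than dropped.

```python
def truncate_messages(messages, max_messages):
    """Keep the most recent messages, but always carry over a leading
    system message (e.g. one set in a Modelfile) when truncating."""
    if len(messages) <= max_messages:
        return messages
    if messages and messages[0]["role"] == "system":
        # Reserve the first slot for the system message and fill the
        # remainder with the most recent messages.
        return [messages[0]] + messages[len(messages) - (max_messages - 1):]
    return messages[len(messages) - max_messages:]


msgs = [{"role": "system", "content": "be brief"}] + [
    {"role": "user", "content": f"msg {i}"} for i in range(5)
]
kept = truncate_messages(msgs, 3)
```

Here `kept` holds the system message plus the two most recent user messages.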
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2289/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8078
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8078/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8078/comments
|
https://api.github.com/repos/ollama/ollama/issues/8078/events
|
https://github.com/ollama/ollama/pull/8078
| 2,736,953,410
|
PR_kwDOJ0Z1Ps6FEyRs
| 8,078
|
llama: update grammar test to expose lack of insertion order for JSON schema to grammar conversion
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-12T21:54:27
| 2024-12-19T03:44:52
| 2024-12-19T03:44:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8078",
"html_url": "https://github.com/ollama/ollama/pull/8078",
"diff_url": "https://github.com/ollama/ollama/pull/8078.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8078.patch",
"merged_at": "2024-12-19T03:44:50"
}
|
This test is updated with a more complex JSON schema to show that insertion order is not maintained in the grammar generated by `json-schema-to-grammar`.
Documents behavior in: #7978
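The property under test can be shown in a few lines (a standalone illustration, not the converter itself): rules generated from a schema's `properties` should follow the schema author's key order, whereas a converter that sorts keys, or iterates an unordered map, emits a different order.

```python
import json

schema = """{"properties": {
    "name": {"type": "string"},
    "age":  {"type": "integer"},
    "city": {"type": "string"}
}}"""

# Python's json.loads preserves key order, so this list reflects the
# field order the schema author wrote.
props = list(json.loads(schema)["properties"])

# A converter backed by an unordered map effectively yields some other
# order instead (sorted order shown here as a stand-in).
unordered = sorted(props)
```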
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8078/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7109
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7109/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7109/comments
|
https://api.github.com/repos/ollama/ollama/issues/7109/events
|
https://github.com/ollama/ollama/issues/7109
| 2,568,969,243
|
I_kwDOJ0Z1Ps6ZH1wb
| 7,109
|
Downloading models too slow
|
{
"login": "rubenmejiac",
"id": 20344715,
"node_id": "MDQ6VXNlcjIwMzQ0NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/20344715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rubenmejiac",
"html_url": "https://github.com/rubenmejiac",
"followers_url": "https://api.github.com/users/rubenmejiac/followers",
"following_url": "https://api.github.com/users/rubenmejiac/following{/other_user}",
"gists_url": "https://api.github.com/users/rubenmejiac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rubenmejiac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rubenmejiac/subscriptions",
"organizations_url": "https://api.github.com/users/rubenmejiac/orgs",
"repos_url": "https://api.github.com/users/rubenmejiac/repos",
"events_url": "https://api.github.com/users/rubenmejiac/events{/privacy}",
"received_events_url": "https://api.github.com/users/rubenmejiac/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-10-06T23:20:14
| 2024-11-05T22:39:45
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have been getting very slow model downloads since I installed Ollama on Windows 11. No problems running models, etc.; it's only the download speeds.
The terminal seems to report a different speed than my network monitor shows.
I include screenshots of two downloads and the network monitor, which reports approx. 30 Mbps while the Ollama progress bar indicates 1.7-1.9 MB/s. I've got no problems with firewalls or proxies, and downloads of big files from other clients usually run at 50-300 MBps. My network speed is 600 MBps.
`PowerShell 7.4.5
PS C:\Users\v6u2mop> ollama pull deepseek-coder-v2:16b
pulling manifest
pulling 5ff0abeeac1d... 32% ▕██████████████████ ▏ 2.9 GB/8.9 GB 1.9 MB/s 53m27s `
`PS C:\Users\ruben_v6u2mop> ollama pull qwen2.5-coder
pulling manifest
pulling ced7796abcbb... 69% ▕████████████████████████████ ▏ 3.2 GB/4.7 GB 1.8 MB/s 13m0s `

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7109/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7109/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5970
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5970/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5970/comments
|
https://api.github.com/repos/ollama/ollama/issues/5970/events
|
https://github.com/ollama/ollama/issues/5970
| 2,431,342,195
|
I_kwDOJ0Z1Ps6Q61Zz
| 5,970
|
run glm4 Error: llama runner process has terminated: signal: aborted (core dumped)
|
{
"login": "x-future",
"id": 23043471,
"node_id": "MDQ6VXNlcjIzMDQzNDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/23043471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/x-future",
"html_url": "https://github.com/x-future",
"followers_url": "https://api.github.com/users/x-future/followers",
"following_url": "https://api.github.com/users/x-future/following{/other_user}",
"gists_url": "https://api.github.com/users/x-future/gists{/gist_id}",
"starred_url": "https://api.github.com/users/x-future/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/x-future/subscriptions",
"organizations_url": "https://api.github.com/users/x-future/orgs",
"repos_url": "https://api.github.com/users/x-future/repos",
"events_url": "https://api.github.com/users/x-future/events{/privacy}",
"received_events_url": "https://api.github.com/users/x-future/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 9
| 2024-07-26T03:35:56
| 2024-07-29T16:34:37
| 2024-07-29T16:34:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Error: llama runner process has terminated: signal: aborted (core dumped)
```
# ollama run glm4
pulling manifest
pulling b506a070d115... 100% ▕█████████████████████████████████████████████████████████████████████████████████████▏ 5.5 GB
pulling e7e7aebd710c... 100% ▕█████████████████████████████████████████████████████████████████████████████████████▏ 137 B
pulling e4f0dc83900a... 100% ▕█████████████████████████████████████████████████████████████████████████████████████▏ 6.5 KB
pulling 4134f3eb0516... 100% ▕█████████████████████████████████████████████████████████████████████████████████████▏ 81 B
pulling ca0dd08dd282... 100% ▕█████████████████████████████████████████████████████████████████████████████████████▏ 489 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: llama runner process has terminated: signal: aborted (core dumped)
```
system info: LSB Version: core-11.1.0ubuntu4-noarch:security-11.1.0ubuntu4-noarch
gpu info:
<img width="760" alt="image" src="https://github.com/user-attachments/assets/accf2f79-e24b-4eb0-a094-525db8c31f96">
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5970/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/590
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/590/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/590/comments
|
https://api.github.com/repos/ollama/ollama/issues/590/events
|
https://github.com/ollama/ollama/pull/590
| 1,912,061,080
|
PR_kwDOJ0Z1Ps5bJ4WE
| 590
|
fix dkms install
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-25T18:28:28
| 2023-09-25T19:17:32
| 2023-09-25T19:17:32
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/590",
"html_url": "https://github.com/ollama/ollama/pull/590",
"diff_url": "https://github.com/ollama/ollama/pull/590.diff",
"patch_url": "https://github.com/ollama/ollama/pull/590.patch",
"merged_at": "2023-09-25T19:17:32"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/590/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6904
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6904/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6904/comments
|
https://api.github.com/repos/ollama/ollama/issues/6904/events
|
https://github.com/ollama/ollama/issues/6904
| 2,540,478,396
|
I_kwDOJ0Z1Ps6XbJ-8
| 6,904
|
Option to know number of running request in ollama
|
{
"login": "Jegatheesh001",
"id": 14847813,
"node_id": "MDQ6VXNlcjE0ODQ3ODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/14847813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jegatheesh001",
"html_url": "https://github.com/Jegatheesh001",
"followers_url": "https://api.github.com/users/Jegatheesh001/followers",
"following_url": "https://api.github.com/users/Jegatheesh001/following{/other_user}",
"gists_url": "https://api.github.com/users/Jegatheesh001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jegatheesh001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jegatheesh001/subscriptions",
"organizations_url": "https://api.github.com/users/Jegatheesh001/orgs",
"repos_url": "https://api.github.com/users/Jegatheesh001/repos",
"events_url": "https://api.github.com/users/Jegatheesh001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jegatheesh001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-09-21T19:23:52
| 2024-09-25T00:23:39
| 2024-09-25T00:23:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Option to know the number of running requests in Ollama
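Ollama does expose the set of currently loaded models through `GET /api/ps` (the same data `ollama ps` prints), though that is loaded models rather than in-flight requests. A minimal sketch of counting entries from that endpoint's response; the sample payload below is hypothetical, and in practice you would fetch the JSON over HTTP:

```python
import json

# Hypothetical sample of what GET http://localhost:11434/api/ps might return;
# in a real script you would fetch this with urllib or requests.
sample_ps_response = json.dumps({
    "models": [
        {"name": "llama2:latest", "size": 3826793677},
        {"name": "codellama:34b-instruct", "size": 19000000000},
    ]
})

def count_loaded_models(raw: str) -> int:
    """Count models currently loaded into memory (not in-flight requests)."""
    return len(json.loads(raw).get("models", []))

print(count_loaded_models(sample_ps_response))  # -> 2
```

Counting concurrent requests themselves would need server-side metrics that the API does not currently report.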
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6904/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/4139
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4139/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4139/comments
|
https://api.github.com/repos/ollama/ollama/issues/4139/events
|
https://github.com/ollama/ollama/issues/4139
| 2,278,414,912
|
I_kwDOJ0Z1Ps6HzdpA
| 4,139
|
only 1 GPU found -- regression 1.32 -> 1.33
|
{
"login": "AlexLJordan",
"id": 10133257,
"node_id": "MDQ6VXNlcjEwMTMzMjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/10133257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlexLJordan",
"html_url": "https://github.com/AlexLJordan",
"followers_url": "https://api.github.com/users/AlexLJordan/followers",
"following_url": "https://api.github.com/users/AlexLJordan/following{/other_user}",
"gists_url": "https://api.github.com/users/AlexLJordan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlexLJordan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlexLJordan/subscriptions",
"organizations_url": "https://api.github.com/users/AlexLJordan/orgs",
"repos_url": "https://api.github.com/users/AlexLJordan/repos",
"events_url": "https://api.github.com/users/AlexLJordan/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlexLJordan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 25
| 2024-05-03T20:58:34
| 2025-01-10T12:48:37
| 2024-05-21T15:24:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi everyone,
Sorry, I don't have much time to write in detail, but going from 1.32 to 1.33, this:
```
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: Tesla V100S-PCIE-32GB, compute capability 7.0, VMM: yes
Device 1: Tesla V100S-PCIE-32GB, compute capability 7.0, VMM: yes
Device 2: Tesla V100S-PCIE-32GB, compute capability 7.0, VMM: yes
llm_load_tensors: ggml ctx size = 0.45 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 70.31 MiB
llm_load_tensors: CUDA0 buffer size = 1194.53 MiB
llm_load_tensors: CUDA1 buffer size = 1194.53 MiB
llm_load_tensors: CUDA2 buffer size = 1188.49 MiB
```
changed into this:
```
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: Tesla V100S-PCIE-32GB, compute capability 7.0, VMM: yes
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 3 repeating layers to GPU
llm_load_tensors: offloaded 3/33 layers to GPU
llm_load_tensors: CPU buffer size = 3647.87 MiB
llm_load_tensors: CUDA0 buffer size = 325.78 MiB
```
1.33 hammers my CPU cores, is generally slower and doesn't even utilize the one GPU it *does* find properly.
I need the new concurrency features, so I'd really appreciate it if 1.33 worked on my machine.
Please help.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
1.33
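One thing worth ruling out when a multi-GPU machine is suddenly enumerated as a single device is the `CUDA_VISIBLE_DEVICES` environment variable, which Ollama, like any CUDA application, honours. This is only a guess at the cause here; a minimal sketch of interpreting the variable, assuming the conventional comma-separated index format:

```python
def visible_cuda_devices(env: dict) -> "list[str] | None":
    """Return the device indices CUDA will expose, or None if unrestricted.

    Unset means every device is visible; an empty string hides all GPUs.
    """
    value = env.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return None  # no restriction: all GPUs visible
    return [v for v in value.split(",") if v.strip()]

print(visible_cuda_devices({}))                             # -> None
print(visible_cuda_devices({"CUDA_VISIBLE_DEVICES": "0"}))  # -> ['0']
print(visible_cuda_devices({"CUDA_VISIBLE_DEVICES": ""}))   # -> []
```

If the variable is unset and the server logs still show only one device, the regression is in device discovery itself rather than the environment.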
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4139/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7448
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7448/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7448/comments
|
https://api.github.com/repos/ollama/ollama/issues/7448/events
|
https://github.com/ollama/ollama/issues/7448
| 2,626,605,042
|
I_kwDOJ0Z1Ps6cjs_y
| 7,448
|
Easily see latest version
|
{
"login": "jococo",
"id": 3506048,
"node_id": "MDQ6VXNlcjM1MDYwNDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3506048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jococo",
"html_url": "https://github.com/jococo",
"followers_url": "https://api.github.com/users/jococo/followers",
"following_url": "https://api.github.com/users/jococo/following{/other_user}",
"gists_url": "https://api.github.com/users/jococo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jococo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jococo/subscriptions",
"organizations_url": "https://api.github.com/users/jococo/orgs",
"repos_url": "https://api.github.com/users/jococo/repos",
"events_url": "https://api.github.com/users/jococo/events{/privacy}",
"received_events_url": "https://api.github.com/users/jococo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 1
| 2024-10-31T11:19:44
| 2024-11-01T15:45:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Would it be possible to show the version number of the latest Ollama release on the ollama.com website, so we don't have to click through to GitHub to find the info?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7448/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8352
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8352/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8352/comments
|
https://api.github.com/repos/ollama/ollama/issues/8352/events
|
https://github.com/ollama/ollama/pull/8352
| 2,776,467,113
|
PR_kwDOJ0Z1Ps6HIkMO
| 8,352
|
Add LangChain for .NET to libraries list
|
{
"login": "steveberdy",
"id": 86739818,
"node_id": "MDQ6VXNlcjg2NzM5ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/86739818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steveberdy",
"html_url": "https://github.com/steveberdy",
"followers_url": "https://api.github.com/users/steveberdy/followers",
"following_url": "https://api.github.com/users/steveberdy/following{/other_user}",
"gists_url": "https://api.github.com/users/steveberdy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steveberdy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steveberdy/subscriptions",
"organizations_url": "https://api.github.com/users/steveberdy/orgs",
"repos_url": "https://api.github.com/users/steveberdy/repos",
"events_url": "https://api.github.com/users/steveberdy/events{/privacy}",
"received_events_url": "https://api.github.com/users/steveberdy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2025-01-08T22:28:42
| 2025-01-14T17:37:35
| 2025-01-14T17:37:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8352",
"html_url": "https://github.com/ollama/ollama/pull/8352",
"diff_url": "https://github.com/ollama/ollama/pull/8352.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8352.patch",
"merged_at": "2025-01-14T17:37:35"
}
|
This is definitely not important, but for discoverability purposes, it would be nice to include the .NET LangChain library.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8352/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3414
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3414/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3414/comments
|
https://api.github.com/repos/ollama/ollama/issues/3414/events
|
https://github.com/ollama/ollama/pull/3414
| 2,216,376,711
|
PR_kwDOJ0Z1Ps5rOHbj
| 3,414
|
Add 'Knowledge Cutoff' column to model library table
|
{
"login": "saket3199",
"id": 57292901,
"node_id": "MDQ6VXNlcjU3MjkyOTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/57292901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saket3199",
"html_url": "https://github.com/saket3199",
"followers_url": "https://api.github.com/users/saket3199/followers",
"following_url": "https://api.github.com/users/saket3199/following{/other_user}",
"gists_url": "https://api.github.com/users/saket3199/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saket3199/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saket3199/subscriptions",
"organizations_url": "https://api.github.com/users/saket3199/orgs",
"repos_url": "https://api.github.com/users/saket3199/repos",
"events_url": "https://api.github.com/users/saket3199/events{/privacy}",
"received_events_url": "https://api.github.com/users/saket3199/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-03-30T10:27:21
| 2024-04-06T17:43:13
| 2024-03-31T17:11:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3414",
"html_url": "https://github.com/ollama/ollama/pull/3414",
"diff_url": "https://github.com/ollama/ollama/pull/3414.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3414.patch",
"merged_at": null
}
|
resolves #3412
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3414/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8631
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8631/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8631/comments
|
https://api.github.com/repos/ollama/ollama/issues/8631/events
|
https://github.com/ollama/ollama/issues/8631
| 2,815,555,450
|
I_kwDOJ0Z1Ps6n0fd6
| 8,631
|
Please provide information about the model license in the search model interface
|
{
"login": "cquike",
"id": 17937361,
"node_id": "MDQ6VXNlcjE3OTM3MzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17937361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cquike",
"html_url": "https://github.com/cquike",
"followers_url": "https://api.github.com/users/cquike/followers",
"following_url": "https://api.github.com/users/cquike/following{/other_user}",
"gists_url": "https://api.github.com/users/cquike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cquike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cquike/subscriptions",
"organizations_url": "https://api.github.com/users/cquike/orgs",
"repos_url": "https://api.github.com/users/cquike/repos",
"events_url": "https://api.github.com/users/cquike/events{/privacy}",
"received_events_url": "https://api.github.com/users/cquike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-28T12:42:38
| 2025-01-29T00:28:28
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
It would be useful to show the license of each model on the model search page https://ollama.com/search. Even better would be an option to filter by license.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8631/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/439
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/439/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/439/comments
|
https://api.github.com/repos/ollama/ollama/issues/439/events
|
https://github.com/ollama/ollama/pull/439
| 1,870,822,307
|
PR_kwDOJ0Z1Ps5Y_aUK
| 439
|
add model IDs
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-29T03:37:36
| 2023-08-29T03:50:25
| 2023-08-29T03:50:24
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/439",
"html_url": "https://github.com/ollama/ollama/pull/439",
"diff_url": "https://github.com/ollama/ollama/pull/439.diff",
"patch_url": "https://github.com/ollama/ollama/pull/439.patch",
"merged_at": "2023-08-29T03:50:24"
}
|
This change shows a portion (first 12 hex chars) of the sha256 sum of the manifest when running `ollama ls`. This makes it really easy at a glance to tell if two models are the same, and will make it easier in the future to match models inside of the ollama library.
It looks something like:
```
NAME ID SIZE MODIFIED
codellama:34b-instruct 901abb8f0f4b 19 GB 3 days ago
codellama:latest adf065e2ff94 3.8 GB 3 days ago
codeup:13b 400f83199325 7.4 GB 2 weeks ago
llama-mario:latest d5793f033f5c 7.3 GB 4 weeks ago
llama2:13b 156106c1e540 7.3 GB 4 weeks ago
llama2:latest 5c1a4ea68dd0 3.8 GB 9 hours ago
llama2-uncensored:7b-chat-q6_K 26cf13ee4cfe 5.5 GB 8 days ago
llama2-uncensored:latest 5823fb1154c5 3.8 GB 4 weeks ago
nous-hermes:latest bfba379045c1 7.3 GB 6 weeks ago
```
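The short IDs above are a truncation of the manifest's sha256 digest. A sketch of the idea follows; the exact manifest bytes Ollama hashes are an implementation detail, so the input here is illustrative:

```python
import hashlib

def short_model_id(manifest_bytes: bytes, length: int = 12) -> str:
    """First `length` hex chars of the sha256 digest of a model manifest."""
    return hashlib.sha256(manifest_bytes).hexdigest()[:length]

# Identical manifests yield identical IDs, so duplicate models are easy
# to spot at a glance in `ollama ls` output.
print(short_model_id(b'{"schemaVersion": 2}'))
```

Twelve hex characters (48 bits) keep collisions vanishingly unlikely for any realistic number of local models while staying readable in a table column.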
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/439/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7054
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7054/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7054/comments
|
https://api.github.com/repos/ollama/ollama/issues/7054/events
|
https://github.com/ollama/ollama/issues/7054
| 2,558,039,172
|
I_kwDOJ0Z1Ps6YeJSE
| 7,054
|
Support for Zamba2
|
{
"login": "hg0428",
"id": 45984899,
"node_id": "MDQ6VXNlcjQ1OTg0ODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/45984899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hg0428",
"html_url": "https://github.com/hg0428",
"followers_url": "https://api.github.com/users/hg0428/followers",
"following_url": "https://api.github.com/users/hg0428/following{/other_user}",
"gists_url": "https://api.github.com/users/hg0428/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hg0428/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hg0428/subscriptions",
"organizations_url": "https://api.github.com/users/hg0428/orgs",
"repos_url": "https://api.github.com/users/hg0428/repos",
"events_url": "https://api.github.com/users/hg0428/events{/privacy}",
"received_events_url": "https://api.github.com/users/hg0428/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 3
| 2024-10-01T02:37:49
| 2024-10-01T02:47:42
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Zamba2 is a really cool model that uses a hybrid Mamba-Transformer system.
https://huggingface.co/Zyphra/Zamba2-2.7B
https://www.zyphra.com/post/zamba2-small
I have been wanting to use this for a while and I would love it if Ollama could add this model soon.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7054/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7054/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8554
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8554/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8554/comments
|
https://api.github.com/repos/ollama/ollama/issues/8554/events
|
https://github.com/ollama/ollama/issues/8554
| 2,807,741,863
|
I_kwDOJ0Z1Ps6nWr2n
| 8,554
|
JSON With Ollama Library Contents
|
{
"login": "slyyyle",
"id": 78447050,
"node_id": "MDQ6VXNlcjc4NDQ3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/78447050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slyyyle",
"html_url": "https://github.com/slyyyle",
"followers_url": "https://api.github.com/users/slyyyle/followers",
"following_url": "https://api.github.com/users/slyyyle/following{/other_user}",
"gists_url": "https://api.github.com/users/slyyyle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slyyyle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slyyyle/subscriptions",
"organizations_url": "https://api.github.com/users/slyyyle/orgs",
"repos_url": "https://api.github.com/users/slyyyle/repos",
"events_url": "https://api.github.com/users/slyyyle/events{/privacy}",
"received_events_url": "https://api.github.com/users/slyyyle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-23T19:30:08
| 2025-01-23T19:30:08
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Would it be possible to add a JSON object that reflects all models contained in the library? I would prefer not to scrape against /search.
It could have info from the model card in the search, and the more specific info about it contained found on library/model_name.
It would be nice for many reasons - a UI Ollama Model Manager, matching model names with size modifiers {model_name}:{size}, which could then be supplemented by external information about the models with RAG and scraping, user testimonials, and parsed/organized by an LLM for guided iteration.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8554/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6808
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6808/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6808/comments
|
https://api.github.com/repos/ollama/ollama/issues/6808/events
|
https://github.com/ollama/ollama/issues/6808
| 2,526,609,178
|
I_kwDOJ0Z1Ps6WmP8a
| 6,808
|
qos, serving websites with the server, but when downloading a model...
|
{
"login": "remco-pc",
"id": 8077908,
"node_id": "MDQ6VXNlcjgwNzc5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8077908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remco-pc",
"html_url": "https://github.com/remco-pc",
"followers_url": "https://api.github.com/users/remco-pc/followers",
"following_url": "https://api.github.com/users/remco-pc/following{/other_user}",
"gists_url": "https://api.github.com/users/remco-pc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remco-pc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remco-pc/subscriptions",
"organizations_url": "https://api.github.com/users/remco-pc/orgs",
"repos_url": "https://api.github.com/users/remco-pc/repos",
"events_url": "https://api.github.com/users/remco-pc/events{/privacy}",
"received_events_url": "https://api.github.com/users/remco-pc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-09-14T21:13:19
| 2024-09-14T21:13:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If a big model gets downloaded, it downloads at full speed, slowing down other services. Can you add a throttle to limit the amount of bandwidth consumed by downloading a model?
I try to have websites running on that server, and they become unresponsive due to the model download (tested it with llama3.1:70b).
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6808/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6808/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3131
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3131/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3131/comments
|
https://api.github.com/repos/ollama/ollama/issues/3131/events
|
https://github.com/ollama/ollama/issues/3131
| 2,185,225,928
|
I_kwDOJ0Z1Ps6CP-bI
| 3,131
|
Clip model isn't being freed correctly
|
{
"login": "RandomGitUser321",
"id": 27916165,
"node_id": "MDQ6VXNlcjI3OTE2MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/27916165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RandomGitUser321",
"html_url": "https://github.com/RandomGitUser321",
"followers_url": "https://api.github.com/users/RandomGitUser321/followers",
"following_url": "https://api.github.com/users/RandomGitUser321/following{/other_user}",
"gists_url": "https://api.github.com/users/RandomGitUser321/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RandomGitUser321/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RandomGitUser321/subscriptions",
"organizations_url": "https://api.github.com/users/RandomGitUser321/orgs",
"repos_url": "https://api.github.com/users/RandomGitUser321/repos",
"events_url": "https://api.github.com/users/RandomGitUser321/events{/privacy}",
"received_events_url": "https://api.github.com/users/RandomGitUser321/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-03-14T01:55:07
| 2024-03-15T00:55:09
| 2024-03-14T20:35:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm on Windows and do a lot of things with models. Mostly a VLM->Get a detailed description of an image->Use a different LLM that's better at writing prompts to inject/mix my ideas in with->Stable diffusion->Image type workflow with ComfyUI. Obviously, I need all the VRAM I can get, but I sometimes run into scenarios where every megabyte of VRAM is precious(ipadapter, controlnets, etc)
I have my own custom nodes that I've made, which incorporate sending the command to unload the model after they are used, so that I don't run into any OOM/OOVM scenarios leading to using shared memory (destroys performance):
For example, it pretty much ultimately runs the command:
`client.generate(model=model, prompt=prompt, images=images_b64, system=system, options={'num_predict': num_predict, 'temperature': temperature, 'seed': seed}, keep_alive="0")`
Everything works fine and the `keep_alive = "0"` does indeed unload the model when it's done, but it seems like it leaves the `mmproj-model-f16` portion of the associated model still loaded in VRAM. Whatever VRAM load I was at before starting->loading->unloading remains higher until I exit out of the Ollama Windows icon in my task tray; freeing up the chunk that was trapped.
I've tested this with regular LLMs and the command does completely work and return my VRAM load back to what it was before I loaded the model.
EDIT: Oh, and I can also replicate this identical behaviour with a regular *.py script doing the same thing with a basic template.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3131/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3131/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4276
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4276/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4276/comments
|
https://api.github.com/repos/ollama/ollama/issues/4276/events
|
https://github.com/ollama/ollama/issues/4276
| 2,287,010,979
|
I_kwDOJ0Z1Ps6IUQSj
| 4,276
|
bge-m3
|
{
"login": "Mimicvat",
"id": 141440461,
"node_id": "U_kgDOCG41zQ",
"avatar_url": "https://avatars.githubusercontent.com/u/141440461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mimicvat",
"html_url": "https://github.com/Mimicvat",
"followers_url": "https://api.github.com/users/Mimicvat/followers",
"following_url": "https://api.github.com/users/Mimicvat/following{/other_user}",
"gists_url": "https://api.github.com/users/Mimicvat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mimicvat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mimicvat/subscriptions",
"organizations_url": "https://api.github.com/users/Mimicvat/orgs",
"repos_url": "https://api.github.com/users/Mimicvat/repos",
"events_url": "https://api.github.com/users/Mimicvat/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mimicvat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 5
| 2024-05-09T06:43:29
| 2024-05-21T14:08:16
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/vonjack/bge-m3-gguf
from: https://github.com/ggerganov/llama.cpp/issues/6007
I am looking for recommendations on a high-quality multilingual embedder that includes support for Portuguese. Anything better than https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2 would be nice.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4276/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4276/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6531
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6531/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6531/comments
|
https://api.github.com/repos/ollama/ollama/issues/6531/events
|
https://github.com/ollama/ollama/issues/6531
| 2,490,378,992
|
I_kwDOJ0Z1Ps6UcCrw
| 6,531
|
Prebuilt `ollama-linux-amd64.tgz` without cuda libs, please?
|
{
"login": "sevaseva",
"id": 1168195,
"node_id": "MDQ6VXNlcjExNjgxOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1168195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sevaseva",
"html_url": "https://github.com/sevaseva",
"followers_url": "https://api.github.com/users/sevaseva/followers",
"following_url": "https://api.github.com/users/sevaseva/following{/other_user}",
"gists_url": "https://api.github.com/users/sevaseva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sevaseva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sevaseva/subscriptions",
"organizations_url": "https://api.github.com/users/sevaseva/orgs",
"repos_url": "https://api.github.com/users/sevaseva/repos",
"events_url": "https://api.github.com/users/sevaseva/events{/privacy}",
"received_events_url": "https://api.github.com/users/sevaseva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-08-27T21:12:39
| 2024-12-02T11:34:44
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I occasionally update ollama on a linux box by downloading URLs like `https://github.com/ollama/ollama/releases/download/v0.3.7-rc6/ollama-linux-amd64.tgz` and extracting/overwriting files into a local directory (not into `/usr` as a root mind you, just into a local directory as a non-privileged user; that is how I prefer to use it).
I have the necessary CUDA libs installed on the system.
I don't care to use the libs distributed with Ollama to begin with (and if `bin/ollama` defaults to searching for libs in `../lib` first, I don't love that, but that's fine).
But I certainly don't care to download the same 1GB of libs every time I update.
(I wonder how many users are like me).
**I can haz a version of `linux-amd64` without cuda libs included in https://github.com/ollama/ollama/releases prebuilt assets?**
...or should I instead just `git pull` and build the binary from source whenever I want to update (which would be fine with me) or what would you guys recommend?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6531/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6531/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3992
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3992/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3992/comments
|
https://api.github.com/repos/ollama/ollama/issues/3992/events
|
https://github.com/ollama/ollama/issues/3992
| 2,267,334,707
|
I_kwDOJ0Z1Ps6HJMgz
| 3,992
|
how to config octopus on ollama ?
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-04-28T04:33:04
| 2024-05-26T13:44:45
| 2024-05-09T08:57:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This is the output of Octopus on my Mac. Does anyone know how to configure it for better output?
Set 'verbose' mode.
>>> hi
<nexa_end>
Response: <nexa_13>('hi')<nexa_end>
Function description:
def search_youtube_videos(query):
"""
Searches YouTube for videos matching a query.
Parameters:
- query (str): Search query.
Returns:
- list[str]: A list of strings, each string includes video names and
URLs.
"""
total duration: 1.008565s
load duration: 2.621042ms
prompt eval count: 9 token(s)
prompt eval duration: 109.909ms
prompt eval rate: 81.89 tokens/s
eval count: 82 token(s)
eval duration: 890.205ms
eval rate: 92.11 tokens/s
>>> /show modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM octopus-v2-q8:latest
FROM /Users/taozhiyu/.ollama/models/blobs/sha256-a85db45807a0d26b2c14753cea10f947a26196bde3770c95a2d0688b1bd6c127
TEMPLATE """{{ .System }}
USER: {{ .Prompt }}
ASSISTANT: """
PARAMETER num_ctx 4096
PARAMETER stop "</s>"
PARAMETER stop "USER:"
PARAMETER stop "\"ASSISTANT:\""
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3992/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4223
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4223/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4223/comments
|
https://api.github.com/repos/ollama/ollama/issues/4223/events
|
https://github.com/ollama/ollama/issues/4223
| 2,282,598,028
|
I_kwDOJ0Z1Ps6IDa6M
| 4,223
|
qwen:72b-chat-q4_K_S does not load
|
{
"login": "saddy001",
"id": 13658554,
"node_id": "MDQ6VXNlcjEzNjU4NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/13658554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saddy001",
"html_url": "https://github.com/saddy001",
"followers_url": "https://api.github.com/users/saddy001/followers",
"following_url": "https://api.github.com/users/saddy001/following{/other_user}",
"gists_url": "https://api.github.com/users/saddy001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saddy001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saddy001/subscriptions",
"organizations_url": "https://api.github.com/users/saddy001/orgs",
"repos_url": "https://api.github.com/users/saddy001/repos",
"events_url": "https://api.github.com/users/saddy001/events{/privacy}",
"received_events_url": "https://api.github.com/users/saddy001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-05-07T08:17:32
| 2024-07-25T18:33:59
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
While the model "qwen:72b" loads successfully, the model "qwen:72b-chat-q4_K_S" does not load. The loading spinner just doesn't stop, even after waiting a long time. Since the models occupy the same amount of memory (41 GB), I assume the RAM usage is roughly the same. Can somebody reproduce this?
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4223/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5030
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5030/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5030/comments
|
https://api.github.com/repos/ollama/ollama/issues/5030/events
|
https://github.com/ollama/ollama/pull/5030
| 2,351,920,701
|
PR_kwDOJ0Z1Ps5yZ8Ea
| 5,030
|
Update README.md
|
{
"login": "Drlordbasil",
"id": 126736516,
"node_id": "U_kgDOB43YhA",
"avatar_url": "https://avatars.githubusercontent.com/u/126736516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Drlordbasil",
"html_url": "https://github.com/Drlordbasil",
"followers_url": "https://api.github.com/users/Drlordbasil/followers",
"following_url": "https://api.github.com/users/Drlordbasil/following{/other_user}",
"gists_url": "https://api.github.com/users/Drlordbasil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Drlordbasil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Drlordbasil/subscriptions",
"organizations_url": "https://api.github.com/users/Drlordbasil/orgs",
"repos_url": "https://api.github.com/users/Drlordbasil/repos",
"events_url": "https://api.github.com/users/Drlordbasil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Drlordbasil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-06-13T19:38:21
| 2024-11-22T00:38:09
| 2024-11-21T08:35:51
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5030",
"html_url": "https://github.com/ollama/ollama/pull/5030",
"diff_url": "https://github.com/ollama/ollama/pull/5030.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5030.patch",
"merged_at": null
}
|
Add my embedding example for Ollama; it includes Groq API calls too. Is this allowed?
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5030/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3752
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3752/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3752/comments
|
https://api.github.com/repos/ollama/ollama/issues/3752/events
|
https://github.com/ollama/ollama/issues/3752
| 2,252,557,053
|
I_kwDOJ0Z1Ps6GQ0r9
| 3,752
|
command-r:latest run exception
|
{
"login": "zw6234336",
"id": 5389245,
"node_id": "MDQ6VXNlcjUzODkyNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5389245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zw6234336",
"html_url": "https://github.com/zw6234336",
"followers_url": "https://api.github.com/users/zw6234336/followers",
"following_url": "https://api.github.com/users/zw6234336/following{/other_user}",
"gists_url": "https://api.github.com/users/zw6234336/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zw6234336/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zw6234336/subscriptions",
"organizations_url": "https://api.github.com/users/zw6234336/orgs",
"repos_url": "https://api.github.com/users/zw6234336/repos",
"events_url": "https://api.github.com/users/zw6234336/events{/privacy}",
"received_events_url": "https://api.github.com/users/zw6234336/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-19T09:46:28
| 2024-05-10T00:11:44
| 2024-05-10T00:11:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
<img width="947" alt="Xnapper-2024-04-19-17 45 26" src="https://github.com/ollama/ollama/assets/5389245/4eeeee44-f4a1-4b8f-a202-ed78665d9772">
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.27
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3752/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5450
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5450/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5450/comments
|
https://api.github.com/repos/ollama/ollama/issues/5450/events
|
https://github.com/ollama/ollama/issues/5450
| 2,387,379,664
|
I_kwDOJ0Z1Ps6OTIXQ
| 5,450
|
Inference fails on AMD when using >1 GPU.
|
{
"login": "Speedway1",
"id": 100301611,
"node_id": "U_kgDOBfp7Kw",
"avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Speedway1",
"html_url": "https://github.com/Speedway1",
"followers_url": "https://api.github.com/users/Speedway1/followers",
"following_url": "https://api.github.com/users/Speedway1/following{/other_user}",
"gists_url": "https://api.github.com/users/Speedway1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Speedway1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Speedway1/subscriptions",
"organizations_url": "https://api.github.com/users/Speedway1/orgs",
"repos_url": "https://api.github.com/users/Speedway1/repos",
"events_url": "https://api.github.com/users/Speedway1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Speedway1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-07-03T00:18:37
| 2024-07-10T18:48:02
| 2024-07-10T18:48:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This is on AMD. I have 2 x Radeon 7900 XTX cards (24 GB each).
For models whose memory use fits on a single GPU, everything works fine.
As soon as both cards are required, inference fails with garbage output, as seen here:
```
ollama@TH-AI2:~$ ollama list
NAME ID SIZE MODIFIED
deepseek-coder-v2:latest 8577f96d693e 8.9 GB 10 days ago
codestral:latest fcc0019dcee9 12 GB 11 days ago
qwen2:latest e0d4e1163c58 4.4 GB 11 days ago
command-r:latest b8cdfff0263c 20 GB 11 days ago
mxbai-embed-large:latest 468836162de7 669 MB 11 days ago
llama3:70b 786f3184aec0 39 GB 11 days ago
phi3:14b-medium-128k-instruct-f16 e89861c3ba63 27 GB 11 days ago
ollama@TH-AI2:~$ ollama run command-r:latest
>>> Hello how are you?
???????????????????????????????
>>> /bye
```
Codestral is only 12 GB and runs on 1 GPU; it works fine:
```
ollama@TH-AI2:~$ ollama run command-r:latest
>>> Hello how are you?
???????????????????????????????
>>> /bye
ollama@TH-AI2:~$ ollama run codestral:latest
>>> Create a ruby script that counts from 1 to 100 and outputs to the console.
Here's a simple Ruby script that counts from 1 to 100 and outputs to the console:
```ruby
(1..100).each do |number|
puts number
end
```
The `(1..100)` creates a range of numbers from 1 to 100. The `each` method is then used to iterate over each number in the range. Finally, the `puts` method outputs the current number to the console.
```
phi3:14b-medium requires 2 GPUs for its 27 GB size, and it too outputs garbage:
```
ollama@TH-AI2:~$ ollama run phi3:14b-medium-128k-instruct-f16
>>> Hello how are you?
###############################
>>> Send a message (/? for help)
```
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.48
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5450/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3849
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3849/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3849/comments
|
https://api.github.com/repos/ollama/ollama/issues/3849/events
|
https://github.com/ollama/ollama/issues/3849
| 2,259,573,320
|
I_kwDOJ0Z1Ps6GrlpI
| 3,849
|
Ollama super slow on macOS M1 in Docker
|
{
"login": "rb81",
"id": 48117105,
"node_id": "MDQ6VXNlcjQ4MTE3MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/48117105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rb81",
"html_url": "https://github.com/rb81",
"followers_url": "https://api.github.com/users/rb81/followers",
"following_url": "https://api.github.com/users/rb81/following{/other_user}",
"gists_url": "https://api.github.com/users/rb81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rb81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rb81/subscriptions",
"organizations_url": "https://api.github.com/users/rb81/orgs",
"repos_url": "https://api.github.com/users/rb81/repos",
"events_url": "https://api.github.com/users/rb81/events{/privacy}",
"received_events_url": "https://api.github.com/users/rb81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-04-23T18:59:43
| 2024-11-12T23:34:10
| 2024-04-24T16:21:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama running natively on macOS is excellent.
Ollama running on Docker is about 50% slower.
(Unsure if this is a bug or config issue, but I am running default settings.)
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3849/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5854
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5854/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5854/comments
|
https://api.github.com/repos/ollama/ollama/issues/5854/events
|
https://github.com/ollama/ollama/pull/5854
| 2,423,220,386
|
PR_kwDOJ0Z1Ps52HC98
| 5,854
|
Refine error reporting for subprocess crash
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-22T15:56:13
| 2024-07-22T17:40:25
| 2024-07-22T17:40:22
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5854",
"html_url": "https://github.com/ollama/ollama/pull/5854",
"diff_url": "https://github.com/ollama/ollama/pull/5854.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5854.patch",
"merged_at": "2024-07-22T17:40:22"
}
|
On Windows, the exit status winds up being the term many users search for, and they end up piling onto unrelated issues. This refines the reporting so that if we have a more detailed message, we suppress the exit status portion of the message.
Example:
Before
```
> ollama run akuldatta/mistral-nemo-instruct-12b:q5km
Error: llama runner process has terminated: exit status 0xc0000409 error loading model: check_tensor_dims: tensor 'blk.0.attn_q.weight' has wrong shape; expected 5120, 5120, got 5120, 4096, 1, 1
```
After:
```
> ollama run akuldatta/mistral-nemo-instruct-12b:q5km
Error: llama runner process has terminated: error loading model: check_tensor_dims: tensor 'blk.0.attn_q.weight' has wrong shape; expected 5120, 5120, got 5120, 4096, 1, 1
```
This should reduce the amount of users posting unrelated problems on whatever open issue(s) happen to have `0xc0000409` in the title.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5854/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8500
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8500/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8500/comments
|
https://api.github.com/repos/ollama/ollama/issues/8500/events
|
https://github.com/ollama/ollama/issues/8500
| 2,798,912,420
|
I_kwDOJ0Z1Ps6m1AOk
| 8,500
|
When using GGUF files of Qwen2-VL, something goes wrong: Error: invalid file magic!
|
{
"login": "twythebest",
"id": 89891289,
"node_id": "MDQ6VXNlcjg5ODkxMjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/89891289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/twythebest",
"html_url": "https://github.com/twythebest",
"followers_url": "https://api.github.com/users/twythebest/followers",
"following_url": "https://api.github.com/users/twythebest/following{/other_user}",
"gists_url": "https://api.github.com/users/twythebest/gists{/gist_id}",
"starred_url": "https://api.github.com/users/twythebest/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/twythebest/subscriptions",
"organizations_url": "https://api.github.com/users/twythebest/orgs",
"repos_url": "https://api.github.com/users/twythebest/repos",
"events_url": "https://api.github.com/users/twythebest/events{/privacy}",
"received_events_url": "https://api.github.com/users/twythebest/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2025-01-20T10:50:55
| 2025-01-20T11:32:52
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have downloaded two files: mmproj-model-f32.gguf and Qwen2-VL-7B-Instruct-Q8_0.gguf. Here is my Modelfile:
FROM ./mmproj-model-f32.gguf
FROM ./Qwen2-VL-7B-Instruct-Q8_0.gguf
TEMPLATE """{{- range $index, $_ := .Messages }}<|start_header_id|>{{ .Role }}<|end_header_id|>
{{ .Content }}
{{- if gt (len (slice $.Messages $index)) 1 }}<|eot_id|>
{{- else if ne .Role "assistant" }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ end }}
{{- end }}"""
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
When I use the command `ollama create qwen2-vl -f config.txt`, the error shows: 'Error: invalid file magic'!
Please help me solve this problem, thanks!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8500/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/468
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/468/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/468/comments
|
https://api.github.com/repos/ollama/ollama/issues/468/events
|
https://github.com/ollama/ollama/issues/468
| 1,882,521,633
|
I_kwDOJ0Z1Ps5wNQAh
| 468
|
Add Refact model
|
{
"login": "Alainx277",
"id": 26800509,
"node_id": "MDQ6VXNlcjI2ODAwNTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/26800509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alainx277",
"html_url": "https://github.com/Alainx277",
"followers_url": "https://api.github.com/users/Alainx277/followers",
"following_url": "https://api.github.com/users/Alainx277/following{/other_user}",
"gists_url": "https://api.github.com/users/Alainx277/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alainx277/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alainx277/subscriptions",
"organizations_url": "https://api.github.com/users/Alainx277/orgs",
"repos_url": "https://api.github.com/users/Alainx277/repos",
"events_url": "https://api.github.com/users/Alainx277/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alainx277/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 3
| 2023-09-05T18:28:46
| 2024-12-23T00:53:16
| 2024-12-23T00:53:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
A new 1.6B-parameter model called "Refact" has been released.
[Blog post](https://refact.ai/blog/2023/introducing-refact-code-llm/)
[Hugging Face](https://huggingface.co/smallcloudai/Refact-1_6B-fim)
I tried adding it myself, but the llama.cpp scripts to convert to GGML format did not work. Keep in mind that I'm a novice in this area, and it may work with the correct arguments.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/468/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/468/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/406
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/406/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/406/comments
|
https://api.github.com/repos/ollama/ollama/issues/406/events
|
https://github.com/ollama/ollama/issues/406
| 1,865,900,465
|
I_kwDOJ0Z1Ps5vN2Gx
| 406
|
Model request: brand new “Code Llama” released by Facebook
|
{
"login": "strangelearning",
"id": 80677888,
"node_id": "MDQ6VXNlcjgwNjc3ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/80677888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/strangelearning",
"html_url": "https://github.com/strangelearning",
"followers_url": "https://api.github.com/users/strangelearning/followers",
"following_url": "https://api.github.com/users/strangelearning/following{/other_user}",
"gists_url": "https://api.github.com/users/strangelearning/gists{/gist_id}",
"starred_url": "https://api.github.com/users/strangelearning/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/strangelearning/subscriptions",
"organizations_url": "https://api.github.com/users/strangelearning/orgs",
"repos_url": "https://api.github.com/users/strangelearning/repos",
"events_url": "https://api.github.com/users/strangelearning/events{/privacy}",
"received_events_url": "https://api.github.com/users/strangelearning/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-08-24T21:18:52
| 2023-08-25T14:11:28
| 2023-08-24T22:16:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://ai.meta.com/blog/code-llama-large-language-model-coding/
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/406/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/406/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7966
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7966/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7966/comments
|
https://api.github.com/repos/ollama/ollama/issues/7966/events
|
https://github.com/ollama/ollama/issues/7966
| 2,722,613,300
|
I_kwDOJ0Z1Ps6iR8g0
| 7,966
|
ggml_cuda_cpy_fn: unsupported type combination (q4_0 to f32) in pre-release version
|
{
"login": "dkkb",
"id": 82504881,
"node_id": "MDQ6VXNlcjgyNTA0ODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/82504881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dkkb",
"html_url": "https://github.com/dkkb",
"followers_url": "https://api.github.com/users/dkkb/followers",
"following_url": "https://api.github.com/users/dkkb/following{/other_user}",
"gists_url": "https://api.github.com/users/dkkb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dkkb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkkb/subscriptions",
"organizations_url": "https://api.github.com/users/dkkb/orgs",
"repos_url": "https://api.github.com/users/dkkb/repos",
"events_url": "https://api.github.com/users/dkkb/events{/privacy}",
"received_events_url": "https://api.github.com/users/dkkb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-06T10:13:32
| 2024-12-07T00:44:16
| 2024-12-07T00:44:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm using this model `https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF` with the v0.5.0 pre-release.
After upgrading to the latest version, I was hoping to see improved performance. However, after making several API calls, I encountered the following error on the client side. I also noticed that GPU memory usage dropped to 0.
```
error making request: an error was encountered while running the model:
read tcp 127.0.0.1:3914->127.0.0.1:3890: wsarecv: An existing connection was forcibly closed by the remote host
```
Environment variables:
```
set OLLAMA_FLASH_ATTENTION=1
set OLLAMA_KV_CACHE_TYPE=q4_0
set CUDA_VISIBLE_DEVICES=xxx
set OLLAMA_HOST=0.0.0.0:11434
set OLLAMA_ORIGINS=*
```
Server error log:
```
ggml_cuda_cpy_fn: unsupported type combination (q4_0 to f32)
```
Maybe same issue with https://github.com/ggerganov/llama.cpp/issues/5652?
After disabling OLLAMA_KV_CACHE_TYPE=q4_0, it seems OK now.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
v0.5.0 pre-release
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7966/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7966/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/1799
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1799/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1799/comments
|
https://api.github.com/repos/ollama/ollama/issues/1799/events
|
https://github.com/ollama/ollama/pull/1799
| 2,066,634,912
|
PR_kwDOJ0Z1Ps5jRnFd
| 1,799
|
fix to use ARCH var on downloading cuda driver
|
{
"login": "gimslab",
"id": 1457044,
"node_id": "MDQ6VXNlcjE0NTcwNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1457044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gimslab",
"html_url": "https://github.com/gimslab",
"followers_url": "https://api.github.com/users/gimslab/followers",
"following_url": "https://api.github.com/users/gimslab/following{/other_user}",
"gists_url": "https://api.github.com/users/gimslab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gimslab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gimslab/subscriptions",
"organizations_url": "https://api.github.com/users/gimslab/orgs",
"repos_url": "https://api.github.com/users/gimslab/repos",
"events_url": "https://api.github.com/users/gimslab/events{/privacy}",
"received_events_url": "https://api.github.com/users/gimslab/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-01-05T02:31:04
| 2025-01-14T23:04:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1799",
"html_url": "https://github.com/ollama/ollama/pull/1799",
"diff_url": "https://github.com/ollama/ollama/pull/1799.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1799.patch",
"merged_at": null
}
|
I attempted to install Ollama on an AWS g5g instance running Ubuntu 22.04, but the install failed at the CUDA driver download step:
the NVIDIA driver download link uses 'arm64' where the install script passes 'aarch64'.
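The mismatch can be sketched as a small helper (hypothetical, not the actual install.sh code) that translates the machine name from `uname -m` into the token the download URL expects instead of passing "aarch64" through:

```shell
# map_arch: hypothetical helper translating `uname -m` output into the
# architecture token used in the NVIDIA driver download URL.
map_arch() {
  case "$1" in
    x86_64)  echo "x86_64" ;;
    aarch64) echo "arm64" ;;  # the download link uses "arm64", not "aarch64"
    *) echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}
```

On a g5g instance, `map_arch "$(uname -m)"` would then yield `arm64` for the URL.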
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1799/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2689
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2689/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2689/comments
|
https://api.github.com/repos/ollama/ollama/issues/2689/events
|
https://github.com/ollama/ollama/issues/2689
| 2,149,616,155
|
I_kwDOJ0Z1Ps6AIIob
| 2,689
|
Gemma model quantization or implementation seems botched
|
{
"login": "horiacristescu",
"id": 1104033,
"node_id": "MDQ6VXNlcjExMDQwMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/horiacristescu",
"html_url": "https://github.com/horiacristescu",
"followers_url": "https://api.github.com/users/horiacristescu/followers",
"following_url": "https://api.github.com/users/horiacristescu/following{/other_user}",
"gists_url": "https://api.github.com/users/horiacristescu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/horiacristescu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/horiacristescu/subscriptions",
"organizations_url": "https://api.github.com/users/horiacristescu/orgs",
"repos_url": "https://api.github.com/users/horiacristescu/repos",
"events_url": "https://api.github.com/users/horiacristescu/events{/privacy}",
"received_events_url": "https://api.github.com/users/horiacristescu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-02-22T17:57:44
| 2024-08-04T22:32:48
| 2024-02-23T01:06:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I tried the Gemma model today and it responds with:
- inconsistent formatting, such as using two commas instead of one
- inconsistent phrasing, such as a noun not being pluralized when it should be, or absurd phrases like "copyright infringement is being violated"

I tried the same model on labs.perplexity.ai and their version seems coherent.
I used `Gemma:7b-Instruct-Q5_K_M`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2689/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8470
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8470/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8470/comments
|
https://api.github.com/repos/ollama/ollama/issues/8470/events
|
https://github.com/ollama/ollama/issues/8470
| 2,795,437,782
|
I_kwDOJ0Z1Ps6mnv7W
| 8,470
|
ollama._types.ResponseError: timed out waiting for llama runner to start - progress 0.00 -
|
{
"login": "legendier",
"id": 116647945,
"node_id": "U_kgDOBvPoCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/116647945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/legendier",
"html_url": "https://github.com/legendier",
"followers_url": "https://api.github.com/users/legendier/followers",
"following_url": "https://api.github.com/users/legendier/following{/other_user}",
"gists_url": "https://api.github.com/users/legendier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/legendier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/legendier/subscriptions",
"organizations_url": "https://api.github.com/users/legendier/orgs",
"repos_url": "https://api.github.com/users/legendier/repos",
"events_url": "https://api.github.com/users/legendier/events{/privacy}",
"received_events_url": "https://api.github.com/users/legendier/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-17T13:05:17
| 2025-01-20T09:52:00
| 2025-01-20T09:52:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Loading a large model into GPU memory is very slow, and then this error occurs:
"**ollama._types.ResponseError: timed out waiting for llama runner to start - progress 0.00 -**"
It previously worked normally, but recently the large model has been failing to load.
Why is this?
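If the model genuinely takes a long time to load (slow disk I/O, very large model), one thing to try — assuming your Ollama build supports the `OLLAMA_LOAD_TIMEOUT` server variable; this is a hedged suggestion, not a confirmed fix for this report — is raising the runner start timeout before launching the server:
```
export OLLAMA_LOAD_TIMEOUT=15m
```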
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8470/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2163
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2163/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2163/comments
|
https://api.github.com/repos/ollama/ollama/issues/2163/events
|
https://github.com/ollama/ollama/pull/2163
| 2,096,921,001
|
PR_kwDOJ0Z1Ps5k4aou
| 2,163
|
Expose llm library and layer info in verbose output
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-23T20:24:48
| 2024-01-24T01:41:08
| 2024-01-24T01:40:52
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2163",
"html_url": "https://github.com/ollama/ollama/pull/2163",
"diff_url": "https://github.com/ollama/ollama/pull/2163.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2163.patch",
"merged_at": null
}
|
This wires up additional information in our verbose metrics so you can see which llm library was used, and how many layers were loaded into the GPU.
Example output in the CLI:
```
./ollama run orca-mini
>>> /set verbose
Set 'verbose' mode.
>>> hello
Hello, how can I assist you today?
total duration: 835.322625ms
load duration: 452.875µs
prompt eval count: 42 token(s)
prompt eval duration: 593.785ms
prompt eval rate: 70.73 tokens/s
eval count: 10 token(s)
eval duration: 240.374ms
eval rate: 41.60 tokens/s
llm library: metal
GPU loaded layers: 1/27
>>>
```
The JSON payload:
```json
{
"model": "orca-mini",
"created_at": "2024-01-23T20:23:02.168454Z",
"response": " Hello, what can I assist you with today?",
"done": true,
"context": [
31822,
...
],
"total_duration": 849287875,
"load_duration": 185542,
"prompt_eval_count": 42,
"prompt_eval_duration": 558791000,
"eval_count": 11,
"eval_duration": 290096000,
"runtime": {
"library": "metal",
"layers": 1,
"max_layers": 27
}
}
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2163/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3370
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3370/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3370/comments
|
https://api.github.com/repos/ollama/ollama/issues/3370/events
|
https://github.com/ollama/ollama/issues/3370
| 2,210,836,518
|
I_kwDOJ0Z1Ps6DxrAm
| 3,370
|
databricks-dbrx
|
{
"login": "Sparkenstein",
"id": 24642451,
"node_id": "MDQ6VXNlcjI0NjQyNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/24642451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sparkenstein",
"html_url": "https://github.com/Sparkenstein",
"followers_url": "https://api.github.com/users/Sparkenstein/followers",
"following_url": "https://api.github.com/users/Sparkenstein/following{/other_user}",
"gists_url": "https://api.github.com/users/Sparkenstein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sparkenstein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sparkenstein/subscriptions",
"organizations_url": "https://api.github.com/users/Sparkenstein/orgs",
"repos_url": "https://api.github.com/users/Sparkenstein/repos",
"events_url": "https://api.github.com/users/Sparkenstein/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sparkenstein/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 21
| 2024-03-27T13:39:40
| 2024-04-18T11:24:09
| 2024-04-17T15:45:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
Databricks just released a new model that is supposed to perform better than Mistral. IMO it would be a good addition.
https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm
https://huggingface.co/databricks/dbrx-instruct
_No response_
|
{
"login": "Sparkenstein",
"id": 24642451,
"node_id": "MDQ6VXNlcjI0NjQyNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/24642451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sparkenstein",
"html_url": "https://github.com/Sparkenstein",
"followers_url": "https://api.github.com/users/Sparkenstein/followers",
"following_url": "https://api.github.com/users/Sparkenstein/following{/other_user}",
"gists_url": "https://api.github.com/users/Sparkenstein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sparkenstein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sparkenstein/subscriptions",
"organizations_url": "https://api.github.com/users/Sparkenstein/orgs",
"repos_url": "https://api.github.com/users/Sparkenstein/repos",
"events_url": "https://api.github.com/users/Sparkenstein/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sparkenstein/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3370/reactions",
"total_count": 115,
"+1": 115,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3370/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1820
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1820/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1820/comments
|
https://api.github.com/repos/ollama/ollama/issues/1820/events
|
https://github.com/ollama/ollama/issues/1820
| 2,068,412,448
|
I_kwDOJ0Z1Ps57SXgg
| 1,820
|
Pulled SQLCoder2 even though it's not listed in the library
|
{
"login": "lestan",
"id": 1471736,
"node_id": "MDQ6VXNlcjE0NzE3MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1471736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lestan",
"html_url": "https://github.com/lestan",
"followers_url": "https://api.github.com/users/lestan/followers",
"following_url": "https://api.github.com/users/lestan/following{/other_user}",
"gists_url": "https://api.github.com/users/lestan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lestan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lestan/subscriptions",
"organizations_url": "https://api.github.com/users/lestan/orgs",
"repos_url": "https://api.github.com/users/lestan/repos",
"events_url": "https://api.github.com/users/lestan/events{/privacy}",
"received_events_url": "https://api.github.com/users/lestan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-06T05:56:58
| 2024-03-11T22:14:33
| 2024-03-11T22:14:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I wanted to test out sqlcoder2, but only saw sqlcoder on the [model library page](https://ollama.ai/library?sort=newest&q=llama).
I still tried `ollama pull sqlcoder2` to see what would happen... and it worked:
it pulled down a model named sqlcoder2:latest.
Is this an issue with the model library page not being up to date, or is it downloading sqlcoder (presumably v1) even though I asked for sqlcoder2?
Here's the output of the modelfile
```
lestan@Lestans-MacBook-Pro learn-text-to-sql % ollama show sqlcoder2 --modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM sqlcoder2:latest
FROM /Users/lestan/.ollama/models/blobs/sha256:4018b30faaf8b1e4cedad4dff4871f74e369950ddd25a0a4e8b0657a18710517
TEMPLATE """{{ .Prompt }}"""
PARAMETER stop "<|endoftext|>"
```
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1820/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4507
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4507/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4507/comments
|
https://api.github.com/repos/ollama/ollama/issues/4507/events
|
https://github.com/ollama/ollama/issues/4507
| 2,303,686,330
|
I_kwDOJ0Z1Ps6JT3a6
| 4,507
|
I hope ollama completes my command input.
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-17T23:33:49
| 2024-05-21T20:27:21
| 2024-05-21T20:27:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I hope Ollama can complete my command input.
For example, in the session below, when I press the TAB key I would like Ollama to complete to 'qwen:32b-chat-v1.5-q8_0'.
Thanks.
```
taozhiyu@192 ~ % ollama list
NAME                                    ID              SIZE    MODIFIED
qwen:32b-chat-v1.5-q8_0                 33c6cb647280    34 GB   3 days ago
llama3:70b-instruct-q8_0                d6fa8cffc283    74 GB   4 days ago
taozhiyuai/openbiollm-llama-3:70b-q8_0  881f678ac039    74 GB   8 days ago
taozhiyu@192 ~ % ollama run qwe
```
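In the meantime, the requested behavior can be approximated with a bash completion sketch (hypothetical, not something ollama ships) that completes model names from the output of `ollama list`:

```shell
# Hypothetical bash completion for ollama: complete local model names,
# taken from the first column of `ollama list` (skipping the header row).
_ollama_models() {
  local cur="${COMP_WORDS[COMP_CWORD]}"
  local models
  models="$(ollama list 2>/dev/null | awk 'NR > 1 { print $1 }')"
  COMPREPLY=( $(compgen -W "$models" -- "$cur") )
}
complete -F _ollama_models ollama
```

After sourcing this in `~/.bashrc`, typing `ollama run qwe<TAB>` would expand to `qwen:32b-chat-v1.5-q8_0` given the listing above.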
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4507/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7383
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7383/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7383/comments
|
https://api.github.com/repos/ollama/ollama/issues/7383/events
|
https://github.com/ollama/ollama/pull/7383
| 2,616,439,116
|
PR_kwDOJ0Z1Ps6AAgod
| 7,383
|
Add Swollama links to README.md
|
{
"login": "marcusziade",
"id": 47460844,
"node_id": "MDQ6VXNlcjQ3NDYwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/47460844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcusziade",
"html_url": "https://github.com/marcusziade",
"followers_url": "https://api.github.com/users/marcusziade/followers",
"following_url": "https://api.github.com/users/marcusziade/following{/other_user}",
"gists_url": "https://api.github.com/users/marcusziade/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcusziade/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcusziade/subscriptions",
"organizations_url": "https://api.github.com/users/marcusziade/orgs",
"repos_url": "https://api.github.com/users/marcusziade/repos",
"events_url": "https://api.github.com/users/marcusziade/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcusziade/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-10-27T09:10:28
| 2024-11-21T18:24:55
| 2024-11-20T18:49:15
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7383",
"html_url": "https://github.com/ollama/ollama/pull/7383",
"diff_url": "https://github.com/ollama/ollama/pull/7383.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7383.patch",
"merged_at": "2024-11-20T18:49:15"
}
|
This PR updates the README by adding a link to a feature-complete Swift client library I built called [Swollama](https://github.com/marcusziade/Swollama)
I have _extensive_ documentation in [DocC](https://marcusziade.github.io/Swollama/documentation/swollama/), and I already have a draft PR open for Linux and Docker support.
The feature-complete CLI is also purely written in Swift without dependencies and has a cyberpunk theme:


|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7383/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3180
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3180/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3180/comments
|
https://api.github.com/repos/ollama/ollama/issues/3180/events
|
https://github.com/ollama/ollama/issues/3180
| 2,190,097,551
|
I_kwDOJ0Z1Ps6CijyP
| 3,180
|
Add support for AMD iGPUs, such as gfx1103.
|
{
"login": "louwangzhiyuY",
"id": 6920071,
"node_id": "MDQ6VXNlcjY5MjAwNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6920071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louwangzhiyuY",
"html_url": "https://github.com/louwangzhiyuY",
"followers_url": "https://api.github.com/users/louwangzhiyuY/followers",
"following_url": "https://api.github.com/users/louwangzhiyuY/following{/other_user}",
"gists_url": "https://api.github.com/users/louwangzhiyuY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/louwangzhiyuY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louwangzhiyuY/subscriptions",
"organizations_url": "https://api.github.com/users/louwangzhiyuY/orgs",
"repos_url": "https://api.github.com/users/louwangzhiyuY/repos",
"events_url": "https://api.github.com/users/louwangzhiyuY/events{/privacy}",
"received_events_url": "https://api.github.com/users/louwangzhiyuY/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-03-16T15:23:03
| 2024-07-02T04:13:18
| 2024-03-16T18:15:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
_No response_
### How should we solve this?
_No response_
### What is the impact of not solving this?
_No response_
### Anything else?
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3180/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3180/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4622
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4622/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4622/comments
|
https://api.github.com/repos/ollama/ollama/issues/4622/events
|
https://github.com/ollama/ollama/pull/4622
| 2,316,271,617
|
PR_kwDOJ0Z1Ps5wgxVN
| 4,622
|
Update README.md
|
{
"login": "rajatrocks",
"id": 7295726,
"node_id": "MDQ6VXNlcjcyOTU3MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7295726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajatrocks",
"html_url": "https://github.com/rajatrocks",
"followers_url": "https://api.github.com/users/rajatrocks/followers",
"following_url": "https://api.github.com/users/rajatrocks/following{/other_user}",
"gists_url": "https://api.github.com/users/rajatrocks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajatrocks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajatrocks/subscriptions",
"organizations_url": "https://api.github.com/users/rajatrocks/orgs",
"repos_url": "https://api.github.com/users/rajatrocks/repos",
"events_url": "https://api.github.com/users/rajatrocks/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajatrocks/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-24T21:13:58
| 2024-11-21T08:38:03
| 2024-11-21T08:38:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4622",
"html_url": "https://github.com/ollama/ollama/pull/4622",
"diff_url": "https://github.com/ollama/ollama/pull/4622.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4622.patch",
"merged_at": null
}
|
Added the Ask Steve Chrome Extension, which enables you to connect Ollama: https://www.asksteve.to/docs/local-models#how-do-i-use-ollama-with-ask-steve
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4622/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1505
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1505/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1505/comments
|
https://api.github.com/repos/ollama/ollama/issues/1505/events
|
https://github.com/ollama/ollama/pull/1505
| 2,040,029,408
|
PR_kwDOJ0Z1Ps5h6gnD
| 1,505
|
set version string to current (pre)release
|
{
"login": "tohn",
"id": 427159,
"node_id": "MDQ6VXNlcjQyNzE1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/427159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tohn",
"html_url": "https://github.com/tohn",
"followers_url": "https://api.github.com/users/tohn/followers",
"following_url": "https://api.github.com/users/tohn/following{/other_user}",
"gists_url": "https://api.github.com/users/tohn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tohn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tohn/subscriptions",
"organizations_url": "https://api.github.com/users/tohn/orgs",
"repos_url": "https://api.github.com/users/tohn/repos",
"events_url": "https://api.github.com/users/tohn/events{/privacy}",
"received_events_url": "https://api.github.com/users/tohn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-12-13T16:06:18
| 2024-01-06T19:39:19
| 2023-12-13T16:15:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1505",
"html_url": "https://github.com/ollama/ollama/pull/1505",
"diff_url": "https://github.com/ollama/ollama/pull/1505.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1505.patch",
"merged_at": null
}
|
according to the GitHub tags and by using <https://semver.org>
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1505/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4308
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4308/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4308/comments
|
https://api.github.com/repos/ollama/ollama/issues/4308/events
|
https://github.com/ollama/ollama/issues/4308
| 2,288,937,781
|
I_kwDOJ0Z1Ps6Ibms1
| 4,308
|
I have uploaded this model, but it is not shown on my page.
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-05-10T05:04:33
| 2024-05-10T05:12:57
| 2024-05-10T05:12:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
<img width="1091" alt="截屏2024-05-10 13 00 11" src="https://github.com/ollama/ollama/assets/146583103/f809d253-4deb-4224-99f8-3a20501ad869">
I have uploaded this model, but it is not shown on my page.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
1.34
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4308/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3406
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3406/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3406/comments
|
https://api.github.com/repos/ollama/ollama/issues/3406/events
|
https://github.com/ollama/ollama/issues/3406
| 2,215,082,120
|
I_kwDOJ0Z1Ps6EB3iI
| 3,406
|
Official arm64 build does not work on Jetson Nano Orin
|
{
"login": "gab0220",
"id": 127881776,
"node_id": "U_kgDOB59SMA",
"avatar_url": "https://avatars.githubusercontent.com/u/127881776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gab0220",
"html_url": "https://github.com/gab0220",
"followers_url": "https://api.github.com/users/gab0220/followers",
"following_url": "https://api.github.com/users/gab0220/following{/other_user}",
"gists_url": "https://api.github.com/users/gab0220/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gab0220/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gab0220/subscriptions",
"organizations_url": "https://api.github.com/users/gab0220/orgs",
"repos_url": "https://api.github.com/users/gab0220/repos",
"events_url": "https://api.github.com/users/gab0220/events{/privacy}",
"received_events_url": "https://api.github.com/users/gab0220/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 21
| 2024-03-29T10:26:26
| 2024-09-13T12:34:00
| 2024-05-21T17:58:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello everyone, thank you for your work.
I'm using a Jetson Nano Orin. Following #3098, a few days ago I did a ```git checkout``` of the commit from #2279 and installed that version on my device. It works.
Today I tried to:
* Install the v0.1.30 using [this tutorial](https://github.com/ollama/ollama/blob/main/docs/tutorials/nvidia-jetson.md#running-ollama-on-nvidia-jetson-devices)
* Clean ```ollama list```
* Run ```ollama pull <model>```
* Run ```OLLAMA_DEBUG="1" ollama run <model>```
Output:
```
Error: Post "http://127.0.0.1:11434/api/chat": EOF
```
I also attach the output of ```journalctl -u ollama```:
```
Mar 29 11:16:09 ubuntu ollama[4168]: time=2024-03-29T11:16:09.687+01:00 level=INFO source=gpu.go:115 msg="Detecting GPU type"
Mar 29 11:16:09 ubuntu ollama[4168]: time=2024-03-29T11:16:09.687+01:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libcudart.so*"
Mar 29 11:16:09 ubuntu ollama[4168]: time=2024-03-29T11:16:09.692+01:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/tmp/ollama3349183846/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.2.140 /usr/local/cuda/targets/aarch64-linux/lib/libcudart.so.12.2.140 /usr/local/cuda-12/targets/aarch64-linux/lib/libcudart.so.12.2.140 /usr/local/cuda-12.2/targets/aarch64-linux/lib/libcudart.so.12.2.140]"
Mar 29 11:16:09 ubuntu ollama[4168]: time=2024-03-29T11:16:09.714+01:00 level=INFO source=gpu.go:120 msg="Nvidia GPU detected via cudart"
Mar 29 11:16:09 ubuntu ollama[4168]: time=2024-03-29T11:16:09.714+01:00 level=INFO source=cpu_common.go:18 msg="CPU does not have vector extensions"
Mar 29 11:16:09 ubuntu ollama[4168]: time=2024-03-29T11:16:09.801+01:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.7"
Mar 29 11:16:17 ubuntu systemd[1]: Stopping Ollama Service...
Mar 29 11:16:17 ubuntu systemd[1]: ollama.service: Deactivated successfully.
Mar 29 11:16:17 ubuntu systemd[1]: Stopped Ollama Service.
Mar 29 11:16:17 ubuntu systemd[1]: ollama.service: Consumed 9.601s CPU time.
```
### What did you expect to see?
I expected the model to run; as it is, I can't use the model.
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
Other
### Platform
_No response_
### Ollama version
v0.1.30
### GPU
Nvidia
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3406/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3406/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7194
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7194/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7194/comments
|
https://api.github.com/repos/ollama/ollama/issues/7194/events
|
https://github.com/ollama/ollama/pull/7194
| 2,584,278,438
|
PR_kwDOJ0Z1Ps5-dUf9
| 7,194
|
Update README.md - New Mobile Client
|
{
"login": "Calvicii",
"id": 80085756,
"node_id": "MDQ6VXNlcjgwMDg1NzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/80085756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Calvicii",
"html_url": "https://github.com/Calvicii",
"followers_url": "https://api.github.com/users/Calvicii/followers",
"following_url": "https://api.github.com/users/Calvicii/following{/other_user}",
"gists_url": "https://api.github.com/users/Calvicii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Calvicii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Calvicii/subscriptions",
"organizations_url": "https://api.github.com/users/Calvicii/orgs",
"repos_url": "https://api.github.com/users/Calvicii/repos",
"events_url": "https://api.github.com/users/Calvicii/events{/privacy}",
"received_events_url": "https://api.github.com/users/Calvicii/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-10-13T21:18:42
| 2024-11-21T07:58:45
| 2024-11-21T07:58:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7194",
"html_url": "https://github.com/ollama/ollama/pull/7194",
"diff_url": "https://github.com/ollama/ollama/pull/7194.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7194.patch",
"merged_at": null
}
|
Added my mobile Ollama client to the list.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7194/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2304
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2304/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2304/comments
|
https://api.github.com/repos/ollama/ollama/issues/2304/events
|
https://github.com/ollama/ollama/issues/2304
| 2,111,802,827
|
I_kwDOJ0Z1Ps59343L
| 2,304
|
Adding Yi-VL models
|
{
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/followers",
"following_url": "https://api.github.com/users/ddpasa/following{/other_user}",
"gists_url": "https://api.github.com/users/ddpasa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddpasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddpasa/subscriptions",
"organizations_url": "https://api.github.com/users/ddpasa/orgs",
"repos_url": "https://api.github.com/users/ddpasa/repos",
"events_url": "https://api.github.com/users/ddpasa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddpasa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-02-01T08:04:24
| 2024-11-15T09:13:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Yi LM [is supported in ollama](https://ollama.ai/library/yi), but I don't think the multimodal Yi-VL models are. These are supposed to be very good, so it would be great to have them.
Here are the huggingface links:
6B: https://huggingface.co/01-ai/Yi-VL-6B
34B: https://huggingface.co/01-ai/Yi-VL-34B
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2304/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/2304/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5046
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5046/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5046/comments
|
https://api.github.com/repos/ollama/ollama/issues/5046/events
|
https://github.com/ollama/ollama/pull/5046
| 2,353,725,529
|
PR_kwDOJ0Z1Ps5ygFhm
| 5,046
|
server: longer timeout in `TestRequests`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-14T16:37:12
| 2024-06-14T16:48:25
| 2024-06-14T16:48:25
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5046",
"html_url": "https://github.com/ollama/ollama/pull/5046",
"diff_url": "https://github.com/ollama/ollama/pull/5046.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5046.patch",
"merged_at": "2024-06-14T16:48:25"
}
|
@dhiltgen this seems like a band-aid - is there something deeper we should fix in this test?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5046/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2557
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2557/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2557/comments
|
https://api.github.com/repos/ollama/ollama/issues/2557/events
|
https://github.com/ollama/ollama/issues/2557
| 2,139,867,544
|
I_kwDOJ0Z1Ps5_i8mY
| 2,557
|
How can I use ollama in pycharm
|
{
"login": "Matrixsun",
"id": 11818446,
"node_id": "MDQ6VXNlcjExODE4NDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11818446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Matrixsun",
"html_url": "https://github.com/Matrixsun",
"followers_url": "https://api.github.com/users/Matrixsun/followers",
"following_url": "https://api.github.com/users/Matrixsun/following{/other_user}",
"gists_url": "https://api.github.com/users/Matrixsun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Matrixsun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Matrixsun/subscriptions",
"organizations_url": "https://api.github.com/users/Matrixsun/orgs",
"repos_url": "https://api.github.com/users/Matrixsun/repos",
"events_url": "https://api.github.com/users/Matrixsun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Matrixsun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-02-17T07:02:06
| 2024-05-17T22:42:34
| 2024-05-17T22:42:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi all. I want to use Ollama in PyCharm. How can I do that?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2557/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2941
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2941/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2941/comments
|
https://api.github.com/repos/ollama/ollama/issues/2941/events
|
https://github.com/ollama/ollama/issues/2941
| 2,170,179,604
|
I_kwDOJ0Z1Ps6BWlAU
| 2,941
|
Global Configuration Variables for Ollama
|
{
"login": "bkawakami",
"id": 1881935,
"node_id": "MDQ6VXNlcjE4ODE5MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1881935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bkawakami",
"html_url": "https://github.com/bkawakami",
"followers_url": "https://api.github.com/users/bkawakami/followers",
"following_url": "https://api.github.com/users/bkawakami/following{/other_user}",
"gists_url": "https://api.github.com/users/bkawakami/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bkawakami/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bkawakami/subscriptions",
"organizations_url": "https://api.github.com/users/bkawakami/orgs",
"repos_url": "https://api.github.com/users/bkawakami/repos",
"events_url": "https://api.github.com/users/bkawakami/events{/privacy}",
"received_events_url": "https://api.github.com/users/bkawakami/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2024-03-05T21:32:44
| 2025-01-30T00:53:36
| 2024-03-06T01:12:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am currently using Ollama for running LLMs locally and am greatly appreciative of the functionality it offers. However, I've come across a point of confusion regarding the global configuration of the Ollama environment, especially when it comes to setting it up for different use cases.
Could you provide more detailed information or documentation on the following aspects:
1. What are all the global configuration variables available for Ollama, and where can I find a comprehensive list?
2. Is there a way to set these configurations globally via a YAML file or a similar approach, rather than setting individual environment variables?
3. If YAML or similar file-based configurations are possible, could you provide an example of how to structure this file for different scenarios (e.g., different models, host configurations)?
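For what it's worth, Ollama's configuration is done through environment variables on the `ollama serve` process rather than a YAML file. A sketch of the common ones (variable names taken from the server documentation; exact defaults may differ by version):

```shell
# There is no built-in YAML config file; the server reads environment
# variables at startup. Commonly used ones:
export OLLAMA_HOST=0.0.0.0:11434          # address/port the server listens on
export OLLAMA_MODELS=/data/ollama/models  # where model blobs are stored

# For a systemd install, set them in an override instead:
#   sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"

ollama serve
```

To emulate file-based profiles, one could keep per-scenario `.env` files and source the appropriate one before starting the server.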
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2941/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/619
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/619/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/619/comments
|
https://api.github.com/repos/ollama/ollama/issues/619/events
|
https://github.com/ollama/ollama/issues/619
| 1,914,831,152
|
I_kwDOJ0Z1Ps5yIgEw
| 619
|
Segfault when using /show parameters
|
{
"login": "lstep",
"id": 2028,
"node_id": "MDQ6VXNlcjIwMjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lstep",
"html_url": "https://github.com/lstep",
"followers_url": "https://api.github.com/users/lstep/followers",
"following_url": "https://api.github.com/users/lstep/following{/other_user}",
"gists_url": "https://api.github.com/users/lstep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lstep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lstep/subscriptions",
"organizations_url": "https://api.github.com/users/lstep/orgs",
"repos_url": "https://api.github.com/users/lstep/repos",
"events_url": "https://api.github.com/users/lstep/events{/privacy}",
"received_events_url": "https://api.github.com/users/lstep/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-27T06:58:07
| 2023-09-28T21:25:24
| 2023-09-28T21:25:24
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
From a fresh install (`curl https://ollama.ai/install.sh | sh` on Ubuntu Linux 22.04) using `ollama run codeup:13b-llama2-chat-q4_K_M`, the model runs, but when I try `/show parameters` it generates a segfault:
```
>>> /list
NAME ID SIZE MODIFIED
codeup:13b-llama2-chat-q4_K_M d9c411941357 7.9 GB 12 hours ago
>>> /show parameters
error: couldn't get model
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xb06e2b]
goroutine 1 [running]:
github.com/jmorganca/ollama/cmd.generateInteractive(0xb043a7?, {0x7fff9c765523, 0x1d})
/go/src/github.com/jmorganca/ollama/cmd/cmd.go:660 +0x17cb
github.com/jmorganca/ollama/cmd.RunGenerate(0x7fff9c765523?, {0xc0003df100, 0x1, 0x1?})
/go/src/github.com/jmorganca/ollama/cmd/cmd.go:389 +0xcf
github.com/jmorganca/ollama/cmd.RunHandler(0xc0001d9c00?, {0xc0003df100?, 0x1, 0x1})
/go/src/github.com/jmorganca/ollama/cmd/cmd.go:145 +0x25c
github.com/spf13/cobra.(*Command).execute(0xc0003a5800, {0xc0003df0d0, 0x1, 0x1})
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x87c
github.com/spf13/cobra.(*Command).ExecuteC(0xc0003a4f00)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5
github.com/spf13/cobra.(*Command).Execute(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
/root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
/go/src/github.com/jmorganca/ollama/main.go:11 +0x4d
```
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/619/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3538
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3538/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3538/comments
|
https://api.github.com/repos/ollama/ollama/issues/3538/events
|
https://github.com/ollama/ollama/issues/3538
| 2,230,925,586
|
I_kwDOJ0Z1Ps6E-TkS
| 3,538
|
binary install on a cluster produces extra information in responses in both cpu and gpu mode
|
{
"login": "bozo32",
"id": 102033973,
"node_id": "U_kgDOBhTqNQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102033973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bozo32",
"html_url": "https://github.com/bozo32",
"followers_url": "https://api.github.com/users/bozo32/followers",
"following_url": "https://api.github.com/users/bozo32/following{/other_user}",
"gists_url": "https://api.github.com/users/bozo32/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bozo32/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bozo32/subscriptions",
"organizations_url": "https://api.github.com/users/bozo32/orgs",
"repos_url": "https://api.github.com/users/bozo32/repos",
"events_url": "https://api.github.com/users/bozo32/events{/privacy}",
"received_events_url": "https://api.github.com/users/bozo32/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-04-08T11:16:47
| 2024-06-22T00:12:52
| 2024-06-22T00:12:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I installed ollama on the university cluster following the instructions here:
The download page has a list of assets; one of them is a Linux binary named ollama-linux-amd64. Download it to your Linux cluster, then run the following:
- start the server in the background: `./ollama-linux-amd64 serve&`
- run a local model afterwards: `./ollama-linux-amd64 run llama2`

I first had to `chmod +x ollama-linux-amd64`, but after that it worked.
When running `./ollama-linux-amd64 run llama2`, everything worked fine (if slowly), but there is extra information in the responses.
When I then used sinteractive to grab a GPU (A100 80GB), I had to re-install everything, which was fine, but it again produced lots of extra information in the responses.
The session below starts in CPU mode:
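The "extra information" in the transcripts below looks like the server's own log output: because `serve&` shares the terminal with the interactive `run` session, the server's stderr interleaves with the model's replies. A sketch that keeps the same steps but redirects the server logs to a file (this is an assumption about the cause, not a confirmed fix):

```shell
# Start the server with its logs sent to a file instead of the terminal,
# so they cannot interleave with the interactive session's output:
./ollama-linux-amd64 serve > ollama.log 2>&1 &

# The REPL output should now contain only the model's responses:
./ollama-linux-amd64 run llama2

# Server diagnostics remain available for inspection:
tail -f ollama.log
```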
```
(base) tamas002@login0:~/ai$ wget https://github.com/ollama/ollama/releases/download/v0.1.30/ollama-linux-amd64
--2024-04-08 13:01:04-- https://github.com/ollama/ollama/releases/download/v0.1.30/ollama-linux-amd64Resolving github.com (github.com)... 140.82.121.3
Connecting to github.com (github.com)|140.82.121.3|:443... connected.
HTTP request sent, awaiting response... 302 FoundLocation: https://objects.githubusercontent.com/github-production-release-asset-2e65be/658928958/bdcdb212-95c5-426d-9879-9e5b50876d89?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240408%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240408T110105Z&X-Amz-Expires=300&X-Amz-Signature=08db18a78d027fd8a9cdbf030599bb52ee8b576f3cc397c3d5553c9ef4ce68ce&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=658928958&response-content-disposition=attachment%3B%20filename%3Dollama-linux-amd64&response-content-type=application%2Foctet-stream [following]
--2024-04-08 13:01:05-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/658928958/bdcdb212-95c5-426d-9879-9e5b50876d89?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240408%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240408T110105Z&X-Amz-Expires=300&X-Amz-Signature=08db18a78d027fd8a9cdbf030599bb52ee8b576f3cc397c3d5553c9ef4ce68ce&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=658928958&response-content-disposition=attachment%3B%20filename%3Dollama-linux-amd64&response-content-type=application%2Foctet-streamResolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.111.133|:443... connected.HTTP request sent, awaiting response... 200 OK
Length: 297108760 (283M) [application/octet-stream]Saving to: ‘ollama-linux-amd64’
ollama-linux-amd64 100%[===================================================>] 283.34M 265MB/s in 1.1s
2024-04-08 13:01:06 (265 MB/s) - ‘ollama-linux-amd64’ saved [297108760/297108760]
(base) tamas002@login0:~/ai$ chmod +x ollama-*
(base) tamas002@login0:~/ai$ ./ollama-linux-amd64 serve&
[1] 1387761(base) tamas002@login0:~/ai$ time=2024-04-08T13:01:31.191+02:00 level=INFO source=images.go:804 msg="total blobs: 114"
time=2024-04-08T13:01:33.095+02:00 level=INFO source=images.go:811 msg="total unused blobs removed: 95"time=2024-04-08T13:01:33.098+02:00 level=INFO source=routes.go:1118 msg="Listening on 127.0.0.1:11434 (version 0.1.30)"
time=2024-04-08T13:01:33.112+02:00 level=INFO source=payload_common.go:113 msg="Extracting dynamic libraries to /tmp/ollama1248928676/runners ..."
time=2024-04-08T13:01:36.039+02:00 level=INFO source=payload_common.go:140 msg="Dynamic LLM libraries [rocm_v60000 cuda_v11 cpu_avx cpu_avx2 cpu]"time=2024-04-08T13:01:36.039+02:00 level=INFO source=gpu.go:115 msg="Detecting GPU type"time=2024-04-08T13:01:36.039+02:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libcudart.so*"time=2024-04-08T13:01:36.041+02:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/tmp/ollama1248928676/runners/cuda_v11/libcudart.so.11.0]"time=2024-04-08T13:01:36.042+02:00 level=INFO source=gpu.go:340 msg="Unable to load cudart CUDA management library /tmp/ollama1248928676/runners/cuda_v11/libcudart.so.11.0: cudart init failure: 35"
time=2024-04-08T13:01:36.042+02:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"time=2024-04-08T13:01:36.044+02:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"time=2024-04-08T13:01:36.044+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:01:36.044+02:00 level=INFO source=routes.go:1141 msg="no GPU detected"
^C
(base) tamas002@login0:~/ai$ ps -x
PID TTY STAT TIME COMMAND1316477 ? S 0:00 /shared/webapps/jupyterhub/central/3.10.9-3.1.1/bin/python3 /shared/webapps/jupyterhub/central
1316478 ? Rl 0:23 /shared/webapps/jupyterhub/central/3.10.9-3.1.1/bin/python /shared/webapps/jupyterhub/central/
1316584 pts/0 Ss 0:00 /bin/bash -l
1387761 pts/0 Sl 0:05 ./ollama-linux-amd64 serve
1388237 pts/0 R+ 0:00 ps -x
(base) tamas002@login0:~/ai$ ./ollama-linux-amd64 run llama2
[GIN] 2024/04/08 - 13:03:00 | 200 | 103.664µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/04/08 - 13:03:00 | 200 | 4.81742ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/04/08 - 13:03:00 | 200 | 2.043975ms | 127.0.0.1 | POST "/api/show"
⠸ time=2024-04-08T13:03:00.764+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:03:00.764+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:03:00.764+02:00 level=INFO source=llm.go:85 msg="GPU not available, falling back to CPU"
loading library /tmp/ollama1248928676/runners/cpu_avx2/libext_server.so
time=2024-04-08T13:03:00.766+02:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1248928676/runners/cpu_avx2/libext_server.so"
time=2024-04-08T13:03:00.766+02:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /home/WUR/tamas002/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000,0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6,6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
⠼ llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.11 MiB
⠴ llm_load_tensors: CPU buffer size = 3647.87 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
⠙ llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CPU output buffer size = 70.50 MiB
llama_new_context_with_model: CPU compute buffer size = 164.00 MiB
llama_new_context_with_model: graph nodes = 1060
llama_new_context_with_model: graph splits = 1
⠹ {"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140593019721472","timestamp":1712574181}
{"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140593019721472","timestamp":1712574181}
time=2024-04-08T13:03:01.744+02:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1572,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140587255912192","timestamp":1712574181}
[GIN] 2024/04/08 - 13:03:01 | 200 | 1.282697676s | 127.0.0.1 | POST "/api/chat"
>>> hello?
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140587255912192","timestamp":1712574184}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1803,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":22,"slot_id":0,"task_id":0,"tid":"140587255912192","timestamp":1712574184}
{"function":"update_slots","level":"INFO","line":1830,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140587255912192","timestamp":1712574184}
Hello! It's nice to meet you. How are you today? Is there something I can help you with or would you like to chat?{"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 4261.18 ms / 22 tokens ( 193.69 ms per token, 5.16 tokens per second)","n_prompt_tokens_processed":22,"n_tokens_second":5.162895210827999,"slot_id":0,"t_prompt_processing":4261.175,"t_token":193.68977272727273,"task_id":0,"tid":"140587255912192","timestamp":1712574205}
{"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 16137.99 ms / 31 runs ( 520.58ms per token, 1.92 tokens per second)","n_decoded":31,"n_tokens_second":1.9209331521459612,"slot_id":0,"t_token":520.5803225806452,"t_token_generation":16137.99,"task_id":0,"tid":"140587255912192","timestamp":1712574205}
{"function":"print_timings","level":"INFO","line":289,"msg":" total time = 20399.17 ms","slot_id":0,"t_prompt_processing":4261.175,"t_token_generation":16137.99,"t_total":20399.165,"task_id":0,"tid":"140587255912192","timestamp":1712574205}
{"function":"update_slots","level":"INFO","line":1634,"msg":"slot released","n_cache_tokens":53,"n_ctx":2048,"n_past":52,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140587255912192","timestamp":1712574205,"truncated":false}
[GIN] 2024/04/08 - 13:03:25 | 200 | 20.402214217s | 127.0.0.1 | POST "/api/chat"
>>> /bye
(base) tamas002@login0:~/ai$ sinteractive -p gpu --gres=gpu:1 --accel-bind=g --cpus-per-gpu=1 --mem-per-cpu=96G
srun: job 51621252 queued and waiting for resources
srun: job 51621252 has been allocated resources
(base) tamas002@gpun203:~/ai$ ps -x
PID TTY STAT TIME COMMAND
3815 pts/0 SNs 0:00 /usr/bin/bash -i
3844 pts/0 RN+ 0:00 ps -x
(base) tamas002@gpun203:~/ai$ ./ollama-linux-amd64 run llama2
Error: could not connect to ollama app, is it running?
(base) tamas002@gpun203:~/ai$ ./ollama-linux-amd64 serve&
[1] 3856
(base) tamas002@gpun203:~/ai$ time=2024-04-08T13:04:36.181+02:00 level=INFO source=images.go:804 msg="total blobs: 19"
time=2024-04-08T13:04:36.185+02:00 level=INFO source=images.go:811 msg="total unused blobs removed: 0"
time=2024-04-08T13:04:36.186+02:00 level=INFO source=routes.go:1118 msg="Listening on 127.0.0.1:11434 (version 0.1.30)"
time=2024-04-08T13:04:36.221+02:00 level=INFO source=payload_common.go:113 msg="Extracting dynamic libraries to /tmp/ollama123086200/runners ..."
time=2024-04-08T13:04:41.078+02:00 level=INFO source=payload_common.go:140 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60000]"
time=2024-04-08T13:04:41.079+02:00 level=INFO source=gpu.go:115 msg="Detecting GPU type"
time=2024-04-08T13:04:41.079+02:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libcudart.so*"
time=2024-04-08T13:04:41.080+02:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/tmp/ollama123086200/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-08T13:04:43.008+02:00 level=INFO source=gpu.go:120 msg="Nvidia GPU detected via cudart"
time=2024-04-08T13:04:43.008+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:04:43.123+02:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.0"
^C
(base) tamas002@gpun203:~/ai$ ps -x
PID TTY STAT TIME COMMAND
3815 pts/0 SNs 0:00 /usr/bin/bash -i
3856 pts/0 SNl 0:06 ./ollama-linux-amd64 serve
3883 pts/0 RN+ 0:00 ps -x
(base) tamas002@gpun203:~/ai$ ./ollama-linux-amd64 run llama2
[GIN] 2024/04/08 - 13:05:17 | 200 | 52.452µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/04/08 - 13:05:17 | 200 | 1.755494ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/04/08 - 13:05:17 | 200 | 1.076623ms | 127.0.0.1 | POST "/api/show"
⠹ time=2024-04-08T13:05:17.589+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:05:17.589+02:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.0"
time=2024-04-08T13:05:17.590+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:05:17.590+02:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.0"
time=2024-04-08T13:05:17.590+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama123086200/runners/cuda_v11/libext_server.so
time=2024-04-08T13:05:17.595+02:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama123086200/runners/cuda_v11/libext_server.so"
time=2024-04-08T13:05:17.595+02:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /home/WUR/tamas002/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000,0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6,6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
⠸ llm_load_tensors: ggml ctx size = 0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 70.31 MiB
llm_load_tensors: CUDA0 buffer size = 3577.56 MiB
⠧ ........
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 70.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 164.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 12.00 MiB
llama_new_context_with_model: graph nodes = 1060
llama_new_context_with_model: graph splits = 2
⠸ {"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140309511649024","timestamp":1712574318}
{"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140309511649024","timestamp":1712574318}
time=2024-04-08T13:05:18.756+02:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
[GIN] 2024/04/08 - 13:05:18 | 200 | 1.385049293s | 127.0.0.1 | POST "/api/chat"
{"function":"update_slots","level":"INFO","line":1572,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140307546400512","timestamp":1712574318}
>>> hello?
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140307546400512","timestamp":1712574321}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1803,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":22,"slot_id":0,"task_id":0,"tid":"140307546400512","timestamp":1712574321}
{"function":"update_slots","level":"INFO","line":1830,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140307546400512","timestamp":1712574321}
Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?{"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 91.98 ms / 22 tokens ( 4.18 ms per token, 239.18 tokens per second)","n_prompt_tokens_processed":22,"n_tokens_second":239.1772303276728,"slot_id":0,"t_prompt_processing":91.982,"t_token":4.181,"task_id":0,"tid":"140307546400512","timestamp":1712574322}
{"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 188.19 ms / 26 runs ( 7.24ms per token, 138.16 tokens per second)","n_decoded":26,"n_tokens_second":138.1589784737684,"slot_id":0,"t_token":7.238038461538461,"t_token_generation":188.189,"task_id":0,"tid":"140307546400512","timestamp":1712574322}
{"function":"print_timings","level":"INFO","line":289,"msg":" total time = 280.17 ms","slot_id":0,"t_prompt_processing":91.982,"t_token_generation":188.189,"t_total":280.171,"task_id":0,"tid":"140307546400512","timestamp":1712574322}
[GIN] 2024/04/08 - 13:05:22 | 200 | 283.225416ms | 127.0.0.1 | POST "/api/chat"
>>> {"function":"update_slots","level":"INFO","line":1634,"msg":"slot released","n_cache_tokens":48,"n_ctx":2048,"n_past":47,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140307546400512","timestamp":1712574322,"truncated":false}
>>> Send a message (/? for help)
```
### What did you expect to see?
normal behaviour
### Steps to reproduce
session pasted above
### Are there any recent changes that introduced the issue?
no
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
1.30
### GPU
Nvidia
### GPU info
Mon Apr 8 13:07:26 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100 80G... Off | 00000000:CA:00.0 Off | 0 |
| N/A 40C P0 64W / 300W | 5511MiB / 81920MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 3856 C ./ollama-linux-amd64 5508MiB |
+-----------------------------------------------------------------------------+
### CPU
Intel
### Other software
absolutely nothing.
processor info
processor : 0-31
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
stepping : 4
microcode : 0x2007006
cpu MHz : 3066.403
cache size : 22528 KB
physical id : 1
siblings : 16
core id : 12
cpu cores : 16
apicid : 56
initial apicid : 56
fpu : yes
fpu_exception : yes
cpuid level : 22
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vlxsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d arch_capabilities
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit
bogomips : 4201.39
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3538/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5438
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5438/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5438/comments
|
https://api.github.com/repos/ollama/ollama/issues/5438/events
|
https://github.com/ollama/ollama/pull/5438
| 2,386,676,661
|
PR_kwDOJ0Z1Ps50ON1l
| 5,438
|
Centos 7 EOL broke mirrors
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-02T16:23:13
| 2024-07-02T16:28:02
| 2024-07-02T16:28:00
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5438",
"html_url": "https://github.com/ollama/ollama/pull/5438",
"diff_url": "https://github.com/ollama/ollama/pull/5438.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5438.patch",
"merged_at": "2024-07-02T16:28:00"
}
|
As of July 1st 2024, builds fail with `Could not resolve host: mirrorlist.centos.org`. This is expected given the CentOS 7 EOL dates.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5438/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/825
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/825/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/825/comments
|
https://api.github.com/repos/ollama/ollama/issues/825/events
|
https://github.com/ollama/ollama/pull/825
| 1,948,172,334
|
PR_kwDOJ0Z1Ps5dDqnz
| 825
|
relay CUDA errors to the client
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-17T20:13:05
| 2023-10-18T19:36:58
| 2023-10-18T19:36:57
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/825",
"html_url": "https://github.com/ollama/ollama/pull/825",
"diff_url": "https://github.com/ollama/ollama/pull/825.diff",
"patch_url": "https://github.com/ollama/ollama/pull/825.patch",
"merged_at": "2023-10-18T19:36:57"
}
|
When the llama.cpp runner failed with a CUDA error, the error message was not relayed to the client; the client would only see an EOF error. This change updates the llama.cpp subprocess log monitor to capture CUDA errors and relay them to the client.
Before:
```
Error: error reading llm response: unexpected EOF
```
After:
```
Error: llama runner exited, you may not have enough available memory to run this model
```
or, when available, the actual error message is relayed.
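The log-scanning idea can be sketched roughly as follows (a standalone sketch with hypothetical helper names, not ollama's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// friendlyLlamaError inspects a line of llama.cpp subprocess output and,
// if it looks like a CUDA failure, returns a message suitable for the client.
// An empty string means the line is ordinary log output.
func friendlyLlamaError(line string) string {
	if strings.Contains(line, "CUDA error") || strings.Contains(line, "cudaMalloc failed") {
		return "llama runner exited, you may not have enough available memory to run this model"
	}
	return ""
}

func main() {
	for _, line := range []string{
		"llm_load_tensors: offloaded 33/33 layers to GPU",
		"CUDA error: out of memory",
	} {
		if msg := friendlyLlamaError(line); msg != "" {
			fmt.Println("Error:", msg)
		}
	}
}
```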
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/825/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1210
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1210/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1210/comments
|
https://api.github.com/repos/ollama/ollama/issues/1210/events
|
https://github.com/ollama/ollama/pull/1210
| 2,002,716,154
|
PR_kwDOJ0Z1Ps5f8AmN
| 1,210
|
Add `user` to prompt template
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-11-20T17:54:13
| 2024-02-20T04:22:26
| 2024-02-20T04:22:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1210",
"html_url": "https://github.com/ollama/ollama/pull/1210",
"diff_url": "https://github.com/ollama/ollama/pull/1210.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1210.patch",
"merged_at": null
}
|
With the upcoming `messages` API change, the lack of symmetry between the `user` role and the `prompt` in the template is confusing. This change proposes adding `{{ .User }}` as an alternative to `{{ .Prompt }}` in the model template.
Here's an example:
```
FROM llama2
PARAMETER temperature 1
TEMPLATE """[INST] <<SYS>>{{ .System }}<</SYS>>
{{ .User }} [/INST]
"""
SYSTEM """
You are Mario from super mario bros, acting as an assistant.
"""
```
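Ollama templates are Go `text/template` strings, so the substitution proposed here can be tried out directly (a standalone sketch, not ollama's actual rendering code; `vars` and `renderPrompt` are illustrative names):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// vars mirrors the fields a Modelfile template can reference.
type vars struct {
	System string
	User   string
}

// renderPrompt parses a template string and fills in the given fields.
func renderPrompt(tmpl string, v vars) string {
	t := template.Must(template.New("prompt").Parse(tmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, v); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	out := renderPrompt("[INST] <<SYS>>{{ .System }}<</SYS>>\n{{ .User }} [/INST]\n", vars{
		System: "You are Mario from super mario bros, acting as an assistant.",
		User:   "who are you?",
	})
	fmt.Print(out)
}
```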
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1210/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2980
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2980/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2980/comments
|
https://api.github.com/repos/ollama/ollama/issues/2980/events
|
https://github.com/ollama/ollama/issues/2980
| 2,173,757,611
|
I_kwDOJ0Z1Ps6BkOir
| 2,980
|
Uninstall CLI ollama on Mac
|
{
"login": "X1AOX1A",
"id": 52992366,
"node_id": "MDQ6VXNlcjUyOTkyMzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/52992366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/X1AOX1A",
"html_url": "https://github.com/X1AOX1A",
"followers_url": "https://api.github.com/users/X1AOX1A/followers",
"following_url": "https://api.github.com/users/X1AOX1A/following{/other_user}",
"gists_url": "https://api.github.com/users/X1AOX1A/gists{/gist_id}",
"starred_url": "https://api.github.com/users/X1AOX1A/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/X1AOX1A/subscriptions",
"organizations_url": "https://api.github.com/users/X1AOX1A/orgs",
"repos_url": "https://api.github.com/users/X1AOX1A/repos",
"events_url": "https://api.github.com/users/X1AOX1A/events{/privacy}",
"received_events_url": "https://api.github.com/users/X1AOX1A/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-07T12:31:15
| 2024-05-31T04:22:41
| 2024-03-07T16:21:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How to uninstall CLI ollama on Mac?
|
{
"login": "X1AOX1A",
"id": 52992366,
"node_id": "MDQ6VXNlcjUyOTkyMzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/52992366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/X1AOX1A",
"html_url": "https://github.com/X1AOX1A",
"followers_url": "https://api.github.com/users/X1AOX1A/followers",
"following_url": "https://api.github.com/users/X1AOX1A/following{/other_user}",
"gists_url": "https://api.github.com/users/X1AOX1A/gists{/gist_id}",
"starred_url": "https://api.github.com/users/X1AOX1A/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/X1AOX1A/subscriptions",
"organizations_url": "https://api.github.com/users/X1AOX1A/orgs",
"repos_url": "https://api.github.com/users/X1AOX1A/repos",
"events_url": "https://api.github.com/users/X1AOX1A/events{/privacy}",
"received_events_url": "https://api.github.com/users/X1AOX1A/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2980/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1836
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1836/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1836/comments
|
https://api.github.com/repos/ollama/ollama/issues/1836/events
|
https://github.com/ollama/ollama/issues/1836
| 2,069,041,766
|
I_kwDOJ0Z1Ps57UxJm
| 1,836
|
Question: where are Ollama models saved on Linux (in WSL on Windows)?
|
{
"login": "zephirusgit",
"id": 20031912,
"node_id": "MDQ6VXNlcjIwMDMxOTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/20031912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zephirusgit",
"html_url": "https://github.com/zephirusgit",
"followers_url": "https://api.github.com/users/zephirusgit/followers",
"following_url": "https://api.github.com/users/zephirusgit/following{/other_user}",
"gists_url": "https://api.github.com/users/zephirusgit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zephirusgit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zephirusgit/subscriptions",
"organizations_url": "https://api.github.com/users/zephirusgit/orgs",
"repos_url": "https://api.github.com/users/zephirusgit/repos",
"events_url": "https://api.github.com/users/zephirusgit/events{/privacy}",
"received_events_url": "https://api.github.com/users/zephirusgit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-01-07T08:35:11
| 2024-03-11T20:42:30
| 2024-03-11T20:42:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello, I'm running Ollama in WSL (Windows Subsystem for Linux) on Windows. My question is: when I pull a new model (llama2, llava) or create one, where are the model files downloaded or copied — somewhere in the WSL Linux filesystem, or in Windows?
For example, I wanted to run the mixtral model, which takes up 26 GB, and wherever it is stored, creating a model from it seems to duplicate it, which I can't afford.
Does anyone know where those files end up?
Thanks in advance.
On Windows, llama2 and llava (describing images) run very well for me,
compared to another llava setup I ran before, which required 3 simultaneous processes taking up about 90 GB of RAM.
Any tip for finding the files is appreciated.
I saw that if I create models and then delete them, they are erased, but since I have very little disk space, I want to see how I can use them without duplicating them.
I'm thinking of moving the storage to another disk so I don't run out of space — I have very little left. Regards!
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1836/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6993
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6993/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6993/comments
|
https://api.github.com/repos/ollama/ollama/issues/6993/events
|
https://github.com/ollama/ollama/issues/6993
| 2,551,916,302
|
I_kwDOJ0Z1Ps6YGycO
| 6,993
|
llama3.1:70b CPU bottleneck?
|
{
"login": "jasonliuspark123",
"id": 71071196,
"node_id": "MDQ6VXNlcjcxMDcxMTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/71071196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jasonliuspark123",
"html_url": "https://github.com/jasonliuspark123",
"followers_url": "https://api.github.com/users/jasonliuspark123/followers",
"following_url": "https://api.github.com/users/jasonliuspark123/following{/other_user}",
"gists_url": "https://api.github.com/users/jasonliuspark123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jasonliuspark123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jasonliuspark123/subscriptions",
"organizations_url": "https://api.github.com/users/jasonliuspark123/orgs",
"repos_url": "https://api.github.com/users/jasonliuspark123/repos",
"events_url": "https://api.github.com/users/jasonliuspark123/events{/privacy}",
"received_events_url": "https://api.github.com/users/jasonliuspark123/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-09-27T03:30:11
| 2024-09-28T23:26:41
| 2024-09-28T23:26:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The CPU only uses one core at 100%, while the GPU cores mostly run at less than 20%.
The model is not responding at a good speed.
I'm wondering if this single-core CPU usage is the bottleneck for performance.
I have read https://github.com/ggerganov/llama.cpp/issues/8684, but have not seen an exact answer.


### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.10
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6993/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5033
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5033/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5033/comments
|
https://api.github.com/repos/ollama/ollama/issues/5033/events
|
https://github.com/ollama/ollama/pull/5033
| 2,352,045,586
|
PR_kwDOJ0Z1Ps5yaX11
| 5,033
|
Add ModifiedAt Field to /api/show
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-13T20:53:22
| 2024-06-16T03:53:57
| 2024-06-16T03:53:57
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5033",
"html_url": "https://github.com/ollama/ollama/pull/5033",
"diff_url": "https://github.com/ollama/ollama/pull/5033.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5033.patch",
"merged_at": "2024-06-16T03:53:56"
}
|
Changed the `model` variable name to `m` because it conflicted with the `model` package, whose `ParseName` function is used here.
E.g.
...
```
"template": "[INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST]",
"details": {
"parent_model": "",
"format": "gguf",
"family": "llama",
"families": [
"llama",
"clip"
],
"parameter_size": "7B",
"quantization_level": "Q4_0"
},
"modified_at": "2024-06-10T13:01:22.096005938-07:00"
}
```
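A minimal sketch of how a client might consume the new `modified_at` field from `/api/show`. The sample response below is abridged and its values are illustrative, not taken from a live server; `modified_at` is an RFC 3339 timestamp, which Python's `datetime.fromisoformat` can parse directly.

```python
import json
from datetime import datetime

# Abridged sample /api/show response after this change
# (field names from the PR; values are illustrative only).
sample = '''{
  "details": {"format": "gguf", "family": "llama", "parameter_size": "7B"},
  "modified_at": "2024-06-10T13:01:22.096005-07:00"
}'''

show = json.loads(sample)
# Parse the RFC 3339 timestamp, including its UTC offset.
modified = datetime.fromisoformat(show["modified_at"])
print(modified.year, show["details"]["family"])  # → 2024 llama
```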
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5033/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1181
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1181/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1181/comments
|
https://api.github.com/repos/ollama/ollama/issues/1181/events
|
https://github.com/ollama/ollama/issues/1181
| 2,000,001,657
|
I_kwDOJ0Z1Ps53NZp5
| 1,181
|
error: invalid cross-device link
|
{
"login": "0xRavenBlack",
"id": 71230759,
"node_id": "MDQ6VXNlcjcxMjMwNzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/71230759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0xRavenBlack",
"html_url": "https://github.com/0xRavenBlack",
"followers_url": "https://api.github.com/users/0xRavenBlack/followers",
"following_url": "https://api.github.com/users/0xRavenBlack/following{/other_user}",
"gists_url": "https://api.github.com/users/0xRavenBlack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0xRavenBlack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0xRavenBlack/subscriptions",
"organizations_url": "https://api.github.com/users/0xRavenBlack/orgs",
"repos_url": "https://api.github.com/users/0xRavenBlack/repos",
"events_url": "https://api.github.com/users/0xRavenBlack/events{/privacy}",
"received_events_url": "https://api.github.com/users/0xRavenBlack/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2023-11-17T22:06:17
| 2023-11-20T04:32:24
| 2023-11-18T05:54:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Description:
When attempting to create a new model from the Hugging Face model (https://huggingface.co/TheBloke/Leo-Mistral-Hessianai-7B-Chat-GGUF) with the following command:
`ollama create game-mistral-7b -f ./Modelfile`
an error occurs during the process, resulting in the following error message:
```
transferring context Error: rename /tmp/sha256:9fa68a1621f99d387fe0c7f70b47cfecda9be0e7a02255499beb13d240a092104036603730 /usr/share/ollam/.ollama/models/blobs/sha256:9fa68a1621f99d387fe0c7f70b47cfecda9be0e7a02255499beb13d240a09210: invalid cross-device link
```
Modelfile Contents:
```
FROM ./leo-mistral-hessianai-7b-chat.Q5_K_M.gguf
TEMPLATE """{{- if .System }}
<|im_start|>system {{ .System }}<|im_end|>
{{- end }}
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
# set the system prompt
SYSTEM """
"""
```
Additional Information:
* Model Used: [Leo-Mistral-Hessianai-7B-Chat-GGUF](https://huggingface.co/TheBloke/Leo-Mistral-Hessianai-7B-Chat-GGUF)
* Ollama Version: v0.1.10
* Operating System: Arch Linux
Steps to Reproduce:
* Execute the command: ollama create game-mistral-7b -f ./Modelfile
* Observe the error mentioned above.
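"invalid cross-device link" is the `EXDEV` errno, returned when `rename()` is asked to move a file across filesystems — here `/tmp` (often tmpfs) versus the models directory. Ollama itself is written in Go; the sketch below only illustrates the standard fallback pattern (copy, then delete) in Python:

```python
import errno
import os
import shutil
import tempfile

def move_blob(src: str, dst: str) -> None:
    """Rename src to dst, falling back to copy+delete when the two
    paths live on different filesystems (EXDEV, the error in this issue)."""
    try:
        os.rename(src, dst)
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        shutil.copy2(src, dst)  # copy across devices, preserving metadata
        os.remove(src)

# Demo within one filesystem (the rename path); the EXDEV branch only
# triggers when src and dst are on different mounts, e.g. tmpfs -> disk.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "blob.tmp")
    dst = os.path.join(d, "blob")
    with open(src, "w") as f:
        f.write("weights")
    move_blob(src, dst)
    moved = os.path.exists(dst) and not os.path.exists(src)
print(moved)  # → True
```

Setting `TMPDIR` to a directory on the same filesystem as the models directory is the usual workaround until the fallback lands.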
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1181/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6076
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6076/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6076/comments
|
https://api.github.com/repos/ollama/ollama/issues/6076/events
|
https://github.com/ollama/ollama/issues/6076
| 2,438,123,498
|
I_kwDOJ0Z1Ps6RUs_q
| 6,076
|
add mamba
|
{
"login": "windkwbs",
"id": 129468439,
"node_id": "U_kgDOB7eIFw",
"avatar_url": "https://avatars.githubusercontent.com/u/129468439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/windkwbs",
"html_url": "https://github.com/windkwbs",
"followers_url": "https://api.github.com/users/windkwbs/followers",
"following_url": "https://api.github.com/users/windkwbs/following{/other_user}",
"gists_url": "https://api.github.com/users/windkwbs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/windkwbs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/windkwbs/subscriptions",
"organizations_url": "https://api.github.com/users/windkwbs/orgs",
"repos_url": "https://api.github.com/users/windkwbs/repos",
"events_url": "https://api.github.com/users/windkwbs/events{/privacy}",
"received_events_url": "https://api.github.com/users/windkwbs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 4
| 2024-07-30T15:31:35
| 2024-10-01T02:46:39
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
[mamba-codestral-7B-v0.1](https://huggingface.co/mistralai/mamba-codestral-7B-v0.1)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6076/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6076/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/411
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/411/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/411/comments
|
https://api.github.com/repos/ollama/ollama/issues/411/events
|
https://github.com/ollama/ollama/pull/411
| 1,867,405,634
|
PR_kwDOJ0Z1Ps5Y0G85
| 411
|
patch llama.cpp for 34B
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-25T17:07:10
| 2023-08-25T18:59:06
| 2023-08-25T18:59:05
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/411",
"html_url": "https://github.com/ollama/ollama/pull/411",
"diff_url": "https://github.com/ollama/ollama/pull/411.diff",
"patch_url": "https://github.com/ollama/ollama/pull/411.patch",
"merged_at": "2023-08-25T18:59:05"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/411/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6475
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6475/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6475/comments
|
https://api.github.com/repos/ollama/ollama/issues/6475/events
|
https://github.com/ollama/ollama/issues/6475
| 2,482,966,667
|
I_kwDOJ0Z1Ps6T_xCL
| 6,475
|
The issue of high CPU utilization in Ollama
|
{
"login": "fenggaobj",
"id": 13727907,
"node_id": "MDQ6VXNlcjEzNzI3OTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/13727907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fenggaobj",
"html_url": "https://github.com/fenggaobj",
"followers_url": "https://api.github.com/users/fenggaobj/followers",
"following_url": "https://api.github.com/users/fenggaobj/following{/other_user}",
"gists_url": "https://api.github.com/users/fenggaobj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fenggaobj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fenggaobj/subscriptions",
"organizations_url": "https://api.github.com/users/fenggaobj/orgs",
"repos_url": "https://api.github.com/users/fenggaobj/repos",
"events_url": "https://api.github.com/users/fenggaobj/events{/privacy}",
"received_events_url": "https://api.github.com/users/fenggaobj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-08-23T11:46:39
| 2024-08-27T21:18:31
| 2024-08-27T21:18:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
"ollama run qwen2" command loads until timeout
**Seeking help:**
How can I resolve this high CPU utilization issue with Ollama?
Is it possible to configure JIT compilation to support multithreading?
**Please review the following analysis process.**
**(1) Environment and version information:**
Device: Nvidia Jetson AGX Orin
CPU: 12 cores with a frequency of 2.2GHZ
Memory: 64G
GPU: 1.3GHZ
Ubuntu 22.04.4 LTS
Ollama 0.3.6
**(2) Problem phenomenon:**
When running Ollama on the Orin to load the llama3.1 model, the command does not return. Checking CPU usage shows one CPU core's utilization consistently at 100%.

**(3) Problem analysis:**
By examining the problematic process, it is found to be:
`ollama 17202 98.1 0.6 66818556 394996 ? Rl 19:15 0:11 /tmp/ollama3440494107/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --numa numactl --parallel 4 --port 36539`
Using top -H -p 17202 to inspect, it is found that the main thread has very high CPU utilization.
```
 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17202 ollama 20 0 63.8g 393652 63108 R 99.9 0.6 0:57.73 ollama_llama_se
17203 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17204 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17205 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17206 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17207 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17208 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17209 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17210 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17211 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17212 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17213 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17214 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 ollama_llama_se
17221 ollama 20 0 63.8g 393652 63108 S 0.0 0.6 0:00.00 cuda-EvtHandlr
```
Using gdb to inspect the problematic thread, the issue is found in the following stack:
libnvidia-ptxjitcompiler is NVIDIA's Just-In-Time (JIT) compiler library; its job is to compile PTX (Parallel Thread Execution) code into GPU-executable machine code. The high CPU utilization is therefore caused by this JIT compilation running single-threaded on one core.
```
Thread 1 (Thread 0xffff8f6f3840 (LWP 17202) "ollama_llama_se"):
#0 0x0000ffff68ed3c28 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#1 0x0000ffff68f8633c in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#2 0x0000ffff68f879dc in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#3 0x0000ffff68f06a10 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#4 0x0000ffff68eec014 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#5 0x0000ffff6988ec84 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#6 0x0000ffff6988ecfc in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#7 0x0000ffff68dd4354 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#8 0x0000ffff68ddddd8 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#9 0x0000ffff68de22dc in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#10 0x0000ffff68de32b4 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#11 0x0000ffff68dd5cc8 in __cuda_CallJitEntryPoint () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#12 0x0000ffff76dffeec in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#13 0x0000ffff76e008e8 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#14 0x0000ffff76c02270 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#15 0x0000ffff76c2bfd4 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#16 0x0000ffff76b9c724 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#17 0x0000ffff76b9d5f0 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#18 0x0000ffff88cde99c in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#19 0x0000ffff88ccf5d8 in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#20 0x0000ffff88ce500c in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#21 0x0000ffff88ce65dc in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#22 0x0000ffff88ce6b3c in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#23 0x0000ffff88cdc56c in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#24 0x0000ffff88cc1514 in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#25 0x0000ffff88cf8f8c in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#26 0x0000ffff87f75098 in cublasCreate_v2 () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#27 0x0000000000538518 in ggml_cuda_mul_mat_batched_cublas(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*) ()
#28 0x00000000005412a4 in ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) ()
#29 0x00000000005117b8 in ggml_backend_sched_graph_compute_async ()
#30 0x000000000068d270 in llama_decode ()
#31 0x0000000000747040 in llama_init_from_gpt_params(gpt_params&) ()
#32 0x000000000047dfa0 in llama_server_context::load_model(gpt_params const&) ()
#33 0x000000000040f3ac in main ()
[Inferior 1 (process 17202) detached]
```
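One commonly suggested mitigation — an assumption here, not something verified on Jetson — is to enlarge and pin the CUDA JIT compute cache so the PTX compilation in the stack trace above only runs on the first load. `CUDA_CACHE_MAXSIZE` and `CUDA_CACHE_PATH` are documented CUDA driver environment variables; the cache path below is hypothetical. A sketch of passing them to a child process:

```python
import os

# Enlarge the CUDA JIT compiled-kernel cache (hedged: whether this helps
# depends on the Jetson driver; variable names are from NVIDIA's CUDA docs).
env = dict(os.environ)
env["CUDA_CACHE_MAXSIZE"] = str(4 * 1024**3)   # 4 GiB cache ceiling
env["CUDA_CACHE_PATH"] = "/var/cache/cuda-jit"  # hypothetical cache dir
# subprocess.Popen(["ollama", "serve"], env=env) would inherit these,
# so subsequent model loads reuse previously JIT-compiled kernels.
print(env["CUDA_CACHE_MAXSIZE"])  # → 4294967296
```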
### OS
Linux
### GPU
Nvidia
### CPU
Other
### Ollama version
0.3.6
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6475/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8068
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8068/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8068/comments
|
https://api.github.com/repos/ollama/ollama/issues/8068/events
|
https://github.com/ollama/ollama/issues/8068
| 2,735,516,995
|
I_kwDOJ0Z1Ps6jDK1D
| 8,068
|
0.5.2 does not use cuda on multi-gpu nvidia setups
|
{
"login": "frenzybiscuit",
"id": 190028151,
"node_id": "U_kgDOC1OZdw",
"avatar_url": "https://avatars.githubusercontent.com/u/190028151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frenzybiscuit",
"html_url": "https://github.com/frenzybiscuit",
"followers_url": "https://api.github.com/users/frenzybiscuit/followers",
"following_url": "https://api.github.com/users/frenzybiscuit/following{/other_user}",
"gists_url": "https://api.github.com/users/frenzybiscuit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frenzybiscuit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frenzybiscuit/subscriptions",
"organizations_url": "https://api.github.com/users/frenzybiscuit/orgs",
"repos_url": "https://api.github.com/users/frenzybiscuit/repos",
"events_url": "https://api.github.com/users/frenzybiscuit/events{/privacy}",
"received_events_url": "https://api.github.com/users/frenzybiscuit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 6678628138,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjhPHKg",
"url": "https://api.github.com/repos/ollama/ollama/labels/install",
"name": "install",
"color": "E0B88D",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-12-12T10:37:50
| 2024-12-13T19:57:10
| 2024-12-13T19:57:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Basically, title. 0.5.2 doesn't use CUDA (or the GPU at all) on multi-GPU setups; it falls back to CPU only.
Output below.
```
root@helga:/usr/share/ollama/.ollama# journalctl -u ollama --no-pager -f
Dec 12 02:31:35 sub.domain.tld ollama[95532]: [GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
Dec 12 02:31:35 sub.domain.tld ollama[95532]: [GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
Dec 12 02:31:35 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:35.192-08:00 level=INFO source=routes.go:1247 msg="Listening on 192.168.11.3:11434 (version 0.5.2-rc3-0-g581a4a5-dirty)"
Dec 12 02:31:35 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:35.193-08:00 level=INFO source=routes.go:1276 msg="Dynamic LLM libraries" runners=[cpu]
Dec 12 02:31:35 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:35.193-08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
Dec 12 02:31:35 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:35.790-08:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Dec 12 02:31:35 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:35.790-08:00 level=INFO source=amd_linux.go:333 msg="filtering out device per user request" id=0 visible_devices=[55]
Dec 12 02:31:35 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:35.790-08:00 level=INFO source=amd_linux.go:404 msg="no compatible amdgpu devices detected"
Dec 12 02:31:35 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:35.790-08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-884a8f95-c93b-627f-cf33-0d96d887005e library=cuda variant=v12 compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3090" total="23.6 GiB" available="23.3 GiB"
Dec 12 02:31:35 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:35.790-08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-e6b34a2a-1f14-42f9-519c-27ff284c7d2f library=cuda variant=v12 compute=7.5 driver=12.7 name="NVIDIA GeForce RTX 2080 Ti" total="10.6 GiB" available="10.4 GiB"
Dec 12 02:31:44 sub.domain.tld ollama[95532]: [GIN] 2024/12/12 - 02:31:44 | 200 | 649.912µs | 192.168.11.3 | GET "/v1/models"
Dec 12 02:31:44 sub.domain.tld ollama[95532]: [GIN] 2024/12/12 - 02:31:44 | 200 | 883.533µs | 192.168.11.3 | GET "/api/tags"
Dec 12 02:31:45 sub.domain.tld ollama[95532]: [GIN] 2024/12/12 - 02:31:45 | 200 | 54.07µs | 192.168.11.3 | GET "/api/version"
Dec 12 02:31:50 sub.domain.tld ollama[95532]: [GIN] 2024/12/12 - 02:31:50 | 200 | 629.322µs | 192.168.11.3 | GET "/v1/models"
Dec 12 02:31:50 sub.domain.tld ollama[95532]: [GIN] 2024/12/12 - 02:31:50 | 200 | 493.071µs | 192.168.11.3 | GET "/api/tags"
Dec 12 02:31:53 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:53.692-08:00 level=INFO source=server.go:104 msg="system memory" total="117.5 GiB" free="113.0 GiB" free_swap="20.0 GiB"
Dec 12 02:31:54 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:54.016-08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=256 layers.model=65 layers.offload=65 layers.split=49,16 memory.available="[23.3 GiB 10.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="33.3 GiB" memory.required.partial="33.3 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[23.1 GiB 10.2 GiB]" memory.weights.total="24.6 GiB" memory.weights.repeating="24.0 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="3.2 GiB" memory.graph.partial="3.2 GiB"
Dec 12 02:31:54 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:54.016-08:00 level=INFO source=server.go:223 msg="enabling flash attention"
Dec 12 02:31:54 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:54.017-08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-7536df478e5e293848ac8d80ef2e9b4bcae77fb8eaf5e8f2910664ebebbdb9ec --ctx-size 32768 --batch-size 512 --n-gpu-layers 256 --threads 16 --flash-attn --kv-cache-type q8_0 --parallel 1 --tensor-split 49,16 --port 39853"
Dec 12 02:31:54 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:54.017-08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Dec 12 02:31:54 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:54.017-08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
Dec 12 02:31:54 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:54.017-08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Dec 12 02:31:54 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:54.027-08:00 level=INFO source=runner.go:945 msg="starting go runner"
Dec 12 02:31:54 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:54.027-08:00 level=INFO source=runner.go:946 msg=system info="CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=16
Dec 12 02:31:54 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:54.027-08:00 level=INFO source=runner.go:1004 msg="Server listening on 127.0.0.1:39853"
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: loaded meta data with 34 key-value pairs and 771 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-7536df478e5e293848ac8d80ef2e9b4bcae77fb8eaf5e8f2910664ebebbdb9ec (version GGUF V3 (latest))
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 0: general.architecture str = qwen2
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 1: general.type str = model
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 2: general.name str = Qwen2.5 32B Instruct
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 3: general.finetune str = Instruct
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 4: general.basename str = Qwen2.5
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 5: general.size_label str = 32B
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 6: general.license str = apache-2.0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-3...
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 8: general.base_model.count u32 = 1
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 32B
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-32B
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 14: qwen2.block_count u32 = 64
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 27648
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 22: general.file_type u32 = 17
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - kv 33: general.quantization_version u32 = 2
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - type f32: 321 tensors
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - type q5_K: 385 tensors
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llama_model_loader: - type q6_K: 65 tensors
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_vocab: special tokens cache size = 22
Dec 12 02:31:54 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:54.268-08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_vocab: token to piece cache size = 0.9310 MB
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: format = GGUF V3 (latest)
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: arch = qwen2
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: vocab type = BPE
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_vocab = 152064
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_merges = 151387
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: vocab_only = 0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_ctx_train = 32768
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_embd = 5120
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_layer = 64
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_head = 40
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_head_kv = 8
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_rot = 128
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_swa = 0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_embd_head_k = 128
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_embd_head_v = 128
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_gqa = 5
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_embd_k_gqa = 1024
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_embd_v_gqa = 1024
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: f_norm_eps = 0.0e+00
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: f_logit_scale = 0.0e+00
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_ff = 27648
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_expert = 0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_expert_used = 0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: causal attn = 1
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: pooling type = 0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: rope type = 2
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: rope scaling = linear
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: freq_base_train = 1000000.0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: freq_scale_train = 1
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: n_ctx_orig_yarn = 32768
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: rope_finetuned = unknown
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: ssm_d_conv = 0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: ssm_d_inner = 0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: ssm_d_state = 0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: ssm_dt_rank = 0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: model type = 32B
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: model ftype = Q5_K - Medium
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: model params = 32.76 B
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: model size = 21.66 GiB (5.68 BPW)
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: general.name = Qwen2.5 32B Instruct
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: EOS token = 151645 '<|im_end|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: EOT token = 151645 '<|im_end|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: LF token = 148848 'ÄĬ'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: EOG token = 151645 '<|im_end|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
Dec 12 02:31:54 sub.domain.tld ollama[95842]: llm_load_print_meta: max token length = 256
Dec 12 02:31:54 sub.domain.tld ollama[95532]: [GIN] 2024/12/12 - 02:31:54 | 200 | 42.33µs | 192.168.11.3 | GET "/api/version"
Dec 12 02:31:55 sub.domain.tld ollama[95842]: llm_load_tensors: CPU_Mapped model buffer size = 22178.82 MiB
Dec 12 02:31:55 sub.domain.tld ollama[95842]: llama_new_context_with_model: n_seq_max = 1
Dec 12 02:31:55 sub.domain.tld ollama[95842]: llama_new_context_with_model: n_ctx = 32768
Dec 12 02:31:55 sub.domain.tld ollama[95842]: llama_new_context_with_model: n_ctx_per_seq = 32768
Dec 12 02:31:55 sub.domain.tld ollama[95842]: llama_new_context_with_model: n_batch = 512
Dec 12 02:31:55 sub.domain.tld ollama[95842]: llama_new_context_with_model: n_ubatch = 512
Dec 12 02:31:55 sub.domain.tld ollama[95842]: llama_new_context_with_model: flash_attn = 1
Dec 12 02:31:55 sub.domain.tld ollama[95842]: llama_new_context_with_model: freq_base = 1000000.0
Dec 12 02:31:55 sub.domain.tld ollama[95842]: llama_new_context_with_model: freq_scale = 1
Dec 12 02:31:56 sub.domain.tld ollama[95842]: llama_kv_cache_init: CPU KV buffer size = 4352.00 MiB
Dec 12 02:31:56 sub.domain.tld ollama[95842]: llama_new_context_with_model: KV self size = 4352.00 MiB, K (q8_0): 2176.00 MiB, V (q8_0): 2176.00 MiB
Dec 12 02:31:56 sub.domain.tld ollama[95842]: llama_new_context_with_model: CPU output buffer size = 0.60 MiB
Dec 12 02:31:56 sub.domain.tld ollama[95842]: llama_new_context_with_model: CPU compute buffer size = 307.00 MiB
Dec 12 02:31:56 sub.domain.tld ollama[95842]: llama_new_context_with_model: graph nodes = 1991
Dec 12 02:31:56 sub.domain.tld ollama[95842]: llama_new_context_with_model: graph splits = 1
Dec 12 02:31:57 sub.domain.tld ollama[95532]: time=2024-12-12T02:31:57.026-08:00 level=INFO source=server.go:594 msg="llama runner started in 3.01 seconds"
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: loaded meta data with 34 key-value pairs and 771 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-7536df478e5e293848ac8d80ef2e9b4bcae77fb8eaf5e8f2910664ebebbdb9ec (version GGUF V3 (latest))
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 0: general.architecture str = qwen2
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 1: general.type str = model
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 2: general.name str = Qwen2.5 32B Instruct
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 3: general.finetune str = Instruct
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 4: general.basename str = Qwen2.5
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 5: general.size_label str = 32B
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 6: general.license str = apache-2.0
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-3...
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 8: general.base_model.count u32 = 1
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 32B
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-32B
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 14: qwen2.block_count u32 = 64
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 27648
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 22: general.file_type u32 = 17
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - kv 33: general.quantization_version u32 = 2
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - type f32: 321 tensors
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - type q5_K: 385 tensors
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_loader: - type q6_K: 65 tensors
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_vocab: special tokens cache size = 22
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_vocab: token to piece cache size = 0.9310 MB
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: format = GGUF V3 (latest)
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: arch = qwen2
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: vocab type = BPE
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: n_vocab = 152064
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: n_merges = 151387
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: vocab_only = 1
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: model type = ?B
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: model ftype = all F32
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: model params = 32.76 B
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: model size = 21.66 GiB (5.68 BPW)
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: general.name = Qwen2.5 32B Instruct
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: EOS token = 151645 '<|im_end|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: EOT token = 151645 '<|im_end|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: LF token = 148848 'ÄĬ'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: EOG token = 151645 '<|im_end|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llm_load_print_meta: max token length = 256
Dec 12 02:31:57 sub.domain.tld ollama[95532]: llama_model_load: vocab only - skipping tensors
Dec 12 02:32:03 sub.domain.tld ollama[95532]: [GIN] 2024/12/12 - 02:32:03 | 200 | 741.052µs | 192.168.11.3 | GET "/v1/models"
Dec 12 02:32:03 sub.domain.tld ollama[95532]: [GIN] 2024/12/12 - 02:32:03 | 200 | 598.192µs | 192.168.11.3 | GET "/api/tags"
Dec 12 02:32:03 sub.domain.tld ollama[95532]: time=2024-12-12T02:32:03.560-08:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-884a8f95-c93b-627f-cf33-0d96d887005e library=cuda total="23.6 GiB" available="23.3 GiB"
Dec 12 02:32:03 sub.domain.tld ollama[95532]: time=2024-12-12T02:32:03.560-08:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-e6b34a2a-1f14-42f9-519c-27ff284c7d2f library=cuda total="10.6 GiB" available="10.4 GiB"
```
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.2
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8068/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1360
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1360/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1360/comments
|
https://api.github.com/repos/ollama/ollama/issues/1360/events
|
https://github.com/ollama/ollama/pull/1360
| 2,022,380,507
|
PR_kwDOJ0Z1Ps5g-hHn
| 1,360
|
Add link to Ollama Modelfiles repository
|
{
"login": "tusharhero",
"id": 54012021,
"node_id": "MDQ6VXNlcjU0MDEyMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/54012021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tusharhero",
"html_url": "https://github.com/tusharhero",
"followers_url": "https://api.github.com/users/tusharhero/followers",
"following_url": "https://api.github.com/users/tusharhero/following{/other_user}",
"gists_url": "https://api.github.com/users/tusharhero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tusharhero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tusharhero/subscriptions",
"organizations_url": "https://api.github.com/users/tusharhero/orgs",
"repos_url": "https://api.github.com/users/tusharhero/repos",
"events_url": "https://api.github.com/users/tusharhero/events{/privacy}",
"received_events_url": "https://api.github.com/users/tusharhero/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-12-03T05:41:10
| 2023-12-05T10:40:52
| 2023-12-05T05:09:38
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1360",
"html_url": "https://github.com/ollama/ollama/pull/1360",
"diff_url": "https://github.com/ollama/ollama/pull/1360.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1360.patch",
"merged_at": null
}
| null |
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1360/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6000
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6000/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6000/comments
|
https://api.github.com/repos/ollama/ollama/issues/6000/events
|
https://github.com/ollama/ollama/issues/6000
| 2,433,025,574
|
I_kwDOJ0Z1Ps6RBQYm
| 6,000
|
Cli broken with the new tools update
|
{
"login": "anandanand84dv",
"id": 170383551,
"node_id": "U_kgDOCifYvw",
"avatar_url": "https://avatars.githubusercontent.com/u/170383551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anandanand84dv",
"html_url": "https://github.com/anandanand84dv",
"followers_url": "https://api.github.com/users/anandanand84dv/followers",
"following_url": "https://api.github.com/users/anandanand84dv/following{/other_user}",
"gists_url": "https://api.github.com/users/anandanand84dv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anandanand84dv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anandanand84dv/subscriptions",
"organizations_url": "https://api.github.com/users/anandanand84dv/orgs",
"repos_url": "https://api.github.com/users/anandanand84dv/repos",
"events_url": "https://api.github.com/users/anandanand84dv/events{/privacy}",
"received_events_url": "https://api.github.com/users/anandanand84dv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-07-26T21:53:44
| 2024-07-26T21:57:48
| 2024-07-26T21:57:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After the new tools implementation, it errors out on the second question.
```Error: template: :28:7: executing "" at <.ToolCalls>: can't evaluate field ToolCalls in type *api.Message```

### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.5
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6000/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2521
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2521/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2521/comments
|
https://api.github.com/repos/ollama/ollama/issues/2521/events
|
https://github.com/ollama/ollama/issues/2521
| 2,137,406,899
|
I_kwDOJ0Z1Ps5_Zj2z
| 2,521
|
Restart to update shows twice on Windows
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-02-15T20:37:34
| 2024-02-17T01:23:38
| 2024-02-17T01:23:38
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2521/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2014
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2014/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2014/comments
|
https://api.github.com/repos/ollama/ollama/issues/2014/events
|
https://github.com/ollama/ollama/issues/2014
| 2,083,517,238
|
I_kwDOJ0Z1Ps58L_M2
| 2,014
|
How to make output consistent
|
{
"login": "Fei-Wang",
"id": 11441526,
"node_id": "MDQ6VXNlcjExNDQxNTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/11441526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fei-Wang",
"html_url": "https://github.com/Fei-Wang",
"followers_url": "https://api.github.com/users/Fei-Wang/followers",
"following_url": "https://api.github.com/users/Fei-Wang/following{/other_user}",
"gists_url": "https://api.github.com/users/Fei-Wang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fei-Wang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fei-Wang/subscriptions",
"organizations_url": "https://api.github.com/users/Fei-Wang/orgs",
"repos_url": "https://api.github.com/users/Fei-Wang/repos",
"events_url": "https://api.github.com/users/Fei-Wang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fei-Wang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-01-16T10:03:32
| 2024-01-27T01:07:24
| 2024-01-27T01:07:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Setting the seed and temperature does not make the output consistent.
<img width="1087" alt="image" src="https://github.com/jmorganca/ollama/assets/11441526/9a00ac1f-c120-4211-9b2e-fcec627f69e1">
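A sketch of the request shape usually suggested for reproducible output (hypothetical helper name): fix `seed` and set `temperature` to 0 in the `options` object so sampling is greedy, which should make repeated runs match.

```python
def deterministic_options(seed: int) -> dict:
    # Fixing the seed and zeroing temperature makes sampling greedy,
    # so repeated requests should produce identical completions.
    return {"seed": seed, "temperature": 0}

request = {
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "options": deterministic_options(42),
    "stream": False,
}
```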
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2014/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1951
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1951/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1951/comments
|
https://api.github.com/repos/ollama/ollama/issues/1951/events
|
https://github.com/ollama/ollama/issues/1951
| 2,078,868,898
|
I_kwDOJ0Z1Ps576QWi
| 1,951
|
Ollama GPU Process does not automatically terminate after inactivity
|
{
"login": "chereszabor",
"id": 7354324,
"node_id": "MDQ6VXNlcjczNTQzMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7354324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chereszabor",
"html_url": "https://github.com/chereszabor",
"followers_url": "https://api.github.com/users/chereszabor/followers",
"following_url": "https://api.github.com/users/chereszabor/following{/other_user}",
"gists_url": "https://api.github.com/users/chereszabor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chereszabor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chereszabor/subscriptions",
"organizations_url": "https://api.github.com/users/chereszabor/orgs",
"repos_url": "https://api.github.com/users/chereszabor/repos",
"events_url": "https://api.github.com/users/chereszabor/events{/privacy}",
"received_events_url": "https://api.github.com/users/chereszabor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-01-12T13:36:15
| 2024-01-18T16:58:53
| 2024-01-18T16:58:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I noticed that with recent releases the ollama process is not automatically terminated after a period of inactivity, leaving the GPU process idling and keeping the last-used model in VRAM. This also increases the time required to load a new model into VRAM and raises the GPU's 'standby' power usage.
I am deploying ollama via Docker and tested with the latest version v0.1.20.
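For reference, a sketch of the per-request knob that governs this (hypothetical helper name): the `keep_alive` field on a request controls how long the model stays resident after it is used.

```python
def with_keep_alive(payload: dict, keep_alive) -> dict:
    # "5m" unloads the model after five idle minutes, "0" unloads it
    # immediately after the response, and a negative value keeps it
    # loaded indefinitely.
    out = dict(payload)
    out["keep_alive"] = keep_alive
    return out

req = with_keep_alive({"model": "llama2", "prompt": "hi"}, "5m")
```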
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1951/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/621
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/621/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/621/comments
|
https://api.github.com/repos/ollama/ollama/issues/621/events
|
https://github.com/ollama/ollama/pull/621
| 1,915,255,182
|
PR_kwDOJ0Z1Ps5bUqpR
| 621
|
Added missing return preventing SIGSEGV because of missing resp
|
{
"login": "lstep",
"id": 2028,
"node_id": "MDQ6VXNlcjIwMjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lstep",
"html_url": "https://github.com/lstep",
"followers_url": "https://api.github.com/users/lstep/followers",
"following_url": "https://api.github.com/users/lstep/following{/other_user}",
"gists_url": "https://api.github.com/users/lstep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lstep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lstep/subscriptions",
"organizations_url": "https://api.github.com/users/lstep/orgs",
"repos_url": "https://api.github.com/users/lstep/repos",
"events_url": "https://api.github.com/users/lstep/events{/privacy}",
"received_events_url": "https://api.github.com/users/lstep/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-27T10:49:31
| 2023-09-28T21:25:23
| 2023-09-28T21:25:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/621",
"html_url": "https://github.com/ollama/ollama/pull/621",
"diff_url": "https://github.com/ollama/ollama/pull/621.diff",
"patch_url": "https://github.com/ollama/ollama/pull/621.patch",
"merged_at": "2023-09-28T21:25:23"
}
|
Closes #619
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/621/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6401
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6401/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6401/comments
|
https://api.github.com/repos/ollama/ollama/issues/6401/events
|
https://github.com/ollama/ollama/issues/6401
| 2,471,629,115
|
I_kwDOJ0Z1Ps6TUhE7
| 6,401
|
embeddings models keep_alive
|
{
"login": "Abdulrahman392011",
"id": 175052671,
"node_id": "U_kgDOCm8Xfw",
"avatar_url": "https://avatars.githubusercontent.com/u/175052671?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdulrahman392011",
"html_url": "https://github.com/Abdulrahman392011",
"followers_url": "https://api.github.com/users/Abdulrahman392011/followers",
"following_url": "https://api.github.com/users/Abdulrahman392011/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdulrahman392011/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdulrahman392011/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdulrahman392011/subscriptions",
"organizations_url": "https://api.github.com/users/Abdulrahman392011/orgs",
"repos_url": "https://api.github.com/users/Abdulrahman392011/repos",
"events_url": "https://api.github.com/users/Abdulrahman392011/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdulrahman392011/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-17T18:46:16
| 2024-08-17T23:29:43
| 2024-08-17T23:29:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I use embedding models a lot, and every time Ollama loads the model, runs the vectorization, and then unloads it immediately. When I try to keep it alive with this command
$ curl http://localhost:11434/api/generate -d '{"model": "mxbai-embed-large:latest", "keep_alive": -1}'
it tells me that this model isn't a generative model and refuses to keep it alive. Please add support for this to reduce latency: copying the 600 megabytes in and out on every call adds a couple of seconds to an operation that should take only one.
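A sketch of what the requested behavior could look like (hypothetical `embed_request` helper; honoring `keep_alive` on the embeddings endpoint is exactly what this issue asks for, so the field here is an assumption, not confirmed API behavior):

```python
def embed_request(model: str, text: str, keep_alive=-1) -> dict:
    # keep_alive=-1 would ask the server to keep the embedding model
    # resident between calls instead of reloading it each time.
    return {"model": model, "prompt": text, "keep_alive": keep_alive}

req = embed_request("mxbai-embed-large:latest", "hello world")
```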
|
{
"login": "Abdulrahman392011",
"id": 175052671,
"node_id": "U_kgDOCm8Xfw",
"avatar_url": "https://avatars.githubusercontent.com/u/175052671?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdulrahman392011",
"html_url": "https://github.com/Abdulrahman392011",
"followers_url": "https://api.github.com/users/Abdulrahman392011/followers",
"following_url": "https://api.github.com/users/Abdulrahman392011/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdulrahman392011/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdulrahman392011/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdulrahman392011/subscriptions",
"organizations_url": "https://api.github.com/users/Abdulrahman392011/orgs",
"repos_url": "https://api.github.com/users/Abdulrahman392011/repos",
"events_url": "https://api.github.com/users/Abdulrahman392011/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdulrahman392011/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6401/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6401/timeline
| null |
completed
| false
|