Column dtypes: url string (len 51–54) · repository_url string (1 class) · labels_url string (len 65–68) · comments_url string (len 60–63) · events_url string (len 58–61) · html_url string (len 39–44) · id int64 (1.78B–2.82B) · node_id string (len 18–19) · number int64 (1–8.69k) · title string (len 1–382) · user dict · labels list (len 0–5) · state string (2 classes) · locked bool (1 class) · assignee dict · assignees list (len 0–2) · milestone null · comments int64 (0–323) · created_at timestamp[s] · updated_at timestamp[s] · closed_at timestamp[s] · author_association string (4 classes) · sub_issues_summary dict · active_lock_reason null · draft bool (2 classes) · pull_request dict · body string (len 2–118k, nullable ⌀) · closed_by dict · reactions dict · timeline_url string (len 60–63) · performed_via_github_app null · state_reason string (4 classes) · is_pull_request bool (2 classes)

| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/4997
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4997/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4997/comments
|
https://api.github.com/repos/ollama/ollama/issues/4997/events
|
https://github.com/ollama/ollama/pull/4997
| 2,348,010,141
|
PR_kwDOJ0Z1Ps5yMkNk
| 4,997
|
Initial commit
|
{
"login": "enzoxic",
"id": 157711992,
"node_id": "U_kgDOCWZ-eA",
"avatar_url": "https://avatars.githubusercontent.com/u/157711992?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoxic",
"html_url": "https://github.com/enzoxic",
"followers_url": "https://api.github.com/users/enzoxic/followers",
"following_url": "https://api.github.com/users/enzoxic/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoxic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoxic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoxic/subscriptions",
"organizations_url": "https://api.github.com/users/enzoxic/orgs",
"repos_url": "https://api.github.com/users/enzoxic/repos",
"events_url": "https://api.github.com/users/enzoxic/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoxic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-12T07:18:06
| 2024-06-12T21:11:29
| 2024-06-12T21:11:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4997",
"html_url": "https://github.com/ollama/ollama/pull/4997",
"diff_url": "https://github.com/ollama/ollama/pull/4997.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4997.patch",
"merged_at": null
}
|
A basic AIOS project that uses Ollama to manage, in concert with other M1-friendly models, the prototype I am developing.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4997/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1104
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1104/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1104/comments
|
https://api.github.com/repos/ollama/ollama/issues/1104/events
|
https://github.com/ollama/ollama/pull/1104
| 1,989,586,253
|
PR_kwDOJ0Z1Ps5fPjob
| 1,104
|
add jupyter notebook example
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-12T21:37:13
| 2023-11-17T22:46:27
| 2023-11-17T22:46:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1104",
"html_url": "https://github.com/ollama/ollama/pull/1104",
"diff_url": "https://github.com/ollama/ollama/pull/1104.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1104.patch",
"merged_at": "2023-11-17T22:46:26"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1104/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7276
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7276/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7276/comments
|
https://api.github.com/repos/ollama/ollama/issues/7276/events
|
https://github.com/ollama/ollama/issues/7276
| 2,600,433,330
|
I_kwDOJ0Z1Ps6a_3ay
| 7,276
|
bitnet is more energy efficient than llama.cpp
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-20T12:54:20
| 2024-10-24T18:24:04
| 2024-10-24T18:24:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/microsoft/BitNet?tab=readme-ov-file
Can Ollama switch to that to save more power?
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7276/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 7,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7276/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2034
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2034/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2034/comments
|
https://api.github.com/repos/ollama/ollama/issues/2034/events
|
https://github.com/ollama/ollama/issues/2034
| 2,086,826,586
|
I_kwDOJ0Z1Ps58YnJa
| 2,034
|
Not running on gpu
|
{
"login": "DragonBtc93",
"id": 36767505,
"node_id": "MDQ6VXNlcjM2NzY3NTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/36767505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DragonBtc93",
"html_url": "https://github.com/DragonBtc93",
"followers_url": "https://api.github.com/users/DragonBtc93/followers",
"following_url": "https://api.github.com/users/DragonBtc93/following{/other_user}",
"gists_url": "https://api.github.com/users/DragonBtc93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DragonBtc93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DragonBtc93/subscriptions",
"organizations_url": "https://api.github.com/users/DragonBtc93/orgs",
"repos_url": "https://api.github.com/users/DragonBtc93/repos",
"events_url": "https://api.github.com/users/DragonBtc93/events{/privacy}",
"received_events_url": "https://api.github.com/users/DragonBtc93/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-17T19:20:47
| 2024-01-26T21:20:07
| 2024-01-26T21:19:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm an Ubuntu 22.04 user with an Nvidia Tesla P40 and a K80 GPU, and Ollama will not use the GPU. I can use text-generation-webui and it does use the GPU.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2034/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2034/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7541
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7541/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7541/comments
|
https://api.github.com/repos/ollama/ollama/issues/7541/events
|
https://github.com/ollama/ollama/issues/7541
| 2,639,968,107
|
I_kwDOJ0Z1Ps6dWrdr
| 7,541
|
How to use the brand new models?
|
{
"login": "dwsmart32",
"id": 70640776,
"node_id": "MDQ6VXNlcjcwNjQwNzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/70640776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwsmart32",
"html_url": "https://github.com/dwsmart32",
"followers_url": "https://api.github.com/users/dwsmart32/followers",
"following_url": "https://api.github.com/users/dwsmart32/following{/other_user}",
"gists_url": "https://api.github.com/users/dwsmart32/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwsmart32/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwsmart32/subscriptions",
"organizations_url": "https://api.github.com/users/dwsmart32/orgs",
"repos_url": "https://api.github.com/users/dwsmart32/repos",
"events_url": "https://api.github.com/users/dwsmart32/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwsmart32/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-07T05:54:54
| 2024-11-13T21:36:25
| 2024-11-13T21:36:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello, and thank you as always for all your hard work. I would like to ask how to use a VLM like Qwen2-VL-72B (not an LLM) or Nvidia/NVLM-D-72B in Ollama. When I attempt to customize the model, it fails due to an unsupported backbone. Could you advise on what steps are needed to enable support for these newer models? Thank you.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7541/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6818
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6818/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6818/comments
|
https://api.github.com/repos/ollama/ollama/issues/6818/events
|
https://github.com/ollama/ollama/pull/6818
| 2,527,264,043
|
PR_kwDOJ0Z1Ps57jyq6
| 6,818
|
Add vim-intelligence-bridge to Terminal section in README
|
{
"login": "pepo-ec",
"id": 1961172,
"node_id": "MDQ6VXNlcjE5NjExNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1961172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepo-ec",
"html_url": "https://github.com/pepo-ec",
"followers_url": "https://api.github.com/users/pepo-ec/followers",
"following_url": "https://api.github.com/users/pepo-ec/following{/other_user}",
"gists_url": "https://api.github.com/users/pepo-ec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pepo-ec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pepo-ec/subscriptions",
"organizations_url": "https://api.github.com/users/pepo-ec/orgs",
"repos_url": "https://api.github.com/users/pepo-ec/repos",
"events_url": "https://api.github.com/users/pepo-ec/events{/privacy}",
"received_events_url": "https://api.github.com/users/pepo-ec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-15T23:39:00
| 2024-09-16T01:20:36
| 2024-09-16T01:20:36
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6818",
"html_url": "https://github.com/ollama/ollama/pull/6818",
"diff_url": "https://github.com/ollama/ollama/pull/6818.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6818.patch",
"merged_at": "2024-09-16T01:20:36"
}
|
I'm proposing to add vim-intelligence-bridge to the Terminal section. This plugin is unique in that it is the only one that integrates Ollama directly with traditional Vim (not Neovim). Key benefits:
- Expands Ollama's reach to Vim users
- Promotes local, private AI processing within Vim
- Enhances developer productivity by integrating Ollama in the Vim workflow
This addition showcases Ollama's versatility and encourages wider adoption in the developer community.
Proposed change:
[vim-intelligence-bridge](https://github.com/pepo-ec/vim-intelligence-bridge) Simple interaction of "Ollama" with the Vim editor
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6818/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7127
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7127/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7127/comments
|
https://api.github.com/repos/ollama/ollama/issues/7127/events
|
https://github.com/ollama/ollama/issues/7127
| 2,572,074,646
|
I_kwDOJ0Z1Ps6ZTr6W
| 7,127
|
Difference in Function Call Support between Ollama and Unsloth for Llama 3.2
|
{
"login": "Saber120",
"id": 108297159,
"node_id": "U_kgDOBnR7xw",
"avatar_url": "https://avatars.githubusercontent.com/u/108297159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saber120",
"html_url": "https://github.com/Saber120",
"followers_url": "https://api.github.com/users/Saber120/followers",
"following_url": "https://api.github.com/users/Saber120/following{/other_user}",
"gists_url": "https://api.github.com/users/Saber120/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saber120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saber120/subscriptions",
"organizations_url": "https://api.github.com/users/Saber120/orgs",
"repos_url": "https://api.github.com/users/Saber120/repos",
"events_url": "https://api.github.com/users/Saber120/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saber120/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-08T05:00:07
| 2024-10-09T13:11:58
| 2024-10-09T13:11:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I’ve fine-tuned a Llama 3.2 model using Unsloth, and when I try to enable function calling with Ollama, I receive a message indicating that the model does not support function calls. However, when I download the same Llama 3.2 model directly from Ollama, function calls work without any issues.
Could you clarify why the same model behaves differently in terms of function call support when trained with Unsloth versus when downloaded from Ollama? Is there something specific in the way Ollama’s version of the model is configured that enables function call support, or are there certain settings I need to adjust in the Unsloth training process to enable this feature?
Thanks in advance for your help!
### OS
Linux
### GPU
Other
### CPU
Other
### Ollama version
0.3.12
|
{
"login": "Saber120",
"id": 108297159,
"node_id": "U_kgDOBnR7xw",
"avatar_url": "https://avatars.githubusercontent.com/u/108297159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saber120",
"html_url": "https://github.com/Saber120",
"followers_url": "https://api.github.com/users/Saber120/followers",
"following_url": "https://api.github.com/users/Saber120/following{/other_user}",
"gists_url": "https://api.github.com/users/Saber120/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saber120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saber120/subscriptions",
"organizations_url": "https://api.github.com/users/Saber120/orgs",
"repos_url": "https://api.github.com/users/Saber120/repos",
"events_url": "https://api.github.com/users/Saber120/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saber120/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7127/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1356
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1356/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1356/comments
|
https://api.github.com/repos/ollama/ollama/issues/1356/events
|
https://github.com/ollama/ollama/issues/1356
| 2,022,290,587
|
I_kwDOJ0Z1Ps54ibSb
| 1,356
|
Concurrency and multiple calls
|
{
"login": "enriquesouza",
"id": 1700699,
"node_id": "MDQ6VXNlcjE3MDA2OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1700699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enriquesouza",
"html_url": "https://github.com/enriquesouza",
"followers_url": "https://api.github.com/users/enriquesouza/followers",
"following_url": "https://api.github.com/users/enriquesouza/following{/other_user}",
"gists_url": "https://api.github.com/users/enriquesouza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enriquesouza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enriquesouza/subscriptions",
"organizations_url": "https://api.github.com/users/enriquesouza/orgs",
"repos_url": "https://api.github.com/users/enriquesouza/repos",
"events_url": "https://api.github.com/users/enriquesouza/events{/privacy}",
"received_events_url": "https://api.github.com/users/enriquesouza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-03T02:01:34
| 2024-01-26T23:55:51
| 2024-01-26T23:55:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I would like to know whether it is possible to run Ollama and make multiple concurrent calls. I would love to set up a server and use it for my users.
When testing it through the liteLLM proxy, I saw that it waits until one request finishes before processing the next.
Is concurrency possible?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1356/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1356/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5356
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5356/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5356/comments
|
https://api.github.com/repos/ollama/ollama/issues/5356/events
|
https://github.com/ollama/ollama/issues/5356
| 2,380,063,057
|
I_kwDOJ0Z1Ps6N3OFR
| 5,356
|
allow for num_ctx parameter in the openai API compatibility
|
{
"login": "PabloRMira",
"id": 36644554,
"node_id": "MDQ6VXNlcjM2NjQ0NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/36644554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PabloRMira",
"html_url": "https://github.com/PabloRMira",
"followers_url": "https://api.github.com/users/PabloRMira/followers",
"following_url": "https://api.github.com/users/PabloRMira/following{/other_user}",
"gists_url": "https://api.github.com/users/PabloRMira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PabloRMira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PabloRMira/subscriptions",
"organizations_url": "https://api.github.com/users/PabloRMira/orgs",
"repos_url": "https://api.github.com/users/PabloRMira/repos",
"events_url": "https://api.github.com/users/PabloRMira/events{/privacy}",
"received_events_url": "https://api.github.com/users/PabloRMira/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 7
| 2024-06-28T10:04:14
| 2025-01-18T05:00:23
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The OpenAI compatibility module does not allow setting the context window size (num_ctx) dynamically via the API call; instead, we have to adjust the Modelfile each time we want to use a different context window.
It would therefore be great to have this in the OpenAI compatibility layer. I can also try a PR for this.
Thanks a lot for this wonderful project! :-)
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5356/reactions",
"total_count": 8,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/ollama/ollama/issues/5356/timeline
| null |
reopened
| false
|
https://api.github.com/repos/ollama/ollama/issues/4827
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4827/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4827/comments
|
https://api.github.com/repos/ollama/ollama/issues/4827/events
|
https://github.com/ollama/ollama/issues/4827
| 2,334,988,111
|
I_kwDOJ0Z1Ps6LLRdP
| 4,827
|
When do you introduce the glm-4-9b model?
|
{
"login": "mywwq",
"id": 133221105,
"node_id": "U_kgDOB_DK8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/133221105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mywwq",
"html_url": "https://github.com/mywwq",
"followers_url": "https://api.github.com/users/mywwq/followers",
"following_url": "https://api.github.com/users/mywwq/following{/other_user}",
"gists_url": "https://api.github.com/users/mywwq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mywwq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mywwq/subscriptions",
"organizations_url": "https://api.github.com/users/mywwq/orgs",
"repos_url": "https://api.github.com/users/mywwq/repos",
"events_url": "https://api.github.com/users/mywwq/events{/privacy}",
"received_events_url": "https://api.github.com/users/mywwq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-05T06:10:11
| 2024-06-06T17:33:16
| 2024-06-06T17:33:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When do you introduce the glm-4-9b model?
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4827/reactions",
"total_count": 28,
"+1": 24,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4827/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2549
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2549/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2549/comments
|
https://api.github.com/repos/ollama/ollama/issues/2549/events
|
https://github.com/ollama/ollama/issues/2549
| 2,139,297,883
|
I_kwDOJ0Z1Ps5_gxhb
| 2,549
|
Invalid characters in windows command prompt
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-02-16T19:35:36
| 2024-07-29T20:40:25
| 2024-07-29T20:40:25
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2549/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/39
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/39/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/39/comments
|
https://api.github.com/repos/ollama/ollama/issues/39/events
|
https://github.com/ollama/ollama/pull/39
| 1,790,602,456
|
PR_kwDOJ0Z1Ps5UwrWd
| 39
|
enable metal gpu acceleration
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-07-06T01:12:35
| 2023-07-06T18:25:25
| 2023-07-06T01:45:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/39",
"html_url": "https://github.com/ollama/ollama/pull/39",
"diff_url": "https://github.com/ollama/ollama/pull/39.diff",
"patch_url": "https://github.com/ollama/ollama/pull/39.patch",
"merged_at": "2023-07-06T01:45:53"
}
|
ggml-metal.metal must be in the same directory as the ollama binary, otherwise llama.cpp will not be able to find and load it.
1. `go generate llama/llama_metal.go`
2. `go build .`
3. `./ollama serve`
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/39/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/39/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/373
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/373/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/373/comments
|
https://api.github.com/repos/ollama/ollama/issues/373/events
|
https://github.com/ollama/ollama/issues/373
| 1,855,727,475
|
I_kwDOJ0Z1Ps5unCdz
| 373
|
Can we optimize performance with the Apple M1 Max's 32-core GPU and Neural Engine?
|
{
"login": "pascalandy",
"id": 6694151,
"node_id": "MDQ6VXNlcjY2OTQxNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6694151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pascalandy",
"html_url": "https://github.com/pascalandy",
"followers_url": "https://api.github.com/users/pascalandy/followers",
"following_url": "https://api.github.com/users/pascalandy/following{/other_user}",
"gists_url": "https://api.github.com/users/pascalandy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pascalandy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pascalandy/subscriptions",
"organizations_url": "https://api.github.com/users/pascalandy/orgs",
"repos_url": "https://api.github.com/users/pascalandy/repos",
"events_url": "https://api.github.com/users/pascalandy/events{/privacy}",
"received_events_url": "https://api.github.com/users/pascalandy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 14
| 2023-08-17T21:20:58
| 2024-06-28T20:35:45
| 2023-10-23T16:32:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello everyone,
I'm keen to explore ways to maximize the efficiency of my robust machines. It appears that Ollama currently utilizes only the CPU for processing.
I'm wondering if there's an option to configure it to leverage our GPU. Specifically, I'm interested in harnessing the power of the **32-core GPU** and the **16-core Neural Engine** in my setup.
Considering the **specifications** of the Apple M1 Max chip:
- 10-core CPU with 8 performance cores and 2 efficiency cores
- 32-core GPU
- 16-core Neural Engine
- 400GB/s memory bandwidth
Media engine:
- Hardware-accelerated H.264, HEVC, ProRes, and ProRes RAW
- Video decode engine
- Two video encode engines
- Two ProRes encode and decode engines
Cheers!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/373/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/373/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5389
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5389/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5389/comments
|
https://api.github.com/repos/ollama/ollama/issues/5389/events
|
https://github.com/ollama/ollama/issues/5389
| 2,382,163,897
|
I_kwDOJ0Z1Ps6N_O-5
| 5,389
|
Add Basic Networking Tools to Docker Image for Health Checks
|
{
"login": "OlaoluwaM",
"id": 37044906,
"node_id": "MDQ6VXNlcjM3MDQ0OTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/37044906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OlaoluwaM",
"html_url": "https://github.com/OlaoluwaM",
"followers_url": "https://api.github.com/users/OlaoluwaM/followers",
"following_url": "https://api.github.com/users/OlaoluwaM/following{/other_user}",
"gists_url": "https://api.github.com/users/OlaoluwaM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OlaoluwaM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OlaoluwaM/subscriptions",
"organizations_url": "https://api.github.com/users/OlaoluwaM/orgs",
"repos_url": "https://api.github.com/users/OlaoluwaM/repos",
"events_url": "https://api.github.com/users/OlaoluwaM/events{/privacy}",
"received_events_url": "https://api.github.com/users/OlaoluwaM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 4
| 2024-06-30T09:16:26
| 2024-10-22T20:56:08
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi! Would it be possible to include one or two network command packages like `ping`, `netcat`, or `curl` in the Docker image? This would allow folks who want to query the health check endpoint using `docker compose` or similar tools to do so without needing to create a custom `Dockerfile`.
I get that y'all are [hesitant](https://github.com/ollama/ollama/pull/1909) about adding a `HEALTHCHECK` command directly to the Dockerfile since it's not standard in any of Docker's official images. But at least with these packages, anyone wanting to set up a health check for their containers can do so more easily.
I'd be happy to open a PR for this if you think it's a good idea.
Thanks!
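In the meantime, a liveness probe can also be run from outside the container in plain Python, with no `curl` or `netcat` baked into the image. This is only a sketch: it assumes the server's default port (11434) and relies on Ollama answering a plain GET on `/` (the "Ollama is running" response); `ollama_healthy` is an illustrative helper, not part of any API.

```python
import urllib.error
import urllib.request

def ollama_healthy(base_url="http://127.0.0.1:11434", timeout=2.0):
    """Return True if the Ollama server answers on its root endpoint.

    The server replies "Ollama is running" to a GET on '/', so a plain
    HTTP request works as a liveness probe without extra tooling.
    """
    try:
        with urllib.request.urlopen(base_url + "/", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc. => unhealthy.
        return False
```

Wired into an external scheduler (or a sidecar), this gives the same signal a `HEALTHCHECK` would, without modifying the image.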
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5389/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5389/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8607
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8607/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8607/comments
|
https://api.github.com/repos/ollama/ollama/issues/8607/events
|
https://github.com/ollama/ollama/issues/8607
| 2,812,661,670
|
I_kwDOJ0Z1Ps6npc-m
| 8,607
|
Add an ability to inject env variables to modelfile system message.
|
{
"login": "BotVasya",
"id": 10455417,
"node_id": "MDQ6VXNlcjEwNDU1NDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/10455417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BotVasya",
"html_url": "https://github.com/BotVasya",
"followers_url": "https://api.github.com/users/BotVasya/followers",
"following_url": "https://api.github.com/users/BotVasya/following{/other_user}",
"gists_url": "https://api.github.com/users/BotVasya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BotVasya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BotVasya/subscriptions",
"organizations_url": "https://api.github.com/users/BotVasya/orgs",
"repos_url": "https://api.github.com/users/BotVasya/repos",
"events_url": "https://api.github.com/users/BotVasya/events{/privacy}",
"received_events_url": "https://api.github.com/users/BotVasya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-27T10:43:33
| 2025-01-27T10:43:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi All.
I've realized that there is no way for Ollama models to know the current date and time when running on MS Windows. So I believe it would be useful if OS environment variables could be used in the Modelfile. Especially for date and time, it would be better if the model could obtain that data dynamically, right when it is needed.
Use case: model could answer what date and time it is.
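Until something like this exists, one workaround is to skip the static `SYSTEM` line in the Modelfile and compose the system prompt at request time, passing it via the API's `system` field instead. A minimal sketch (the `system_with_date` helper is hypothetical, purely for illustration):

```python
from datetime import datetime

def system_with_date(base_system):
    """Compose the system prompt at request time rather than baking it
    into the Modelfile, so the model always sees the current date/time."""
    now = datetime.now().strftime("%Y-%m-%d %H:%M")
    return f"{base_system}\nCurrent date and time: {now}"
```

The resulting string would be sent as the `system` parameter of a `/api/generate` or `/api/chat` request on every call, which covers the "what time is it" use case without any Modelfile changes.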
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8607/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/707
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/707/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/707/comments
|
https://api.github.com/repos/ollama/ollama/issues/707/events
|
https://github.com/ollama/ollama/issues/707
| 1,927,459,911
|
I_kwDOJ0Z1Ps5y4rRH
| 707
|
127.0.0.1:11434: bind: address already in use
|
{
"login": "Nivek92",
"id": 5087400,
"node_id": "MDQ6VXNlcjUwODc0MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5087400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nivek92",
"html_url": "https://github.com/Nivek92",
"followers_url": "https://api.github.com/users/Nivek92/followers",
"following_url": "https://api.github.com/users/Nivek92/following{/other_user}",
"gists_url": "https://api.github.com/users/Nivek92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nivek92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nivek92/subscriptions",
"organizations_url": "https://api.github.com/users/Nivek92/orgs",
"repos_url": "https://api.github.com/users/Nivek92/repos",
"events_url": "https://api.github.com/users/Nivek92/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nivek92/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 29
| 2023-10-05T06:15:02
| 2024-07-09T09:58:19
| 2023-12-04T19:39:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I run `ollama serve` I get
`Error: listen tcp 127.0.0.1:11434: bind: address already in use`
After checking what's running on the port with `sudo lsof -i :11434`
I see that ollama is already running
`ollama 2233 ollama 3u IPv4 37563 0t0 TCP localhost:11434 (LISTEN)`
I killed the process and ran the serve command again and got the same error. So it seems that it tries to start the server twice.
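For anyone hitting this, the port can be checked before running `ollama serve` without `lsof`, using only the standard library. A small sketch (`port_in_use` is just an illustrative helper; it assumes the default bind address):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful TCP connect,
        # i.e. when a listener already occupies the port.
        return s.connect_ex((host, port)) == 0
```

If `port_in_use(11434)` is true, a server instance (often the one managed by systemd or the desktop app) is already running, and starting a second one by hand is unnecessary.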
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/707/reactions",
"total_count": 40,
"+1": 40,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/707/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6314
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6314/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6314/comments
|
https://api.github.com/repos/ollama/ollama/issues/6314/events
|
https://github.com/ollama/ollama/issues/6314
| 2,459,722,892
|
I_kwDOJ0Z1Ps6SnGSM
| 6,314
|
Better guidance for using `with_structured_output` with `ChatOllama`
|
{
"login": "GuyPaddock",
"id": 2631799,
"node_id": "MDQ6VXNlcjI2MzE3OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2631799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuyPaddock",
"html_url": "https://github.com/GuyPaddock",
"followers_url": "https://api.github.com/users/GuyPaddock/followers",
"following_url": "https://api.github.com/users/GuyPaddock/following{/other_user}",
"gists_url": "https://api.github.com/users/GuyPaddock/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GuyPaddock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuyPaddock/subscriptions",
"organizations_url": "https://api.github.com/users/GuyPaddock/orgs",
"repos_url": "https://api.github.com/users/GuyPaddock/repos",
"events_url": "https://api.github.com/users/GuyPaddock/events{/privacy}",
"received_events_url": "https://api.github.com/users/GuyPaddock/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-08-11T19:09:17
| 2024-08-15T20:53:05
| 2024-08-15T20:53:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When using `ChatOllama` from `langchain_ollama` rather than `langchain_community.chat_models`, it's possible to use `with_structured_output`. However, there are several pitfalls that the docs hint at but don't explicitly mention, leading to issues like these:
* https://github.com/langchain-ai/langchain/discussions/22195
* https://github.com/langchain-ai/langchain/discussions/23079
It would be great if the docs could highlight the following:
1. If the older `langchain_community` version is pulled in, `with_structured_output` doesn't work.
2. If the model being used doesn't support tool calling (e.g., `phi3:14b`), it's not possible to use `with_structured_output`. I know that [the documentation](https://python.langchain.com/v0.2/docs/how_to/structured_output/) hints that using structured output is "like" tool calling or that under the hood it might be tool calling, but it's understandable that a reader might confuse structured output with asking the model to output JSON. At first, the impression the documentation left me with was that structured output was just taking the raw text of what the LLM returned, parsing it as JSON, and marshalling it into a model object.
3. It's possible to get `None` as a response from `chain.invoke()` even though the response from the LLM is JSON. This is because the model might opt _not_ to invoke the "tool" that creates the JSON response, and instead responds with JSON in the `content` of the response payload. The reason this is confusing/counter-intuitive is because when `include_raw` is `False`, you get nothing back from the model even though it actually _has_ replied with JSON, while you can see the JSON if `include_raw` is `True`.
4. You have to use a very intention-revealing name for your Pydantic model and/or mention the model name explicitly in the prompt to get the model to invoke the "tool" to return the result as a Pydantic model. The docs mention that the name is important but don't mention what happens if you get this wrong.
5. The Pydantic model descriptions do not appear in the verbose debug output when using `langchain.globals.set_debug(True)` and `langchain.globals.set_verbose(True)`. This makes it harder to see what the model was told about the Pydantic schema.
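For pitfall 3, a defensive fallback can often recover the JSON from the raw payload when `parsed` comes back `None`. This is a hedged sketch: it assumes the `include_raw=True` result shape (a dict with `parsed` and `raw` keys) described in the LangChain docs, and `recover_structured` is an illustrative helper, not part of the API:

```python
import json

def recover_structured(result):
    """Fallback for the 'None result' pitfall: when the model skips the
    tool call, 'parsed' is None but the raw message content may still be
    valid JSON. 'result' mirrors the include_raw=True shape
    ('parsed'/'raw' keys); that shape is an assumption, not guaranteed."""
    if result.get("parsed") is not None:
        return result["parsed"]
    raw = result.get("raw")
    # LangChain messages carry their text in .content; fall back to the
    # raw value itself if it is already a string.
    content = getattr(raw, "content", raw)
    try:
        return json.loads(content)
    except (TypeError, ValueError):
        return None
```

This doesn't remove the need to fix the prompt or model choice (pitfalls 2 and 4), but it makes the failure visible instead of silently returning `None`.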
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6314/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/3542
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3542/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3542/comments
|
https://api.github.com/repos/ollama/ollama/issues/3542/events
|
https://github.com/ollama/ollama/issues/3542
| 2,231,954,406
|
I_kwDOJ0Z1Ps6FCOvm
| 3,542
|
Push of new model
|
{
"login": "emsi",
"id": 433383,
"node_id": "MDQ6VXNlcjQzMzM4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/433383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emsi",
"html_url": "https://github.com/emsi",
"followers_url": "https://api.github.com/users/emsi/followers",
"following_url": "https://api.github.com/users/emsi/following{/other_user}",
"gists_url": "https://api.github.com/users/emsi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emsi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emsi/subscriptions",
"organizations_url": "https://api.github.com/users/emsi/orgs",
"repos_url": "https://api.github.com/users/emsi/repos",
"events_url": "https://api.github.com/users/emsi/events{/privacy}",
"received_events_url": "https://api.github.com/users/emsi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-08T19:40:56
| 2024-04-09T14:29:15
| 2024-04-08T20:44:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When trying to push a NEW model I get `Error: file does not exist`.
I have debugged it under mitm, and it seems the ollama server tries to HEAD the new, nonexistent model:
```
[19:36:39.502][172.17.0.1:52910] server connect registry.ollama.ai:443 (104.21.75.227:443)
172.17.0.1:52910: HEAD https://registry.ollama.ai/v2/emsi/Qra-13b/blobs/sha256:d381585268275794b5c658640369b3c112d982a0fef343da4bf50404bfe9e03f
<< 404 Not Found 0b
172.17.0.1:52910: POST https://registry.ollama.ai/v2/emsi/Qra-13b/blobs/uploads/
<< 404 Not Found 19b
```
### What did you expect to see?
It should upload.
### Steps to reproduce
Create a completely new model then:
```
root@105638d575be:/# ollama -v
ollama version is 0.1.30
root@105638d575be:/# ollama run emsi/Qra-13b
>>> Send a message (/? for help)
root@105638d575be:/# ollama push emsi/Qra-13b
retrieving manifest
Error: file does not exist
```
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
_No response_
### Platform
Docker
### Ollama version
0.1.30
### GPU
Nvidia
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3542/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/812
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/812/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/812/comments
|
https://api.github.com/repos/ollama/ollama/issues/812/events
|
https://github.com/ollama/ollama/pull/812
| 1,946,248,418
|
PR_kwDOJ0Z1Ps5c9JJ6
| 812
|
fix: wrong format string type
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-16T23:14:51
| 2023-10-17T15:40:50
| 2023-10-17T15:40:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/812",
"html_url": "https://github.com/ollama/ollama/pull/812",
"diff_url": "https://github.com/ollama/ollama/pull/812.diff",
"patch_url": "https://github.com/ollama/ollama/pull/812.patch",
"merged_at": "2023-10-17T15:40:49"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/812/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8225
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8225/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8225/comments
|
https://api.github.com/repos/ollama/ollama/issues/8225/events
|
https://github.com/ollama/ollama/pull/8225
| 2,757,158,560
|
PR_kwDOJ0Z1Ps6GIxT9
| 8,225
|
README.md inclusion of a project alpaca
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-24T04:35:57
| 2024-12-24T07:25:47
| 2024-12-24T07:25:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8225",
"html_url": "https://github.com/ollama/ollama/pull/8225",
"diff_url": "https://github.com/ollama/ollama/pull/8225.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8225.patch",
"merged_at": null
}
|
Alpaca: an Ollama client application for Linux and macOS, made with GTK4 and Adwaita.
https://github.com/ollama/ollama/issues/8220#issuecomment-2560451003
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8225/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5625
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5625/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5625/comments
|
https://api.github.com/repos/ollama/ollama/issues/5625/events
|
https://github.com/ollama/ollama/issues/5625
| 2,402,195,194
|
I_kwDOJ0Z1Ps6PLpb6
| 5,625
|
gpu discovery crashes on nvidia CC 2.1 GPU on windows 10
|
{
"login": "snufflemarlstar-rg",
"id": 54530186,
"node_id": "MDQ6VXNlcjU0NTMwMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/54530186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snufflemarlstar-rg",
"html_url": "https://github.com/snufflemarlstar-rg",
"followers_url": "https://api.github.com/users/snufflemarlstar-rg/followers",
"following_url": "https://api.github.com/users/snufflemarlstar-rg/following{/other_user}",
"gists_url": "https://api.github.com/users/snufflemarlstar-rg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snufflemarlstar-rg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snufflemarlstar-rg/subscriptions",
"organizations_url": "https://api.github.com/users/snufflemarlstar-rg/orgs",
"repos_url": "https://api.github.com/users/snufflemarlstar-rg/repos",
"events_url": "https://api.github.com/users/snufflemarlstar-rg/events{/privacy}",
"received_events_url": "https://api.github.com/users/snufflemarlstar-rg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 9
| 2024-07-11T04:00:30
| 2024-11-13T21:38:00
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have repeatedly installed and uninstalled Ollama and searched for advice regarding
"Warning: could not connect to a running Ollama instance" on Windows 10, but I have not found a solution.
```
2024/07/11 10:49:03 routes.go:1033: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:C:\\Users\\hp\\.ollama\\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\hp\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-11T10:49:03.902+07:00 level=INFO source=images.go:751 msg="total blobs: 0"
time=2024-07-11T10:49:03.905+07:00 level=INFO source=images.go:758 msg="total unused blobs removed: 0"
time=2024-07-11T10:49:03.906+07:00 level=INFO source=routes.go:1080 msg="Listening on 127.0.0.1:11434 (version 0.2.1)"
time=2024-07-11T10:49:03.907+07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7 cpu]"
time=2024-07-11T10:49:03.907+07:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
Exception 0xc0000005 0x8 0x1ec23f01c10 0x1ec23f01c10
PC=0x1ec23f01c10
signal arrived during external code execution
```
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
0.2.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5625/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7048
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7048/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7048/comments
|
https://api.github.com/repos/ollama/ollama/issues/7048/events
|
https://github.com/ollama/ollama/issues/7048
| 2,557,089,341
|
I_kwDOJ0Z1Ps6YahY9
| 7,048
|
Molmo support
|
{
"login": "win4r",
"id": 42172631,
"node_id": "MDQ6VXNlcjQyMTcyNjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/42172631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/win4r",
"html_url": "https://github.com/win4r",
"followers_url": "https://api.github.com/users/win4r/followers",
"following_url": "https://api.github.com/users/win4r/following{/other_user}",
"gists_url": "https://api.github.com/users/win4r/gists{/gist_id}",
"starred_url": "https://api.github.com/users/win4r/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/win4r/subscriptions",
"organizations_url": "https://api.github.com/users/win4r/orgs",
"repos_url": "https://api.github.com/users/win4r/repos",
"events_url": "https://api.github.com/users/win4r/events{/privacy}",
"received_events_url": "https://api.github.com/users/win4r/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-30T16:03:29
| 2024-10-03T19:47:34
| 2024-10-03T19:47:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://youtu.be/gtcOncFLMeo
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7048/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7668
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7668/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7668/comments
|
https://api.github.com/repos/ollama/ollama/issues/7668/events
|
https://github.com/ollama/ollama/issues/7668
| 2,659,793,854
|
I_kwDOJ0Z1Ps6eiTu-
| 7,668
|
Ollama Error - {json_chunk}
|
{
"login": "papiche",
"id": 80590245,
"node_id": "MDQ6VXNlcjgwNTkwMjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/80590245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/papiche",
"html_url": "https://github.com/papiche",
"followers_url": "https://api.github.com/users/papiche/followers",
"following_url": "https://api.github.com/users/papiche/following{/other_user}",
"gists_url": "https://api.github.com/users/papiche/gists{/gist_id}",
"starred_url": "https://api.github.com/users/papiche/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/papiche/subscriptions",
"organizations_url": "https://api.github.com/users/papiche/orgs",
"repos_url": "https://api.github.com/users/papiche/repos",
"events_url": "https://api.github.com/users/papiche/events{/privacy}",
"received_events_url": "https://api.github.com/users/papiche/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-11-14T19:11:39
| 2024-12-30T03:25:19
| 2024-12-02T15:26:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Using aider.chat with ollama, I got:
```
litellm.APIConnectionError: Ollama Error - {'error': 'an unknown error was encountered while running the model '}
Traceback (most recent call last):
File "/home/fred/.astro/lib/python3.12/site-packages/litellm/utils.py", line 7023, in chunk_creator
response_obj = self.handle_ollama_stream(chunk)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/fred/.astro/lib/python3.12/site-packages/litellm/utils.py", line 6566, in handle_ollama_stream
raise e
File "/home/fred/.astro/lib/python3.12/site-packages/litellm/utils.py", line 6541, in handle_ollama_stream
raise Exception(f"Ollama Error - {json_chunk}")
Exception: Ollama Error - {'error': 'an unknown error was encountered while running the model '}
Retrying in 0.2 seconds...
litellm.APIConnectionError: Ollama Error - {'error': 'an unknown error was encountered while running the model '}
Traceback (most recent call last):
File "/home/fred/.astro/lib/python3.12/site-packages/litellm/utils.py", line 7023, in chunk_creator
response_obj = self.handle_ollama_stream(chunk)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/fred/.astro/lib/python3.12/site-packages/litellm/utils.py", line 6566, in handle_ollama_stream
raise e
File "/home/fred/.astro/lib/python3.12/site-packages/litellm/utils.py", line 6541, in handle_ollama_stream
raise Exception(f"Ollama Error - {json_chunk}")
Exception: Ollama Error - {'error': 'an unknown error was encountered while running the model '}
```
https://github.com/Aider-AI/aider/issues/2372
Any idea what is wrong?
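For context, the traceback comes from litellm parsing Ollama's streamed NDJSON chunks: when a chunk carries an `error` key, litellm raises instead of yielding text. A minimal sketch of that behavior (the function name and message format here mirror the traceback but are otherwise an assumption, not litellm's actual code):

```python
import json

def handle_ollama_chunk(chunk: str) -> str:
    """Parse one NDJSON chunk from Ollama's streaming API.

    Raises if the server reported an error in the chunk, which is
    what produces the "Ollama Error - {json_chunk}" exception above.
    """
    obj = json.loads(chunk)
    if "error" in obj:
        raise RuntimeError(f"Ollama Error - {obj}")
    return obj.get("response", "")

# A normal chunk yields its text; an error chunk raises.
print(handle_ollama_chunk('{"response": "Hello"}'))  # Hello
```

So the root cause is server-side: Ollama itself returned `{'error': 'an unknown error was encountered while running the model '}` in the stream, and the server logs are the place to look.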
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.1
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7668/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1710
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1710/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1710/comments
|
https://api.github.com/repos/ollama/ollama/issues/1710/events
|
https://github.com/ollama/ollama/issues/1710
| 2,055,779,339
|
I_kwDOJ0Z1Ps56iLQL
| 1,710
|
How do we output ollama response to file?
|
{
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/oliverbob/followers",
"following_url": "https://api.github.com/users/oliverbob/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverbob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverbob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverbob/subscriptions",
"organizations_url": "https://api.github.com/users/oliverbob/orgs",
"repos_url": "https://api.github.com/users/oliverbob/repos",
"events_url": "https://api.github.com/users/oliverbob/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverbob/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-12-25T16:46:17
| 2025-01-14T21:54:24
| 2023-12-26T09:55:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If Ollama can read prompts from a file, there should be a way to write the response to a file and save it in the working directory.
How do I achieve this?
Scenario:
ollama run dolphin-phi
>>> '/home/ai/repo/llama2.c/run.c' rewrite this code with arguments for blah... :smile:
Thanks.
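A minimal sketch of the usual answer (the model name and output path are assumptions): when a prompt is piped in or passed as an argument, `ollama run` is non-interactive and writes the reply to stdout, so ordinary shell redirection saves it to a file.

```shell
# Pipe the prompt in and redirect the reply to a file in the working directory:
echo "Rewrite this code with arguments" | ollama run dolphin-phi > answer.txt

# Or pass the prompt as an argument, appending to the same file:
ollama run dolphin-phi "Summarize /home/ai/repo/llama2.c/run.c" >> answer.txt
```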
|
{
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/oliverbob/followers",
"following_url": "https://api.github.com/users/oliverbob/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverbob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverbob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverbob/subscriptions",
"organizations_url": "https://api.github.com/users/oliverbob/orgs",
"repos_url": "https://api.github.com/users/oliverbob/repos",
"events_url": "https://api.github.com/users/oliverbob/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverbob/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1710/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1710/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8608
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8608/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8608/comments
|
https://api.github.com/repos/ollama/ollama/issues/8608/events
|
https://github.com/ollama/ollama/issues/8608
| 2,812,662,391
|
I_kwDOJ0Z1Ps6npdJ3
| 8,608
|
Panic while downloading the model
|
{
"login": "tchaton",
"id": 12861981,
"node_id": "MDQ6VXNlcjEyODYxOTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/12861981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tchaton",
"html_url": "https://github.com/tchaton",
"followers_url": "https://api.github.com/users/tchaton/followers",
"following_url": "https://api.github.com/users/tchaton/following{/other_user}",
"gists_url": "https://api.github.com/users/tchaton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tchaton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tchaton/subscriptions",
"organizations_url": "https://api.github.com/users/tchaton/orgs",
"repos_url": "https://api.github.com/users/tchaton/repos",
"events_url": "https://api.github.com/users/tchaton/events{/privacy}",
"received_events_url": "https://api.github.com/users/tchaton/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-27T10:43:53
| 2025-01-27T16:23:44
| 2025-01-27T16:23:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
`/bin/ollama run llama3.1`
<img width="1243" alt="Image" src="https://github.com/user-attachments/assets/0c520af1-52d5-4371-bf89-fac7a9fe94d9" />
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8608/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/310
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/310/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/310/comments
|
https://api.github.com/repos/ollama/ollama/issues/310/events
|
https://github.com/ollama/ollama/issues/310
| 1,842,091,811
|
I_kwDOJ0Z1Ps5tzBcj
| 310
|
generating embeddings when creating a model should use loaded llm logic
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2023-08-08T20:58:48
| 2023-08-15T19:12:03
| 2023-08-15T19:12:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Right now, embeddings Modelfile generation ignores the loaded model and loads its own; it should share the same logic
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/310/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6875
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6875/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6875/comments
|
https://api.github.com/repos/ollama/ollama/issues/6875/events
|
https://github.com/ollama/ollama/issues/6875
| 2,535,914,100
|
I_kwDOJ0Z1Ps6XJvp0
| 6,875
|
reader-lm - heavy hallucinations?
|
{
"login": "MeinDeutschkurs",
"id": 129950466,
"node_id": "U_kgDOB77jAg",
"avatar_url": "https://avatars.githubusercontent.com/u/129950466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MeinDeutschkurs",
"html_url": "https://github.com/MeinDeutschkurs",
"followers_url": "https://api.github.com/users/MeinDeutschkurs/followers",
"following_url": "https://api.github.com/users/MeinDeutschkurs/following{/other_user}",
"gists_url": "https://api.github.com/users/MeinDeutschkurs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MeinDeutschkurs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MeinDeutschkurs/subscriptions",
"organizations_url": "https://api.github.com/users/MeinDeutschkurs/orgs",
"repos_url": "https://api.github.com/users/MeinDeutschkurs/repos",
"events_url": "https://api.github.com/users/MeinDeutschkurs/events{/privacy}",
"received_events_url": "https://api.github.com/users/MeinDeutschkurs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 15
| 2024-09-19T10:20:09
| 2025-01-07T23:57:39
| 2025-01-07T23:57:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Heavy hallucinations.
Is this normal for the model, or does it not know when to stop?
```
(base) jophi@10 flux-web % ollama run reader-lm
pulling manifest
pulling b730ecb20584... 100% ▕████████████████▏ 934 MB
pulling 94baea11f788... 100% ▕████████████████▏ 13 KB
pulling 60ed67c565f8... 100% ▕████████████████▏ 506 B
pulling f02dd72bb242... 100% ▕████████████████▏ 59 B
pulling 1b6ce7ace735... 100% ▕████████████████▏ 485 B
verifying sha256 digest
writing manifest
success
>>> <html><body><h1>Hallo Welt</h1><p>Schön, dass du hier bist</p></body></html>
...
Hallo Welt
-----------
Schön, dass du hier bist
Hello World!
------------
Thank you for visiting my website!
My name is Markus. I am a web designer and developer. I have been doing
this for more than 10 years.
I love coding in PHP, CSS and JavaScript.
Here are some of the projects that I've done:

And here is another one:

I have a passion for learning new things and experimenting with different
technologies.
If you want to learn more about my work, I have created the following
pages:
[Design](http://markus-mark.com/design/)
[Development](http://markus-mark.com/development/)
[Ideas](http://markus-mark.com/ideas/)
I hope you enjoy my website!
Thank you again for visiting.
Best regards,
Markus
```
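One common mitigation for runaway generation like this (a sketch only; the parameter values are illustrative, not the model's recommended settings) is to cap output length via a custom Modelfile:

```
FROM reader-lm
PARAMETER num_predict 512
PARAMETER temperature 0
```

Built with `ollama create reader-lm-capped -f Modelfile`, this at least bounds how far the model can continue past the converted input, though it does not fix the underlying failure to emit a stop token.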
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.11
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6875/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6875/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5723
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5723/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5723/comments
|
https://api.github.com/repos/ollama/ollama/issues/5723/events
|
https://github.com/ollama/ollama/issues/5723
| 2,411,224,046
|
I_kwDOJ0Z1Ps6PuFvu
| 5,723
|
Is ollama since 0.2.1 slower on CPU's
|
{
"login": "mklue",
"id": 108096783,
"node_id": "U_kgDOBnFtDw",
"avatar_url": "https://avatars.githubusercontent.com/u/108096783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mklue",
"html_url": "https://github.com/mklue",
"followers_url": "https://api.github.com/users/mklue/followers",
"following_url": "https://api.github.com/users/mklue/following{/other_user}",
"gists_url": "https://api.github.com/users/mklue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mklue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mklue/subscriptions",
"organizations_url": "https://api.github.com/users/mklue/orgs",
"repos_url": "https://api.github.com/users/mklue/repos",
"events_url": "https://api.github.com/users/mklue/events{/privacy}",
"received_events_url": "https://api.github.com/users/mklue/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-07-16T13:58:59
| 2024-08-09T23:34:17
| 2024-08-09T23:34:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Love the project. Recently I updated from 0.2.1 to 0.2.2 and found the server's responses slower. I upgraded to 0.2.5 and I don't think it has improved. Is it load times on the initial run?
I have a 36-core Xeon processor, 64 GB RAM, and an older Radeon GPU, so all the work is going on the CPUs. In 0.2.1 it responded quickly, but now the response delay is noticeable.
### OS
Linux
### GPU
AMD
### CPU
Intel
### Ollama version
0.2.5
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5723/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8066
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8066/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8066/comments
|
https://api.github.com/repos/ollama/ollama/issues/8066/events
|
https://github.com/ollama/ollama/issues/8066
| 2,735,122,133
|
I_kwDOJ0Z1Ps6jBqbV
| 8,066
|
ollama 0.5.1 is detecting my NVIDIA Tesla M40, but they are not used.
|
{
"login": "bones0",
"id": 55978585,
"node_id": "MDQ6VXNlcjU1OTc4NTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/55978585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bones0",
"html_url": "https://github.com/bones0",
"followers_url": "https://api.github.com/users/bones0/followers",
"following_url": "https://api.github.com/users/bones0/following{/other_user}",
"gists_url": "https://api.github.com/users/bones0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bones0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bones0/subscriptions",
"organizations_url": "https://api.github.com/users/bones0/orgs",
"repos_url": "https://api.github.com/users/bones0/repos",
"events_url": "https://api.github.com/users/bones0/events{/privacy}",
"received_events_url": "https://api.github.com/users/bones0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 13
| 2024-12-12T07:49:41
| 2024-12-20T10:24:35
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama 0.5.1 binary distribution is recognising the TESLA M40:
```
Dec 12 07:11:11 bigrig ollama[362206]: time=2024-12-12T07:11:11.731Z level=INFO source=types.go:123 msg="inference compute" id=GPU-c8f87326-45f6-945a-1a1a-63bd9a7fc262 library=cuda variant=v12 compute=8.6 driver=12.2 name="NVIDIA GeForce RTX 3090" total="23.7 GiB" available="23.4 GiB"
Dec 12 07:11:11 bigrig ollama[362206]: time=2024-12-12T07:11:11.731Z level=INFO source=types.go:123 msg="inference compute" id=GPU-603a9272-c602-62ea-4090-51223189bb8f library=cuda variant=v12 compute=7.5 driver=12.2 name="Tesla T4" total="15.6 GiB" available="15.5 GiB"
Dec 12 07:11:11 bigrig ollama[362206]: time=2024-12-12T07:11:11.731Z level=INFO source=types.go:123 msg="inference compute" id=GPU-4fe5252f-aa78-f5f8-958a-5a8ae3ffe9e4 library=cuda variant=v12 compute=7.5 driver=12.2 name="Tesla T4" total="14.6 GiB" available="14.5 GiB"
Dec 12 07:11:11 bigrig ollama[362206]: time=2024-12-12T07:11:11.731Z level=INFO source=types.go:123 msg="inference compute" id=GPU-e6b47121-0d1d-8c63-1ab0-14012d5eb87f library=cuda variant=v12 compute=6.1 driver=12.2 name="Tesla P40" total="23.9 GiB" available="23.7 GiB"
Dec 12 07:11:11 bigrig ollama[362206]: time=2024-12-12T07:11:11.731Z level=INFO source=types.go:123 msg="inference compute" id=GPU-079cdcf9-556e-2f0c-6e6d-042eec929d92 library=cuda variant=v11 compute=5.2 driver=12.2 name="Tesla M40 24GB" total="23.9 GiB" available="23.8 GiB"
Dec 12 07:11:11 bigrig ollama[362206]: time=2024-12-12T07:11:11.731Z level=INFO source=types.go:123 msg="inference compute" id=GPU-41ac58ae-d8b7-afdb-c25f-ca6f09b57999 library=cuda variant=v11 compute=5.2 driver=12.2 name="Tesla M40 24GB" total="23.9 GiB" available="23.8 GiB"
```
But later on, only the other GPUs are used:
```
Dec 12 07:14:01 bigrig ollama[362206]: ggml_cuda_init: found 4 CUDA devices:
Dec 12 07:14:01 bigrig ollama[362206]: Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Dec 12 07:14:01 bigrig ollama[362206]: Device 1: Tesla T4, compute capability 7.5, VMM: yes
Dec 12 07:14:01 bigrig ollama[362206]: Device 2: Tesla T4, compute capability 7.5, VMM: yes
Dec 12 07:14:01 bigrig ollama[362206]: Device 3: Tesla P40, compute capability 6.1, VMM: yes
Dec 12 07:14:01 bigrig ollama[362206]: llm_load_tensors: ggml ctx size = 2.00 MiB
Dec 12 07:14:41 bigrig ollama[362206]: llm_load_tensors: offloading 30 repeating layers to GPU
Dec 12 07:14:41 bigrig ollama[362206]: llm_load_tensors: offloaded 30/61 layers to GPU
Dec 12 07:14:41 bigrig ollama[362206]: llm_load_tensors: CUDA_Host buffer size = 62745.29 MiB
Dec 12 07:14:41 bigrig ollama[362206]: llm_load_tensors: CUDA0 buffer size = 21335.35 MiB
Dec 12 07:14:41 bigrig ollama[362206]: llm_load_tensors: CUDA1 buffer size = 12801.21 MiB
Dec 12 07:14:41 bigrig ollama[362206]: llm_load_tensors: CUDA2 buffer size = 10667.68 MiB
Dec 12 07:14:41 bigrig ollama[362206]: llm_load_tensors: CUDA3 buffer size = 19201.82 MiB
```
Compiling ollama from source, explicitly setting the architectures (export CMAKE_CUDA_ARCHITECTURES="50;52;61;70;75;80;90"), does not change the behaviour.
[ollama-logs-M40.txt](https://github.com/user-attachments/files/18107418/ollama-logs-M40.txt)
Running _/usr/local/bin/ollama run deepseek-coder-v2:236b_ for testing. This model should be big enough to cause ollama to fill all the GPUs. But it does not do that.

I already tried to invoke different CUDA-Versions by update-alternatives, but to no avail.
BTW: _export CUDA_VISIBLE_DEVICES=4,5_ does not have any effect
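One possible reason the export has no effect: if the server runs as a systemd service, a variable exported in an interactive shell never reaches it. A sketch of a systemd drop-in override (the drop-in path is a common convention, not taken from this report):

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="CUDA_VISIBLE_DEVICES=4,5"
```

followed by `systemctl daemon-reload && systemctl restart ollama` so the service picks up the new environment.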
I have a self-compiled llama.cpp, build 2749, which does use the M40. Since its output is nearly identical to the one in the ollama log (which recognizes only 4 CUDA devices), I suspect the problem is somehow related to the llama.cpp shipped with ollama 0.5.1. This is the output in question:
```
Log start
main: build = 2749 (928e0b70)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed = 1733989418
...
ggml_cuda_init: found 6 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: Tesla T4, compute capability 7.5, VMM: yes
Device 2: Tesla T4, compute capability 7.5, VMM: yes
Device 3: Tesla P40, compute capability 6.1, VMM: yes
Device 4: Tesla M40 24GB, compute capability 5.2, VMM: yes
Device 5: Tesla M40 24GB, compute capability 5.2, VMM: yes
```
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8066/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4734
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4734/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4734/comments
|
https://api.github.com/repos/ollama/ollama/issues/4734/events
|
https://github.com/ollama/ollama/pull/4734
| 2,326,710,810
|
PR_kwDOJ0Z1Ps5xETOX
| 4,734
|
partial offloading: allow flash attention and disable mmap
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-30T23:39:10
| 2024-05-30T23:58:02
| 2024-05-30T23:58:02
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4734",
"html_url": "https://github.com/ollama/ollama/pull/4734",
"diff_url": "https://github.com/ollama/ollama/pull/4734.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4734.patch",
"merged_at": "2024-05-30T23:58:01"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4734/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5156
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5156/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5156/comments
|
https://api.github.com/repos/ollama/ollama/issues/5156/events
|
https://github.com/ollama/ollama/issues/5156
| 2,363,387,235
|
I_kwDOJ0Z1Ps6M3m1j
| 5,156
|
Set the encoding for API responses
|
{
"login": "santclear",
"id": 1068127,
"node_id": "MDQ6VXNlcjEwNjgxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1068127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santclear",
"html_url": "https://github.com/santclear",
"followers_url": "https://api.github.com/users/santclear/followers",
"following_url": "https://api.github.com/users/santclear/following{/other_user}",
"gists_url": "https://api.github.com/users/santclear/gists{/gist_id}",
"starred_url": "https://api.github.com/users/santclear/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santclear/subscriptions",
"organizations_url": "https://api.github.com/users/santclear/orgs",
"repos_url": "https://api.github.com/users/santclear/repos",
"events_url": "https://api.github.com/users/santclear/events{/privacy}",
"received_events_url": "https://api.github.com/users/santclear/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-20T02:38:14
| 2024-11-06T01:19:12
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is it possible to set the encoding for API responses? I have integrated Ollama into a platform where it is not possible to change the encoding from 'UTF-16 LE BOM'. Therefore, I would like Ollama to respond to API calls in this encoding.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5156/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5156/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7225
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7225/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7225/comments
|
https://api.github.com/repos/ollama/ollama/issues/7225/events
|
https://github.com/ollama/ollama/issues/7225
| 2,591,485,913
|
I_kwDOJ0Z1Ps6adu_Z
| 7,225
|
ollama parallel
|
{
"login": "jamalibrahimsec",
"id": 185197390,
"node_id": "U_kgDOCwnjTg",
"avatar_url": "https://avatars.githubusercontent.com/u/185197390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamalibrahimsec",
"html_url": "https://github.com/jamalibrahimsec",
"followers_url": "https://api.github.com/users/jamalibrahimsec/followers",
"following_url": "https://api.github.com/users/jamalibrahimsec/following{/other_user}",
"gists_url": "https://api.github.com/users/jamalibrahimsec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamalibrahimsec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamalibrahimsec/subscriptions",
"organizations_url": "https://api.github.com/users/jamalibrahimsec/orgs",
"repos_url": "https://api.github.com/users/jamalibrahimsec/repos",
"events_url": "https://api.github.com/users/jamalibrahimsec/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamalibrahimsec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 19
| 2024-10-16T10:53:42
| 2024-11-12T16:35:15
| 2024-10-17T18:42:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello
I am trying to run Ollama on an instance that has 40 CPU cores.
What I understood is that the max-loaded-models environment variable permits doing that, but there was no clear explanation of how it works on the CPU (knowing that I have enough RAM).
If you can explain to me how Ollama manages that on the CPU, it would be perfect.
Thanks
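As a sketch of the knobs involved (assuming a systemd-managed Linux install; the values are illustrative): Ollama reads `OLLAMA_MAX_LOADED_MODELS` and `OLLAMA_NUM_PARALLEL` from the server's environment, and on a CPU-only host every loaded model and parallel request shares the same cores, so raising these trades per-request speed for throughput.

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_MAX_LOADED_MODELS=2"
Environment="OLLAMA_NUM_PARALLEL=4"
```

Apply with `systemctl daemon-reload && systemctl restart ollama`.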
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7225/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6210
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6210/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6210/comments
|
https://api.github.com/repos/ollama/ollama/issues/6210/events
|
https://github.com/ollama/ollama/issues/6210
| 2,451,754,458
|
I_kwDOJ0Z1Ps6SIs3a
| 6,210
|
[question] Do you plan to upstream patches for llama.cpp?
|
{
"login": "yurivict",
"id": 271906,
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yurivict",
"html_url": "https://github.com/yurivict",
"followers_url": "https://api.github.com/users/yurivict/followers",
"following_url": "https://api.github.com/users/yurivict/following{/other_user}",
"gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yurivict/subscriptions",
"organizations_url": "https://api.github.com/users/yurivict/orgs",
"repos_url": "https://api.github.com/users/yurivict/repos",
"events_url": "https://api.github.com/users/yurivict/events{/privacy}",
"received_events_url": "https://api.github.com/users/yurivict/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-08-06T21:57:06
| 2024-08-06T21:57:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
We would like to use the packaged version of llama.cpp (to simplify packaging on FreeBSD) but patches need to be upstreamed first.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6210/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6210/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3031
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3031/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3031/comments
|
https://api.github.com/repos/ollama/ollama/issues/3031/events
|
https://github.com/ollama/ollama/issues/3031
| 2,177,465,161
|
I_kwDOJ0Z1Ps6ByXtJ
| 3,031
|
Unstopped empty lines when I say "hi" to "vicuna" model (temperature: 0.0)
|
{
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/users/eliranwong/followers",
"following_url": "https://api.github.com/users/eliranwong/following{/other_user}",
"gists_url": "https://api.github.com/users/eliranwong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliranwong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliranwong/subscriptions",
"organizations_url": "https://api.github.com/users/eliranwong/orgs",
"repos_url": "https://api.github.com/users/eliranwong/repos",
"events_url": "https://api.github.com/users/eliranwong/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliranwong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-10T00:45:49
| 2024-03-11T20:31:23
| 2024-03-11T20:31:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Unstopped empty lines when I say "hi" to "vicuna" model (temperature: 0.0)
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3031/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1908
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1908/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1908/comments
|
https://api.github.com/repos/ollama/ollama/issues/1908/events
|
https://github.com/ollama/ollama/pull/1908
| 2,075,180,560
|
PR_kwDOJ0Z1Ps5jumE1
| 1,908
|
Rebase
|
{
"login": "kris-hansen",
"id": 8484582,
"node_id": "MDQ6VXNlcjg0ODQ1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8484582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kris-hansen",
"html_url": "https://github.com/kris-hansen",
"followers_url": "https://api.github.com/users/kris-hansen/followers",
"following_url": "https://api.github.com/users/kris-hansen/following{/other_user}",
"gists_url": "https://api.github.com/users/kris-hansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kris-hansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kris-hansen/subscriptions",
"organizations_url": "https://api.github.com/users/kris-hansen/orgs",
"repos_url": "https://api.github.com/users/kris-hansen/repos",
"events_url": "https://api.github.com/users/kris-hansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/kris-hansen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-10T20:48:34
| 2024-01-25T17:36:27
| 2024-01-10T20:48:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1908",
"html_url": "https://github.com/ollama/ollama/pull/1908",
"diff_url": "https://github.com/ollama/ollama/pull/1908.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1908.patch",
"merged_at": null
}
|
- Rebase from upstream
- That is all
|
{
"login": "kris-hansen",
"id": 8484582,
"node_id": "MDQ6VXNlcjg0ODQ1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8484582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kris-hansen",
"html_url": "https://github.com/kris-hansen",
"followers_url": "https://api.github.com/users/kris-hansen/followers",
"following_url": "https://api.github.com/users/kris-hansen/following{/other_user}",
"gists_url": "https://api.github.com/users/kris-hansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kris-hansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kris-hansen/subscriptions",
"organizations_url": "https://api.github.com/users/kris-hansen/orgs",
"repos_url": "https://api.github.com/users/kris-hansen/repos",
"events_url": "https://api.github.com/users/kris-hansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/kris-hansen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1908/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8459
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8459/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8459/comments
|
https://api.github.com/repos/ollama/ollama/issues/8459/events
|
https://github.com/ollama/ollama/issues/8459
| 2,793,577,758
|
I_kwDOJ0Z1Ps6mgp0e
| 8,459
|
MLX Community models for Macs
|
{
"login": "VistritPandey",
"id": 56611775,
"node_id": "MDQ6VXNlcjU2NjExNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/56611775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VistritPandey",
"html_url": "https://github.com/VistritPandey",
"followers_url": "https://api.github.com/users/VistritPandey/followers",
"following_url": "https://api.github.com/users/VistritPandey/following{/other_user}",
"gists_url": "https://api.github.com/users/VistritPandey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VistritPandey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VistritPandey/subscriptions",
"organizations_url": "https://api.github.com/users/VistritPandey/orgs",
"repos_url": "https://api.github.com/users/VistritPandey/repos",
"events_url": "https://api.github.com/users/VistritPandey/events{/privacy}",
"received_events_url": "https://api.github.com/users/VistritPandey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2025-01-16T19:18:47
| 2025-01-19T07:37:58
| 2025-01-19T07:37:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
MLX Community has many models specifically for Macs, which are faster and better than their normal/OG counterparts.
Link: https://huggingface.co/mlx-community
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8459/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/640
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/640/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/640/comments
|
https://api.github.com/repos/ollama/ollama/issues/640/events
|
https://github.com/ollama/ollama/pull/640
| 1,918,318,031
|
PR_kwDOJ0Z1Ps5bfK0O
| 640
|
remove list from interactive mode
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-28T21:48:54
| 2023-10-20T16:44:25
| 2023-09-28T21:49:41
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/640",
"html_url": "https://github.com/ollama/ollama/pull/640",
"diff_url": "https://github.com/ollama/ollama/pull/640.diff",
"patch_url": "https://github.com/ollama/ollama/pull/640.patch",
"merged_at": null
}
|
List in interactive mode doesn't make sense since you can't switch models in the REPL
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/640/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7496
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7496/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7496/comments
|
https://api.github.com/repos/ollama/ollama/issues/7496/events
|
https://github.com/ollama/ollama/pull/7496
| 2,633,542,432
|
PR_kwDOJ0Z1Ps6A1kvI
| 7,496
|
CI: fix matrix wiring
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-11-04T18:26:43
| 2024-11-04T18:48:38
| 2024-11-04T18:48:35
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7496",
"html_url": "https://github.com/ollama/ollama/pull/7496",
"diff_url": "https://github.com/ollama/ollama/pull/7496.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7496.patch",
"merged_at": "2024-11-04T18:48:35"
}
|
Matrix strategies can't use env vars, so unwind the prior changes to DRY the definitions out a little.
Fixes release CI error:
```
[Invalid workflow file: .github/workflows/release.yaml#L166](https://github.com/ollama/ollama/actions/runs/11670233094/workflow)
The workflow is not valid. .github/workflows/release.yaml (Line: 166, Col: 22): Unrecognized named-value: 'env'. Located at position 1 within expression: env.CUDA_11_WINDOWS_VER .github/workflows/release.yaml (Line: 167, Col: 18): Unrecognized named-value: 'env'. Located at position 1 within expression: env.CUDA_11_WINDOWS_URL
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7496/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4915
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4915/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4915/comments
|
https://api.github.com/repos/ollama/ollama/issues/4915/events
|
https://github.com/ollama/ollama/issues/4915
| 2,340,902,046
|
I_kwDOJ0Z1Ps6Lh1Se
| 4,915
|
need cogvlm2-llama3-chinese-chat
|
{
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enryteam/followers",
"following_url": "https://api.github.com/users/enryteam/following{/other_user}",
"gists_url": "https://api.github.com/users/enryteam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enryteam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enryteam/subscriptions",
"organizations_url": "https://api.github.com/users/enryteam/orgs",
"repos_url": "https://api.github.com/users/enryteam/repos",
"events_url": "https://api.github.com/users/enryteam/events{/privacy}",
"received_events_url": "https://api.github.com/users/enryteam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-06-07T17:41:11
| 2024-07-20T14:29:35
| 2024-07-20T14:29:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/THUDM/cogvlm2-llama3-chinese-chat-19B
thanks
|
{
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enryteam/followers",
"following_url": "https://api.github.com/users/enryteam/following{/other_user}",
"gists_url": "https://api.github.com/users/enryteam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enryteam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enryteam/subscriptions",
"organizations_url": "https://api.github.com/users/enryteam/orgs",
"repos_url": "https://api.github.com/users/enryteam/repos",
"events_url": "https://api.github.com/users/enryteam/events{/privacy}",
"received_events_url": "https://api.github.com/users/enryteam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4915/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6774
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6774/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6774/comments
|
https://api.github.com/repos/ollama/ollama/issues/6774/events
|
https://github.com/ollama/ollama/issues/6774
| 2,522,209,703
|
I_kwDOJ0Z1Ps6WVd2n
| 6,774
|
Add Tokenizer functionality to API
|
{
"login": "Master-Pr0grammer",
"id": 147747206,
"node_id": "U_kgDOCM5xhg",
"avatar_url": "https://avatars.githubusercontent.com/u/147747206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Master-Pr0grammer",
"html_url": "https://github.com/Master-Pr0grammer",
"followers_url": "https://api.github.com/users/Master-Pr0grammer/followers",
"following_url": "https://api.github.com/users/Master-Pr0grammer/following{/other_user}",
"gists_url": "https://api.github.com/users/Master-Pr0grammer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Master-Pr0grammer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Master-Pr0grammer/subscriptions",
"organizations_url": "https://api.github.com/users/Master-Pr0grammer/orgs",
"repos_url": "https://api.github.com/users/Master-Pr0grammer/repos",
"events_url": "https://api.github.com/users/Master-Pr0grammer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Master-Pr0grammer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 1
| 2024-09-12T12:04:44
| 2024-11-06T00:26:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Having access to the model's tokenizer is extremely useful for counting tokens and managing the context window. In a lot of cases it's essential to get an LLM implementation to work properly. The model already has the tokenizer loaded, and ollama's backend, llama.cpp, already has an interface for the tokenizer, so it shouldn't be that difficult to implement in the API.
Unless this is already a functionality of the API, in which case I'm sorry, but I just didn't see it in the documentation.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6774/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6774/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2193
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2193/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2193/comments
|
https://api.github.com/repos/ollama/ollama/issues/2193/events
|
https://github.com/ollama/ollama/issues/2193
| 2,101,110,822
|
I_kwDOJ0Z1Ps59PGgm
| 2,193
|
:duck: Publish `DuckDB-NSQL-7B` on ollama
|
{
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/followers",
"following_url": "https://api.github.com/users/adriens/following{/other_user}",
"gists_url": "https://api.github.com/users/adriens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adriens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adriens/subscriptions",
"organizations_url": "https://api.github.com/users/adriens/orgs",
"repos_url": "https://api.github.com/users/adriens/repos",
"events_url": "https://api.github.com/users/adriens/events{/privacy}",
"received_events_url": "https://api.github.com/users/adriens/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-01-25T20:22:56
| 2024-01-26T22:47:33
| 2024-01-25T22:46:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# :grey_question: About
[`DuckDB-NSQL-7B`](https://motherduck.com/blog/duckdb-text2sql-llm/), an LLM for [duckdb](https://github.com/duckdb/duckdb), has been released.
It would be very useful to add it to `ollama` so anyone could build new experiences on top of it.
# :bookmark: Resources
- [AI That Quacks: Introducing DuckDB-NSQL-7B, A LLM for DuckDB](https://motherduck.com/blog/duckdb-text2sql-llm/)
- [Demo on HuggingFace](https://huggingface.co/spaces/motherduckdb/DuckDB-NSQL-7B)
- [`motherduckdb/DuckDB-NSQL-7B-v0.1-GGUF`](https://huggingface.co/motherduckdb/DuckDB-NSQL-7B-v0.1-GGUF)
- [:octocat: `github.com/NumbersStationAI/DuckDB-NSQL`](https://github.com/NumbersStationAI/DuckDB-NSQL)
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2193/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7193
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7193/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7193/comments
|
https://api.github.com/repos/ollama/ollama/issues/7193/events
|
https://github.com/ollama/ollama/pull/7193
| 2,584,144,445
|
PR_kwDOJ0Z1Ps5-dETZ
| 7,193
|
Add missing BF16 tensor type.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-10-13T17:46:23
| 2024-10-15T00:06:35
| 2024-10-15T00:06:35
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7193",
"html_url": "https://github.com/ollama/ollama/pull/7193",
"diff_url": "https://github.com/ollama/ollama/pull/7193.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7193.patch",
"merged_at": "2024-10-15T00:06:35"
}
|
Models with BF16 tensors are not imported because the typeSize is 0.
Fixes: https://github.com/ollama/ollama/issues/7188
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7193/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/7193/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2502
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2502/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2502/comments
|
https://api.github.com/repos/ollama/ollama/issues/2502/events
|
https://github.com/ollama/ollama/issues/2502
| 2,135,209,522
|
I_kwDOJ0Z1Ps5_RLYy
| 2,502
|
Ollama fails to detect gpu on prerelease 0.1.25
|
{
"login": "abysssol",
"id": 76763323,
"node_id": "MDQ6VXNlcjc2NzYzMzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/76763323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abysssol",
"html_url": "https://github.com/abysssol",
"followers_url": "https://api.github.com/users/abysssol/followers",
"following_url": "https://api.github.com/users/abysssol/following{/other_user}",
"gists_url": "https://api.github.com/users/abysssol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abysssol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abysssol/subscriptions",
"organizations_url": "https://api.github.com/users/abysssol/orgs",
"repos_url": "https://api.github.com/users/abysssol/repos",
"events_url": "https://api.github.com/users/abysssol/events{/privacy}",
"received_events_url": "https://api.github.com/users/abysssol/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 22
| 2024-02-14T21:10:01
| 2024-05-11T10:43:11
| 2024-02-17T01:23:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm working to update the ollama package in [nixpkgs](https://github.com/NixOS/nixpkgs). Release 0.1.24 works as expected ([nix source](https://github.com/abysssol/nixpkgs/tree/update-ollama-0.1.24), [build here](https://github.com/abysssol/ollama-flake/tree/1.4.1)), but the new prerelease 0.1.25 fails to detect the GPU ([nix source](https://github.com/abysssol/nixpkgs/tree/update-ollama-0.1.25), [build here](https://github.com/abysssol/ollama-flake/tree/ollama-0.1.25)). It appears to build correctly, and it detects the GPU management library `librocm_smi64.so.5.0`, but then fails to use it, logging `no GPU detected`. I don't know whether this is a ROCm-specific problem or whether CUDA is affected too, since I only have an AMD GPU.
Unfortunately, I'm not familiar enough with ollama to have the slightest clue as to what could be going wrong. Hopefully these logs with OLLAMA_DEBUG=1 are helpful, though.
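For context on what the logs below show: detection works by globbing a list of candidate paths for the management library, then loading each hit and calling its init function. A minimal Python sketch of the search step follows; `find_mgmt_library` is a hypothetical helper for illustration, not ollama's actual implementation (which is Go/cgo in `gpu.go`), and the patterns shown are abbreviated from the log.

```python
import glob

def find_mgmt_library(patterns):
    """Return candidate library files matching any glob pattern, in search order."""
    found = []
    for pattern in patterns:
        found.extend(sorted(glob.glob(pattern)))
    return found

# Abbreviated search paths, mirroring the 0.1.25 log below (assumed, not exhaustive):
patterns = [
    "/opt/rocm*/lib*/librocm_smi64.so*",
    "/usr/lib*/librocm_smi64.so*",
]
candidates = find_mgmt_library(patterns)
# Each candidate is then dlopen'd and rsmi_init() is called; detection
# succeeds on the first library whose init returns 0. In the 0.1.25 log,
# the library is found and rsmi_init succeeds, yet "no GPU detected" follows.
```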
<details><summary>
#### The server log from [0.1.25](https://github.com/abysssol/ollama-flake/tree/ollama-0.1.25); and [download log](https://github.com/ollama/ollama/files/14314105/debug-0.1.25.log).
</summary>
```
time=2024-02-16T12:17:46.124-05:00 level=INFO source=images.go:706 msg="total blobs: 10"
time=2024-02-16T12:17:46.124-05:00 level=INFO source=images.go:713 msg="total unused blobs removed: 0"
time=2024-02-16T12:17:46.125-05:00 level=INFO source=routes.go:1014 msg="Listening on 127.0.0.1:11434 (version 0.1.25)"
time=2024-02-16T12:17:46.125-05:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-16T12:17:49.132-05:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cuda_v12 cpu_avx rocm cpu cpu_avx2]"
time=2024-02-16T12:17:49.133-05:00 level=DEBUG source=payload_common.go:147 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-16T12:17:49.133-05:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/libnvidia-ml.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/libnvidia-ml.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so*]"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06]"
wiring nvidia management library functions in /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06
dlsym: nvmlInit_v2
dlsym: nvmlShutdown
dlsym: nvmlDeviceGetHandleByIndex
dlsym: nvmlDeviceGetMemoryInfo
dlsym: nvmlDeviceGetCount_v2
dlsym: nvmlDeviceGetCudaComputeCapability
dlsym: nvmlSystemGetDriverVersion
dlsym: nvmlDeviceGetName
dlsym: nvmlDeviceGetSerial
dlsym: nvmlDeviceGetVbiosVersion
dlsym: nvmlDeviceGetBoardPartNumber
dlsym: nvmlDeviceGetBrand
nvmlInit_v2 err: 9
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:320 msg="Unable to load CUDA management library /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06: nvml vram init failure: 9"
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-16T12:17:49.151-05:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/librocm_smi64.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/librocm_smi64.so*]"
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0]"
wiring rocm management library functions in /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0
dlsym: rsmi_init
dlsym: rsmi_shut_down
dlsym: rsmi_dev_memory_total_get
dlsym: rsmi_dev_memory_usage_get
dlsym: rsmi_version_get
dlsym: rsmi_num_monitor_devices
dlsym: rsmi_dev_id_get
dlsym: rsmi_dev_name_get
dlsym: rsmi_dev_brand_get
dlsym: rsmi_dev_vendor_name_get
dlsym: rsmi_dev_vram_vendor_get
dlsym: rsmi_dev_serial_number_get
dlsym: rsmi_dev_subsystem_name_get
dlsym: rsmi_dev_vbios_version_get
time=2024-02-16T12:17:49.153-05:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-16T12:17:49.153-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:49.153-05:00 level=INFO source=routes.go:1037 msg="no GPU detected"
[GIN] 2024/02/16 - 12:17:51 | 200 | 23.353µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/02/16 - 12:17:51 | 200 | 314.341µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/02/16 - 12:17:51 | 200 | 166.067µs | 127.0.0.1 | POST "/api/show"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-02-16T12:17:51.163-05:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [/tmp/ollama81314947/cpu_avx2/libext_server.so]"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama81314947/cpu_avx2/libext_server.so"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1708103871] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
llama_model_loader: loaded meta data with 24 key-value pairs and 995 tensors from /home/abysssol/.ollama/models/blobs/sha256:097a1ff4445ccc1e7668f70b9de3caa60cfc2a8e2cb9da3505b13854f1cfe20f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = cognitivecomputations
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.expert_count u32 = 8
llama_model_loader: - kv 10: llama.expert_used_count u32 = 2
llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 12: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 13: general.file_type u32 = 2
llama_model_loader: - kv 14: tokenizer.ggml.model str = llama
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32002] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32002] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32002] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 32000
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 32 tensors
llama_model_loader: - type q4_0: 833 tensors
llama_model_loader: - type q8_0: 64 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32002
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 8
llm_load_print_meta: n_expert_used = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 24.62 GiB (4.53 BPW)
llm_load_print_meta: general.name = cognitivecomputations
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 32000 '<|im_end|>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.38 MiB
llm_load_tensors: CPU buffer size = 25215.88 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU input buffer size = 13.01 MiB
llama_new_context_with_model: CPU compute buffer size = 180.03 MiB
llama_new_context_with_model: graph splits (measure): 1
[1708103872] warming up the model with an empty run
[1708103872] Available slots:
[1708103872] -> Slot 0 - max context: 2048
time=2024-02-16T12:17:52.570-05:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[1708103872] llama server main loop starting
[1708103872] all slots are idle and system prompt is empty, clear the KV cache
time=2024-02-16T12:17:52.571-05:00 level=DEBUG source=prompt.go:175 msg="prompt now fits in context window" required=27 window=2048
[GIN] 2024/02/16 - 12:17:52 | 200 | 1.483697709s | 127.0.0.1 | POST "/api/chat"
[1708103881]
initiating shutdown - draining remaining tasks...
[1708103881]
llama server shutting down
[1708103881] llama server shutdown complete
```
</details>
<details><summary>
#### The server log from [0.1.24](https://github.com/abysssol/ollama-flake/tree/1.4.0); and [download log](https://github.com/ollama/ollama/files/14314412/debug-0.1.24.log).
</summary>
```
time=2024-02-16T12:59:29.210-05:00 level=INFO source=images.go:863 msg="total blobs: 10"
time=2024-02-16T12:59:29.210-05:00 level=INFO source=images.go:870 msg="total unused blobs removed: 0"
time=2024-02-16T12:59:29.211-05:00 level=INFO source=routes.go:999 msg="Listening on 127.0.0.1:11434 (version 0.1.24)"
time=2024-02-16T12:59:29.211-05:00 level=INFO source=payload_common.go:106 msg="Extracting dynamic libraries..."
time=2024-02-16T12:59:32.207-05:00 level=INFO source=payload_common.go:145 msg="Dynamic LLM libraries [cpu cpu_avx rocm cpu_avx2 cuda_v12]"
time=2024-02-16T12:59:32.207-05:00 level=DEBUG source=payload_common.go:146 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-16T12:59:32.207-05:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-16T12:59:32.207-05:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-16T12:59:32.207-05:00 level=DEBUG source=gpu.go:260 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/libnvidia-ml.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/libnvidia-ml.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so*]"
time=2024-02-16T12:59:32.208-05:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [/nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06]"
wiring nvidia management library functions in /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06
dlsym: nvmlInit_v2
dlsym: nvmlShutdown
dlsym: nvmlDeviceGetHandleByIndex
dlsym: nvmlDeviceGetMemoryInfo
dlsym: nvmlDeviceGetCount_v2
dlsym: nvmlDeviceGetCudaComputeCapability
dlsym: nvmlSystemGetDriverVersion
dlsym: nvmlDeviceGetName
dlsym: nvmlDeviceGetSerial
dlsym: nvmlDeviceGetVbiosVersion
dlsym: nvmlDeviceGetBoardPartNumber
dlsym: nvmlDeviceGetBrand
nvmlInit_v2 err: 9
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:300 msg="Unable to load CUDA management library /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06: nvml vram init failure: 9"
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-16T12:59:32.225-05:00 level=DEBUG source=gpu.go:260 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/librocm_smi64.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/librocm_smi64.so*]"
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [/nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0]"
wiring rocm management library functions in /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0
dlsym: rsmi_init
dlsym: rsmi_shut_down
dlsym: rsmi_dev_memory_total_get
dlsym: rsmi_dev_memory_usage_get
dlsym: rsmi_version_get
dlsym: rsmi_num_monitor_devices
dlsym: rsmi_dev_id_get
dlsym: rsmi_dev_name_get
dlsym: rsmi_dev_brand_get
dlsym: rsmi_dev_vendor_name_get
dlsym: rsmi_dev_vram_vendor_get
dlsym: rsmi_dev_serial_number_get
dlsym: rsmi_dev_subsystem_name_get
dlsym: rsmi_dev_vbios_version_get
time=2024-02-16T12:59:32.227-05:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-16T12:59:32.227-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:32.228-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:32.228-05:00 level=DEBUG source=gpu.go:231 msg="rocm detected 2 devices with 12975M available memory"
[GIN] 2024/02/16 - 12:59:35 | 200 | 22.091µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/02/16 - 12:59:35 | 200 | 372.13µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/02/16 - 12:59:35 | 200 | 471.904µs | 127.0.0.1 | POST "/api/show"
time=2024-02-16T12:59:35.210-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:35.211-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:35.211-05:00 level=DEBUG source=gpu.go:231 msg="rocm detected 2 devices with 12975M available memory"
time=2024-02-16T12:59:35.211-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:35.211-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:35.211-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:59:35.241-05:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1279410782/rocm/libext_server.so"
time=2024-02-16T12:59:35.241-05:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1708106375] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
[1708106375] Performing pre-initialization of GPU
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
Device 0: AMD Radeon RX 6950 XT, compute capability 10.3, VMM: no
llama_model_loader: loaded meta data with 24 key-value pairs and 995 tensors from /home/abysssol/.ollama/models/blobs/sha256:097a1ff4445ccc1e7668f70b9de3caa60cfc2a8e2cb9da3505b13854f1cfe20f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = cognitivecomputations
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.expert_count u32 = 8
llama_model_loader: - kv 10: llama.expert_used_count u32 = 2
llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 12: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 13: general.file_type u32 = 2
llama_model_loader: - kv 14: tokenizer.ggml.model str = llama
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32002] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32002] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32002] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 32000
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 32 tensors
llama_model_loader: - type q4_0: 833 tensors
llama_model_loader: - type q8_0: 64 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32002
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 8
llm_load_print_meta: n_expert_used = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 24.62 GiB (4.53 BPW)
llm_load_print_meta: general.name = cognitivecomputations
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 32000 '<|im_end|>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.76 MiB
llm_load_tensors: offloading 16 repeating layers to GPU
llm_load_tensors: offloaded 16/33 layers to GPU
llm_load_tensors: ROCm0 buffer size = 12521.50 MiB
llm_load_tensors: CPU buffer size = 25215.88 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: ROCm0 KV buffer size = 128.00 MiB
llama_kv_cache_init: ROCm_Host KV buffer size = 128.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: ROCm_Host input buffer size = 12.01 MiB
llama_new_context_with_model: ROCm0 compute buffer size = 211.21 MiB
llama_new_context_with_model: ROCm_Host compute buffer size = 198.03 MiB
llama_new_context_with_model: graph splits (measure): 5
[1708106378] warming up the model with an empty run
[1708106378] Available slots:
[1708106378] -> Slot 0 - max context: 2048
time=2024-02-16T12:59:38.397-05:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[1708106378] llama server main loop starting
[1708106378] all slots are idle and system prompt is empty, clear the KV cache
time=2024-02-16T12:59:38.397-05:00 level=DEBUG source=routes.go:1165 msg="chat handler" prompt="<|im_start|>system\nYou are Dolphin, a helpful AI assistant.\n<|im_end|>\n<|im_start|>user\n<|im_end|>\n<|im_start|>assistant\n"
[1708106378] slot 0 is processing [task id: 0]
[1708106378] slot 0 : in cache: 0 tokens | to process: 27 tokens
[1708106378] slot 0 : kv cache rm - [0, end)
# ... removed ...
[1708106422] print_timings: prompt eval time = 893.79 ms / 27 tokens ( 33.10 ms per token, 30.21 tokens per second)
[1708106422] print_timings: eval time = 42755.44 ms / 437 runs ( 97.84 ms per token, 10.22 tokens per second)
[1708106422] print_timings: total time = 43649.23 ms
[1708106422] slot 0 released (464 tokens in cache)
[1708106422] next result cancel on stop
[1708106422] next result removing waiting task ID: 0
[GIN] 2024/02/16 - 13:00:22 | 200 | 46.916460155s | 127.0.0.1 | POST "/api/chat"
[1708106427]
initiating shutdown - draining remaining tasks...
[1708106427]
llama server shutting down
[1708106427] llama server shutdown complete
```
</details>
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2502/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7917
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7917/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7917/comments
|
https://api.github.com/repos/ollama/ollama/issues/7917/events
|
https://github.com/ollama/ollama/issues/7917
| 2,715,108,025
|
I_kwDOJ0Z1Ps6h1UK5
| 7,917
|
option to change the model loading device (CPU/GPU)
|
{
"login": "ansilmbabl",
"id": 86063895,
"node_id": "MDQ6VXNlcjg2MDYzODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/86063895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ansilmbabl",
"html_url": "https://github.com/ansilmbabl",
"followers_url": "https://api.github.com/users/ansilmbabl/followers",
"following_url": "https://api.github.com/users/ansilmbabl/following{/other_user}",
"gists_url": "https://api.github.com/users/ansilmbabl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ansilmbabl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ansilmbabl/subscriptions",
"organizations_url": "https://api.github.com/users/ansilmbabl/orgs",
"repos_url": "https://api.github.com/users/ansilmbabl/repos",
"events_url": "https://api.github.com/users/ansilmbabl/events{/privacy}",
"received_events_url": "https://api.github.com/users/ansilmbabl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-12-03T13:58:18
| 2024-12-14T15:38:53
| 2024-12-14T15:38:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be great if we could specify the device (CPU/GPU) on which a model should be loaded.
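(For readers hitting this row: the request can effectively be met today with the `num_gpu` option, which controls how many layers are offloaded to the GPU; 0 forces CPU-only loading. The sketch below builds such a request against a local server at the default port; the model name is a placeholder, and the final call is left commented since it needs a running server.)

```python
import json
import urllib.request

# Force CPU-only loading by offloading zero layers to the GPU.
payload = {
    "model": "llama2",           # placeholder model name
    "prompt": "Hello",
    "options": {"num_gpu": 0},   # 0 => no layers on GPU, i.e. CPU only
}
req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment with a running ollama server
```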
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7917/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7218
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7218/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7218/comments
|
https://api.github.com/repos/ollama/ollama/issues/7218/events
|
https://github.com/ollama/ollama/pull/7218
| 2,590,338,878
|
PR_kwDOJ0Z1Ps5-wP0f
| 7,218
|
Update README.md
|
{
"login": "anan1213095357",
"id": 43770875,
"node_id": "MDQ6VXNlcjQzNzcwODc1",
"avatar_url": "https://avatars.githubusercontent.com/u/43770875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anan1213095357",
"html_url": "https://github.com/anan1213095357",
"followers_url": "https://api.github.com/users/anan1213095357/followers",
"following_url": "https://api.github.com/users/anan1213095357/following{/other_user}",
"gists_url": "https://api.github.com/users/anan1213095357/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anan1213095357/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anan1213095357/subscriptions",
"organizations_url": "https://api.github.com/users/anan1213095357/orgs",
"repos_url": "https://api.github.com/users/anan1213095357/repos",
"events_url": "https://api.github.com/users/anan1213095357/events{/privacy}",
"received_events_url": "https://api.github.com/users/anan1213095357/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-16T01:27:27
| 2024-10-16T05:23:52
| 2024-10-16T05:23:52
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7218",
"html_url": "https://github.com/ollama/ollama/pull/7218",
"diff_url": "https://github.com/ollama/ollama/pull/7218.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7218.patch",
"merged_at": null
}
|
Modern and easy-to-use multi-platform client for Ollama
|
{
"login": "anan1213095357",
"id": 43770875,
"node_id": "MDQ6VXNlcjQzNzcwODc1",
"avatar_url": "https://avatars.githubusercontent.com/u/43770875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anan1213095357",
"html_url": "https://github.com/anan1213095357",
"followers_url": "https://api.github.com/users/anan1213095357/followers",
"following_url": "https://api.github.com/users/anan1213095357/following{/other_user}",
"gists_url": "https://api.github.com/users/anan1213095357/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anan1213095357/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anan1213095357/subscriptions",
"organizations_url": "https://api.github.com/users/anan1213095357/orgs",
"repos_url": "https://api.github.com/users/anan1213095357/repos",
"events_url": "https://api.github.com/users/anan1213095357/events{/privacy}",
"received_events_url": "https://api.github.com/users/anan1213095357/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7218/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5266
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5266/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5266/comments
|
https://api.github.com/repos/ollama/ollama/issues/5266/events
|
https://github.com/ollama/ollama/issues/5266
| 2,371,568,361
|
I_kwDOJ0Z1Ps6NW0Lp
| 5,266
|
On Windows 11, ollama_llama_server.exe runs in "efficiency mode", causing very slow responses
|
{
"login": "fengbangyao",
"id": 135579315,
"node_id": "U_kgDOCBTGsw",
"avatar_url": "https://avatars.githubusercontent.com/u/135579315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fengbangyao",
"html_url": "https://github.com/fengbangyao",
"followers_url": "https://api.github.com/users/fengbangyao/followers",
"following_url": "https://api.github.com/users/fengbangyao/following{/other_user}",
"gists_url": "https://api.github.com/users/fengbangyao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fengbangyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fengbangyao/subscriptions",
"organizations_url": "https://api.github.com/users/fengbangyao/orgs",
"repos_url": "https://api.github.com/users/fengbangyao/repos",
"events_url": "https://api.github.com/users/fengbangyao/events{/privacy}",
"received_events_url": "https://api.github.com/users/fengbangyao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-06-25T03:40:24
| 2024-07-05T20:14:09
| 2024-07-05T20:14:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On Windows 11 with a 13th-gen Intel CPU (reproduced on an i5-13490F and an i5-13600; it seems related to the hybrid P-core/E-core design, and does not reproduce on CPUs without E-cores), after submitting a prompt Ollama sits at around 40% CPU usage but either never answers or produces roughly one character every few minutes. After disabling efficiency mode with Process Lasso, it responds normally. Could an environment variable be added to control whether efficiency mode is used?
### OS
Windows
### GPU
Other
### CPU
Intel
### Ollama version
0.1.45.0 and earlier
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5266/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1354
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1354/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1354/comments
|
https://api.github.com/repos/ollama/ollama/issues/1354/events
|
https://github.com/ollama/ollama/issues/1354
| 2,022,254,665
|
I_kwDOJ0Z1Ps54iShJ
| 1,354
|
Llama 2 is listed as open source
|
{
"login": "raphj",
"id": 3817365,
"node_id": "MDQ6VXNlcjM4MTczNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3817365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raphj",
"html_url": "https://github.com/raphj",
"followers_url": "https://api.github.com/users/raphj/followers",
"following_url": "https://api.github.com/users/raphj/following{/other_user}",
"gists_url": "https://api.github.com/users/raphj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raphj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raphj/subscriptions",
"organizations_url": "https://api.github.com/users/raphj/orgs",
"repos_url": "https://api.github.com/users/raphj/repos",
"events_url": "https://api.github.com/users/raphj/events{/privacy}",
"received_events_url": "https://api.github.com/users/raphj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-12-02T23:37:36
| 2024-02-20T07:21:00
| 2024-02-20T01:15:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
The readme says "Here are some example open-source models that can be downloaded:" and lists Llama 2. But Llama 2 is not open source, notably because its license forbids use by products with more than 700 million monthly active users.
You might want to phrase this differently. Ideas:
- remove "open source" from this sentence, and
- possibly add an "Open source?" column, or
- split it into two tables: the first listing open-source models and the second listing non-open-source models.
Cheers!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1354/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1354/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4707
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4707/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4707/comments
|
https://api.github.com/repos/ollama/ollama/issues/4707/events
|
https://github.com/ollama/ollama/pull/4707
| 2,324,038,474
|
PR_kwDOJ0Z1Ps5w7Jmc
| 4,707
|
Draft for Multi-Language Modelfile Creation
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-29T18:59:42
| 2024-07-11T20:07:17
| 2024-07-11T20:07:17
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4707",
"html_url": "https://github.com/ollama/ollama/pull/4707",
"diff_url": "https://github.com/ollama/ollama/pull/4707.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4707.patch",
"merged_at": null
}
|
Allow support for non-English Modelfile names
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4707/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/719
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/719/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/719/comments
|
https://api.github.com/repos/ollama/ollama/issues/719/events
|
https://github.com/ollama/ollama/issues/719
| 1,930,419,605
|
I_kwDOJ0Z1Ps5zD92V
| 719
|
Question -> Request: Mac acceleration for https://hub.docker.com/r/ollama/ollama
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 14
| 2023-10-06T15:21:25
| 2024-06-28T20:47:30
| 2023-10-19T22:12:15
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ollama continues to be one of the most user-friendly local model serving libraries out there.
https://hub.docker.com/r/ollama/ollama has great instructions for attaining GPU optimizations.
I am wondering, is there a similar optimization attainable for Mac Metal?
From reading around, it _seems_ there isn't, but I thought it was at least worth asking.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/719/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/719/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/846
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/846/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/846/comments
|
https://api.github.com/repos/ollama/ollama/issues/846/events
|
https://github.com/ollama/ollama/issues/846
| 1,952,948,233
|
I_kwDOJ0Z1Ps50Z6AJ
| 846
|
Can't access model information in fresh (botched?) Linux (Ubuntu 22.04 LTS) install
|
{
"login": "TM-hub",
"id": 42901776,
"node_id": "MDQ6VXNlcjQyOTAxNzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/42901776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TM-hub",
"html_url": "https://github.com/TM-hub",
"followers_url": "https://api.github.com/users/TM-hub/followers",
"following_url": "https://api.github.com/users/TM-hub/following{/other_user}",
"gists_url": "https://api.github.com/users/TM-hub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TM-hub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TM-hub/subscriptions",
"organizations_url": "https://api.github.com/users/TM-hub/orgs",
"repos_url": "https://api.github.com/users/TM-hub/repos",
"events_url": "https://api.github.com/users/TM-hub/events{/privacy}",
"received_events_url": "https://api.github.com/users/TM-hub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-19T19:45:46
| 2023-10-19T22:25:50
| 2023-10-19T22:07:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Step 1, run the Linux install script in a terminal window
$ curl https://ollama.ai/install.sh | sh
Step 2, install an Ollama model ***in the same terminal window***
$ ollama run mistral
The model is installed to /usr/share/ollama/.ollama, owned by ollama:ollama.
>>> /show template
Fails with a message like "can't access /home/USER/.ollama"
Step 3, try adding my user to the ollama group to get access to the model info
$ sudo usermod -a -G ollama <user>
Opening a new terminal window, I still can't access /usr/share/ollama, as I can't be added to the group without access to ollama's home directory (/usr/share/ollama).
WORKAROUND
Step 4, edited /etc/passwd to change ollama's home directory to /home/USER.
Models are still installed to /usr/share/ollama/.ollama, but I can now access, e.g., the --template info.
QUESTIONS
Is the ollama home directory supposed to be in /home/USER?
How do I change it from /usr/share/ollama?
The Linux install.sh should warn the user to close the terminal before running ollama for the first time.
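A likely reason step 3 seemed to fail at first: supplementary group changes made with `usermod` only take effect in sessions started afterwards. A minimal Python sketch (my illustration, not part of Ollama) for inspecting which groups the current session actually sees:

```python
import grp
import os

def current_session_groups() -> set:
    """Group names visible to the *current* session.

    usermod -a -G updates /etc/group immediately, but a running shell keeps
    its old group list, so a new login/terminal is needed to pick it up.
    """
    names = set()
    for gid in os.getgroups():
        try:
            names.add(grp.getgrgid(gid).gr_name)
        except KeyError:
            pass  # gid with no /etc/group entry (common in containers)
    return names
```

If "ollama" is missing from this set even after the `usermod`, opening a fresh terminal (or logging out and back in) is what makes it appear.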
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/846/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/361
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/361/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/361/comments
|
https://api.github.com/repos/ollama/ollama/issues/361/events
|
https://github.com/ollama/ollama/issues/361
| 1,853,794,045
|
I_kwDOJ0Z1Ps5ufqb9
| 361
|
`ollama pull` doesn't start mac app if it's not running
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-08-16T19:30:54
| 2023-08-28T15:07:16
| 2023-08-28T15:07:15
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Calling `ollama run` will start the Mac app if it's not running and if the `ollama` binary is contained in `Ollama.app`, but `ollama pull` doesn't seem to do this.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/361/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7360
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7360/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7360/comments
|
https://api.github.com/repos/ollama/ollama/issues/7360/events
|
https://github.com/ollama/ollama/pull/7360
| 2,614,587,294
|
PR_kwDOJ0Z1Ps5_7bwD
| 7,360
|
Be quiet when redirecting output
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-25T16:34:14
| 2024-11-22T16:04:58
| 2024-11-22T16:04:54
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7360",
"html_url": "https://github.com/ollama/ollama/pull/7360",
"diff_url": "https://github.com/ollama/ollama/pull/7360.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7360.patch",
"merged_at": "2024-11-22T16:04:54"
}
|
This avoids emitting the progress indicators to stderr, and the interactive prompts to the output file or pipe. Running "ollama run model > out.txt" now exits immediately, and "echo hello | ollama run model > out.txt" produces zero stderr output and a typical response in out.txt.
Example output from the echo pipe scenario:
```
% cat out.txt
Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
```
Prior to this change, "ollama run model > out.txt" would show the progress spinners, but the prompt went to the file, so you had to type blind to enter prompts and then `/bye` to exit the session; the resulting output file looked something like this:
```
% cat out.txt
>>> hello
Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
>>> /bye
```
Fixes #6120
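The core of the change is checking whether each standard stream is attached to a terminal. A minimal sketch of that idea (in Python for illustration; the actual implementation is in Go):

```python
import io
import sys

def is_terminal(stream) -> bool:
    """True if the stream is attached to an interactive terminal (TTY)."""
    return hasattr(stream, "isatty") and stream.isatty()

# Spinners should go to stderr only when stderr is a TTY; the interactive
# ">>>" prompt should appear only when both stdin and stdout are TTYs.
show_spinner = is_terminal(sys.stderr)
interactive = is_terminal(sys.stdin) and is_terminal(sys.stdout)
```

A redirected file or pipe is never a TTY, which is why both the spinners and the prompt disappear in the `> out.txt` scenarios above.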
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7360/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7360/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5084
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5084/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5084/comments
|
https://api.github.com/repos/ollama/ollama/issues/5084/events
|
https://github.com/ollama/ollama/pull/5084
| 2,355,843,032
|
PR_kwDOJ0Z1Ps5ynTWz
| 5,084
|
Set the default timeout to 600 seconds
|
{
"login": "slavonnet",
"id": 9463626,
"node_id": "MDQ6VXNlcjk0NjM2MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9463626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slavonnet",
"html_url": "https://github.com/slavonnet",
"followers_url": "https://api.github.com/users/slavonnet/followers",
"following_url": "https://api.github.com/users/slavonnet/following{/other_user}",
"gists_url": "https://api.github.com/users/slavonnet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slavonnet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slavonnet/subscriptions",
"organizations_url": "https://api.github.com/users/slavonnet/orgs",
"repos_url": "https://api.github.com/users/slavonnet/repos",
"events_url": "https://api.github.com/users/slavonnet/events{/privacy}",
"received_events_url": "https://api.github.com/users/slavonnet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-06-16T15:38:09
| 2024-11-22T17:55:19
| 2024-11-22T17:55:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5084",
"html_url": "https://github.com/ollama/ollama/pull/5084",
"diff_url": "https://github.com/ollama/ollama/pull/5084.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5084.patch",
"merged_at": null
}
|
Since llama.cpp already uses a default timeout of 600 seconds, we set the same 600-second default here as well.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5084/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1058
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1058/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1058/comments
|
https://api.github.com/repos/ollama/ollama/issues/1058/events
|
https://github.com/ollama/ollama/issues/1058
| 1,985,967,442
|
I_kwDOJ0Z1Ps52X3VS
| 1,058
|
Examples deploy Sagemaker AWS
|
{
"login": "DimIsaev",
"id": 11172642,
"node_id": "MDQ6VXNlcjExMTcyNjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/11172642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DimIsaev",
"html_url": "https://github.com/DimIsaev",
"followers_url": "https://api.github.com/users/DimIsaev/followers",
"following_url": "https://api.github.com/users/DimIsaev/following{/other_user}",
"gists_url": "https://api.github.com/users/DimIsaev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DimIsaev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DimIsaev/subscriptions",
"organizations_url": "https://api.github.com/users/DimIsaev/orgs",
"repos_url": "https://api.github.com/users/DimIsaev/repos",
"events_url": "https://api.github.com/users/DimIsaev/events{/privacy}",
"received_events_url": "https://api.github.com/users/DimIsaev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 3
| 2023-11-09T16:13:49
| 2024-08-25T19:48:49
| 2024-08-25T19:48:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is there an example of deploying a model using Ollama on an AWS SageMaker endpoint?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1058/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7056
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7056/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7056/comments
|
https://api.github.com/repos/ollama/ollama/issues/7056/events
|
https://github.com/ollama/ollama/issues/7056
| 2,558,303,172
|
I_kwDOJ0Z1Ps6YfJvE
| 7,056
|
Undefined variable in this code file: convert/tokenizer_spm.go
|
{
"login": "vignesh1507",
"id": 143084478,
"node_id": "U_kgDOCIdLvg",
"avatar_url": "https://avatars.githubusercontent.com/u/143084478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vignesh1507",
"html_url": "https://github.com/vignesh1507",
"followers_url": "https://api.github.com/users/vignesh1507/followers",
"following_url": "https://api.github.com/users/vignesh1507/following{/other_user}",
"gists_url": "https://api.github.com/users/vignesh1507/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vignesh1507/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vignesh1507/subscriptions",
"organizations_url": "https://api.github.com/users/vignesh1507/orgs",
"repos_url": "https://api.github.com/users/vignesh1507/repos",
"events_url": "https://api.github.com/users/vignesh1507/events{/privacy}",
"received_events_url": "https://api.github.com/users/vignesh1507/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-10-01T06:28:12
| 2024-10-03T15:52:35
| 2024-10-03T15:52:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The code doesn't currently define `tokenTypeUserDefined`, which causes a compilation error.
How to fix?
Add a constant declaration, for example:
`const tokenTypeUserDefined = int32(1)`
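For illustration, a self-contained sketch of such a declaration. The numbering below mirrors llama.cpp's `llama_token_type` enum (where USER_DEFINED is 4) and is an assumption for this sketch, not a quote of ollama's source; check the actual value used in `convert/` before applying it.

```go
package main

import "fmt"

// Token-type constants mirroring llama.cpp's llama_token_type enum.
// These names and values are illustrative assumptions, not ollama's code.
const (
	tokenTypeUndefined   int32 = 0
	tokenTypeNormal      int32 = 1
	tokenTypeUnknown     int32 = 2
	tokenTypeControl     int32 = 3
	tokenTypeUserDefined int32 = 4
	tokenTypeUnused      int32 = 5
	tokenTypeByte        int32 = 6
)

func main() {
	// With the constant declared, references to it compile again.
	fmt.Println(tokenTypeUserDefined)
}
```

Declaring the whole enum block (rather than a single constant) keeps the numbering self-documenting next to the other token types.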
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
3.1
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7056/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4418
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4418/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4418/comments
|
https://api.github.com/repos/ollama/ollama/issues/4418/events
|
https://github.com/ollama/ollama/issues/4418
| 2,294,182,815
|
I_kwDOJ0Z1Ps6IvnOf
| 4,418
|
[Contribution] ZSH Completion script
|
{
"login": "obeone",
"id": 2248719,
"node_id": "MDQ6VXNlcjIyNDg3MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2248719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/obeone",
"html_url": "https://github.com/obeone",
"followers_url": "https://api.github.com/users/obeone/followers",
"following_url": "https://api.github.com/users/obeone/following{/other_user}",
"gists_url": "https://api.github.com/users/obeone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/obeone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/obeone/subscriptions",
"organizations_url": "https://api.github.com/users/obeone/orgs",
"repos_url": "https://api.github.com/users/obeone/repos",
"events_url": "https://api.github.com/users/obeone/events{/privacy}",
"received_events_url": "https://api.github.com/users/obeone/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-14T01:51:34
| 2024-05-14T06:23:03
| 2024-05-14T06:23:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
Here is an `ollama` ZSH completion script. Feel free to add it to the project if you want!
https://gist.github.com/obeone/9313811fd61a7cbb843e0001a4434c58
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4418/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4418/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8423
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8423/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8423/comments
|
https://api.github.com/repos/ollama/ollama/issues/8423/events
|
https://github.com/ollama/ollama/issues/8423
| 2,787,549,208
|
I_kwDOJ0Z1Ps6mJqAY
| 8,423
|
save with OLLAMA_MODELS set doesn't work anymore in 0.5.5
|
{
"login": "sammyf",
"id": 42468608,
"node_id": "MDQ6VXNlcjQyNDY4NjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/42468608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammyf",
"html_url": "https://github.com/sammyf",
"followers_url": "https://api.github.com/users/sammyf/followers",
"following_url": "https://api.github.com/users/sammyf/following{/other_user}",
"gists_url": "https://api.github.com/users/sammyf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammyf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammyf/subscriptions",
"organizations_url": "https://api.github.com/users/sammyf/orgs",
"repos_url": "https://api.github.com/users/sammyf/repos",
"events_url": "https://api.github.com/users/sammyf/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammyf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 13
| 2025-01-14T15:46:45
| 2025-01-30T10:08:57
| 2025-01-15T23:54:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On Archlinux (with the latest updates), Ollama 0.5.5
It worked with the prior version just a few hours ago.
```
$ ollama run llama3.2-abliterated:1b_Q8
> /set parameter num_ctx 8192
> Set parameter 'num_ctx' to '8192'
> >>> /save llama3.2-abliterated:1b_Q8_8k
> error: The model name 'llama3.2-abliterated:1b_Q8_8k' is invalid
> >>> Send a message (/? for help)
Environment :
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_FLASH_ATTENTION=1"
#Environment="OLLAMA_KV_CACHE_TYPE=q4_0"
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_MODELS=/media/GLIMSPANKY/ollama/models"
```
Locale (just in case) : `LANG="en_US.UTF-8"`
I removed the user and group 'ollama' and reinstalled, but that didn't change the output.
Removing the `OLLAMA_MODELS` environment variable fixes it (but then the models obviously go to the wrong drive).
Symlinking `/usr/share/ollama/.ollama/models` to another target directory results in the same error message.
`pull`, `run`, and `create` work fine.
EDIT:
Adding a slash at the end of the path, like `Environment="OLLAMA_MODELS=/media/GLIMSPANKY/ollama/models/"`, didn't help either (but it was worth a try).
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.5
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8423/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2330
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2330/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2330/comments
|
https://api.github.com/repos/ollama/ollama/issues/2330/events
|
https://github.com/ollama/ollama/pull/2330
| 2,115,138,634
|
PR_kwDOJ0Z1Ps5l18-1
| 2,330
|
Add fast server stop
|
{
"login": "alpe",
"id": 28003,
"node_id": "MDQ6VXNlcjI4MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/28003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alpe",
"html_url": "https://github.com/alpe",
"followers_url": "https://api.github.com/users/alpe/followers",
"following_url": "https://api.github.com/users/alpe/following{/other_user}",
"gists_url": "https://api.github.com/users/alpe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alpe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alpe/subscriptions",
"organizations_url": "https://api.github.com/users/alpe/orgs",
"repos_url": "https://api.github.com/users/alpe/repos",
"events_url": "https://api.github.com/users/alpe/events{/privacy}",
"received_events_url": "https://api.github.com/users/alpe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-02T15:04:37
| 2024-05-06T22:52:49
| 2024-05-06T22:52:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2330",
"html_url": "https://github.com/ollama/ollama/pull/2330",
"diff_url": "https://github.com/ollama/ollama/pull/2330.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2330.patch",
"merged_at": null
}
|
Resolves #2052
The first SIGTERM triggers a graceful shutdown; a second one kills the server.
There are no automated tests for this. Steps to reproduce:
in 1st terminal:
```sh
# go build .
./ollama serve
```
in 2nd terminal
```
./ollama run llama2
```
Then start a request that takes a few seconds, e.g. `long response 100 words min`.
While it is running, press `Control-C` twice in terminal 1. The server should exit immediately with code 1.
Also test graceful shutdown with a single `Control-C`: wait for the end of the server response, and the server should exit with code 0.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2330/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6250
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6250/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6250/comments
|
https://api.github.com/repos/ollama/ollama/issues/6250/events
|
https://github.com/ollama/ollama/issues/6250
| 2,454,703,337
|
I_kwDOJ0Z1Ps6ST8zp
| 6,250
|
Running the glm4-9b model, it occasionally replies with GGGGGGG after long conversations
|
{
"login": "MdcGIt",
"id": 26782023,
"node_id": "MDQ6VXNlcjI2NzgyMDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/26782023?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MdcGIt",
"html_url": "https://github.com/MdcGIt",
"followers_url": "https://api.github.com/users/MdcGIt/followers",
"following_url": "https://api.github.com/users/MdcGIt/following{/other_user}",
"gists_url": "https://api.github.com/users/MdcGIt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MdcGIt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MdcGIt/subscriptions",
"organizations_url": "https://api.github.com/users/MdcGIt/orgs",
"repos_url": "https://api.github.com/users/MdcGIt/repos",
"events_url": "https://api.github.com/users/MdcGIt/events{/privacy}",
"received_events_url": "https://api.github.com/users/MdcGIt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-08-08T01:55:27
| 2024-09-30T23:00:00
| 2024-09-30T23:00:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Running the glm4-9b model, it occasionally replies with GGGGGGG after long conversations.
GPU information is as follows:

### OS
Linux
### GPU
Intel
### CPU
Intel
### Ollama version
0.3.0
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6250/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3162
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3162/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3162/comments
|
https://api.github.com/repos/ollama/ollama/issues/3162/events
|
https://github.com/ollama/ollama/issues/3162
| 2,187,848,031
|
I_kwDOJ0Z1Ps6CZ-lf
| 3,162
|
Possibility to remove the "max retries exceeded" limit when downloading models over a slow connection
|
{
"login": "DaRetriever",
"id": 163505097,
"node_id": "U_kgDOCb7jyQ",
"avatar_url": "https://avatars.githubusercontent.com/u/163505097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DaRetriever",
"html_url": "https://github.com/DaRetriever",
"followers_url": "https://api.github.com/users/DaRetriever/followers",
"following_url": "https://api.github.com/users/DaRetriever/following{/other_user}",
"gists_url": "https://api.github.com/users/DaRetriever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DaRetriever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DaRetriever/subscriptions",
"organizations_url": "https://api.github.com/users/DaRetriever/orgs",
"repos_url": "https://api.github.com/users/DaRetriever/repos",
"events_url": "https://api.github.com/users/DaRetriever/events{/privacy}",
"received_events_url": "https://api.github.com/users/DaRetriever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 9
| 2024-03-15T06:56:44
| 2025-01-25T13:54:30
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I'm trying to download Mixtral (26 GB), but every 120 MB an error pops up stating:
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/e9/e9e56e8bb5f0fcd4860675e6837a8f6a94e659f5fa7dce6a1076279336320f2b/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240315%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240315T063326Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=d14a2deeadcc4f625c71535f456b49a6f8915521ddc7352f2f81aa0f4635bb47": net/http: TLS handshake timeout
### How should we solve this?
Wouldn't it be possible to let the user disable the retry limit, or set a higher number of retries?
### What is the impact of not solving this?
I understand most people have fast internet, but at my maximum of 500 Kb/s (hopefully cable internet is on the way) every model is a pain. I can babysit Mistral through the download since it takes a couple of hours, but I can't babysit and relaunch Mixtral for over 2 days...
### Anything else?
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3162/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3162/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3838
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3838/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3838/comments
|
https://api.github.com/repos/ollama/ollama/issues/3838/events
|
https://github.com/ollama/ollama/issues/3838
| 2,257,943,149
|
I_kwDOJ0Z1Ps6GlXpt
| 3,838
|
On Archlinux and AMD Radeon RX 6800S ollama falls back to CPU
|
{
"login": "arael",
"id": 587072,
"node_id": "MDQ6VXNlcjU4NzA3Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/587072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arael",
"html_url": "https://github.com/arael",
"followers_url": "https://api.github.com/users/arael/followers",
"following_url": "https://api.github.com/users/arael/following{/other_user}",
"gists_url": "https://api.github.com/users/arael/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arael/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arael/subscriptions",
"organizations_url": "https://api.github.com/users/arael/orgs",
"repos_url": "https://api.github.com/users/arael/repos",
"events_url": "https://api.github.com/users/arael/events{/privacy}",
"received_events_url": "https://api.github.com/users/arael/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-04-23T04:55:50
| 2024-05-01T18:00:04
| 2024-04-24T16:06:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am not able to use my AMD Radeon RX 6800S with ollama; it falls back to CPU. I have tried both the packaged ollama install and a fresh install with scripts/install.sh from the git repo, with the same result. Please help me.
### Command outputs and logs
Here is the output of rocminfo:
```
(cmd)[~] rocminfo
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD Ryzen 7 6800HS with Radeon Graphics
Uuid: CPU-XX
Marketing Name: AMD Ryzen 7 6800HS with Radeon Graphics
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 4785
BDFID: 0
Internal Node ID: 0
Compute Unit: 16
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 32086940(0x1e99b9c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 32086940(0x1e99b9c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 32086940(0x1e99b9c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 2
*******
Name: gfx1032
Uuid: GPU-XX
Marketing Name: AMD Radeon RX 6800S
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 16(0x10) KB
L2: 2048(0x800) KB
L3: 32768(0x8000) KB
Chip ID: 29679(0x73ef)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2450
BDFID: 768
Internal Node ID: 1
Compute Unit: 32
SIMDs per CU: 2
Shader Engines: 2
Shader Arrs. per Eng.: 2
WatchPts on Addr. Ranges:4
Coherent Host Access: FALSE
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 32(0x20)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 32(0x20)
Max Work-item Per CU: 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 116
SDMA engine uCode:: 76
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 8372224(0x7fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 8372224(0x7fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx1032
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*******
Agent 3
*******
Name: gfx1035
Uuid: GPU-XX
Marketing Name: AMD Radeon Graphics
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 2
Device Type: GPU
Cache Info:
L1: 16(0x10) KB
L2: 2048(0x800) KB
Chip ID: 5761(0x1681)
ASIC Revision: 2(0x2)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2200
BDFID: 1792
Internal Node ID: 2
Compute Unit: 12
SIMDs per CU: 2
Shader Engines: 1
Shader Arrs. per Eng.: 2
WatchPts on Addr. Ranges:4
Coherent Host Access: FALSE
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 32(0x20)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 32(0x20)
Max Work-item Per CU: 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 116
SDMA engine uCode:: 47
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 524288(0x80000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 524288(0x80000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx1035
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***
```
If I set the entry 0:
```
(ins)[~] HIP_VISIBLE_DEVICES=0 ollama serve
time=2024-04-23T13:52:54.253+09:00 level=INFO source=images.go:817 msg="total blobs: 0"
time=2024-04-23T13:52:54.253+09:00 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-04-23T13:52:54.253+09:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.1.32)"
time=2024-04-23T13:52:54.327+09:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama1769714599/runners
time=2024-04-23T13:52:56.441+09:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [rocm_v60002 cpu cpu_avx cpu_avx2 cuda_v11]"
time=2024-04-23T13:52:56.442+09:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-23T13:52:56.442+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-23T13:52:56.447+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama1769714599/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-23T13:52:56.447+09:00 level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama1769714599/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
time=2024-04-23T13:52:56.447+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-04-23T13:52:56.453+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
time=2024-04-23T13:52:56.453+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-23T13:52:56.453+09:00 level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-04-23T13:52:56.453+09:00 level=INFO source=amd_linux.go:263 msg="[0] amdgpu totalMemory 8176M"
time=2024-04-23T13:52:56.453+09:00 level=INFO source=amd_linux.go:264 msg="[0] amdgpu freeMemory 8176M"
```
If I set the entry 1:
```
(ins)[~] HIP_VISIBLE_DEVICES=1 ollama serve
time=2024-04-23T13:54:03.009+09:00 level=INFO source=images.go:817 msg="total blobs: 0"
time=2024-04-23T13:54:03.010+09:00 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-04-23T13:54:03.010+09:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.1.32)"
time=2024-04-23T13:54:03.010+09:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama1225099153/runners
time=2024-04-23T13:54:05.123+09:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cuda_v11 rocm_v60002 cpu cpu_avx cpu_avx2]"
time=2024-04-23T13:54:05.123+09:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-23T13:54:05.123+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-23T13:54:05.129+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama1225099153/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-23T13:54:05.129+09:00 level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama1225099153/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
time=2024-04-23T13:54:05.129+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-04-23T13:54:05.135+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
time=2024-04-23T13:54:05.135+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-23T13:54:05.135+09:00 level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-04-23T13:54:05.135+09:00 level=INFO source=amd_linux.go:234 msg="amdgpu [1] appears to be an iGPU with 512M reported total memory, skipping"
panic: assignment to entry in nil map
goroutine 1 [running]:
github.com/ollama/ollama/gpu.amdProcMemLookup(0xc0007659a8, 0x0, {0xc000003c90, 0x1, 0x1})
github.com/ollama/ollama/gpu/amd_linux.go:235 +0x697
github.com/ollama/ollama/gpu.AMDGetGPUInfo(0xc0007659a8)
github.com/ollama/ollama/gpu/amd_linux.go:68 +0x1246
github.com/ollama/ollama/gpu.GetGPUInfo()
github.com/ollama/ollama/gpu/gpu.go:210 +0x83c
github.com/ollama/ollama/gpu.CheckVRAM()
github.com/ollama/ollama/gpu/gpu.go:256 +0x18c
github.com/ollama/ollama/server.Serve({0x11f3d528, 0xc00045d200})
github.com/ollama/ollama/server/routes.go:1163 +0x493
github.com/ollama/ollama/cmd.RunServer(0xc0001e6c00?, {0x127bc1c0?, 0x4?, 0x11bf809?})
github.com/ollama/ollama/cmd/cmd.go:816 +0x1b9
github.com/spf13/cobra.(*Command).execute(0xc000540608, {0x127bc1c0, 0x0, 0x0})
github.com/spf13/cobra@v1.7.0/command.go:940 +0x882
github.com/spf13/cobra.(*Command).ExecuteC(0xc00017f508)
github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
github.com/ollama/ollama/main.go:11 +0x4d
```
If I set the entry 2:
```
(ins)[~] HIP_VISIBLE_DEVICES=2 ollama serve
time=2024-04-23T13:54:37.171+09:00 level=INFO source=images.go:817 msg="total blobs: 0"
time=2024-04-23T13:54:37.171+09:00 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-04-23T13:54:37.171+09:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.1.32)"
time=2024-04-23T13:54:37.249+09:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama1175570358/runners
time=2024-04-23T13:54:39.365+09:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
time=2024-04-23T13:54:39.365+09:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-23T13:54:39.365+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-23T13:54:39.371+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama1175570358/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-23T13:54:39.372+09:00 level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama1175570358/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
time=2024-04-23T13:54:39.372+09:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-04-23T13:54:39.377+09:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
time=2024-04-23T13:54:39.377+09:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-23T13:54:39.377+09:00 level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-04-23T13:54:39.377+09:00 level=WARN source=amd_linux.go:229 msg="amdgpu [2] reports zero total memory, skipping"
panic: assignment to entry in nil map
goroutine 1 [running]:
github.com/ollama/ollama/gpu.amdProcMemLookup(0xc00074b9a8, 0x0, {0xc000201bf0, 0x1, 0x1})
github.com/ollama/ollama/gpu/amd_linux.go:230 +0x8e5
github.com/ollama/ollama/gpu.AMDGetGPUInfo(0xc00074b9a8)
github.com/ollama/ollama/gpu/amd_linux.go:68 +0x1246
github.com/ollama/ollama/gpu.GetGPUInfo()
github.com/ollama/ollama/gpu/gpu.go:210 +0x83c
github.com/ollama/ollama/gpu.CheckVRAM()
github.com/ollama/ollama/gpu/gpu.go:256 +0x18c
github.com/ollama/ollama/server.Serve({0x11f3d528, 0xc000625060})
github.com/ollama/ollama/server/routes.go:1163 +0x493
github.com/ollama/ollama/cmd.RunServer(0xc000518b00?, {0x127bc1c0?, 0x4?, 0x11bf809?})
github.com/ollama/ollama/cmd/cmd.go:816 +0x1b9
github.com/spf13/cobra.(*Command).execute(0xc0004b7508, {0x127bc1c0, 0x0, 0x0})
github.com/spf13/cobra@v1.7.0/command.go:940 +0x882
github.com/spf13/cobra.(*Command).ExecuteC(0xc0004b6908)
github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
github.com/ollama/ollama/main.go:11 +0x4d
```
Here are my current packages:
```
(ins)[~] pacman -Qs 'amd|hip|rocm|opencl|clblast|llama'
local/amd-ucode 20240409.1addd7dc-1
Microcode update image for AMD CPUs
local/amdvlk 2024.Q1.3-2
AMD's standalone Vulkan driver
local/clblast 1.6.2-1
Tuned OpenCL BLAS library
local/clinfo 3.0.23.01.25-1
Simple OpenCL application that enumerates all available platform and device properties
local/comgr 6.0.2-1
Compiler support library for ROCm LLVM
local/composable-kernel 6.0.2-1
High Performance Composable Kernel for AMD GPUs
local/flashrom 1.3.0-3
Utility for reading, writing, erasing and verifying flash ROM chips
local/gcc-libs 13.2.1-5
Runtime libraries shipped by GCC
local/hip-runtime-amd 6.0.2-2
Heterogeneous Interface for Portability ROCm
local/hipblas 6.0.2-1
ROCm BLAS marshalling library
local/hipcub 6.0.2-1
Header-only library on top of rocPRIM or CUB
local/hipfft 6.0.2-1
rocFFT marshalling library.
local/hiprand 6.0.2-1
rocRAND marshalling library
local/hipsolver 6.0.2-1
rocSOLVER marshalling library.
local/hipsparse 6.0.2-1
rocSPARSE marshalling library.
local/hsa-rocr 6.0.2-2
HSA Runtime API and runtime for ROCm
local/lib32-amdvlk 2024.Q1.3-2
AMD's standalone Vulkan driver
local/lib32-gcc-libs 13.2.1-5
32-bit runtime libraries shipped by GCC
local/libftdi 1.5-5
A library to talk to FTDI chips, optional python bindings.
local/libteam 1.32-1
Library for controlling team network device
local/magma-hip 2.7.2-4
Matrix Algebra on GPU and Multicore Architectures (with ROCm/HIP)
local/miopen-hip 6.0.2-1
AMD's Machine Intelligence Library (HIP backend)
local/nvtop 3.1.0-1
GPUs process monitoring for AMD, Intel and NVIDIA
local/ocl-icd 2.3.2-1
OpenCL ICD Bindings
local/opencl-headers 2:2023.04.17-2
OpenCL (Open Computing Language) header files
local/python-pytorch-opt-rocm 2.2.2-1
Tensors and Dynamic neural networks in Python with strong GPU acceleration (with ROCm and AVX2 CPU optimizations)
local/rccl 6.0.2-1
ROCm Communication Collectives Library
local/rocalution 6.0.2-1
Next generation library for iterative sparse solvers for ROCm platform
local/rocblas 6.0.2-1
Next generation BLAS implementation for ROCm platform
local/rocfft 6.0.2-1
Next generation FFT implementation for ROCm
local/rocm-clang-ocl 6.0.2-1
OpenCL compilation with clang compiler
local/rocm-cmake 6.0.2-1
CMake modules for common build tasks needed for the ROCm software stack
local/rocm-core 6.0.2-2
AMD ROCm core package (version files)
local/rocm-device-libs 6.0.2-1
ROCm Device Libraries
local/rocm-hip-libraries 6.0.2-1
Develop certain applications using HIP and libraries for AMD platforms
local/rocm-hip-runtime 6.0.2-1
Packages to run HIP applications on the AMD platform
local/rocm-hip-sdk 6.0.2-1
Develop applications using HIP and libraries for AMD platforms
local/rocm-language-runtime 6.0.2-1
ROCm runtime
local/rocm-llvm 6.0.2-1
Radeon Open Compute - LLVM toolchain (llvm, clang, lld)
local/rocm-opencl-runtime 6.0.2-1
OpenCL implementation for AMD
local/rocm-opencl-sdk 6.0.2-1
Develop OpenCL-based applications for AMD platforms
local/rocm-smi-lib 6.0.2-1
ROCm System Management Interface Library
local/rocminfo 6.0.2-1
ROCm Application for Reporting System Info
local/rocprim 6.0.2-1
Header-only library providing HIP parallel primitives
local/rocrand 6.0.2-1
Pseudo-random and quasi-random number generator on ROCm
local/rocsolver 6.0.2-1
Subset of LAPACK functionality on the ROCm platform
local/rocsparse 6.0.2-2
BLAS for sparse computation on top of ROCm
local/rocthrust 6.0.2-1
Port of the Thrust parallel algorithm library atop HIP/ROCm
local/roctracer 6.0.2-1
ROCm tracer library for performance tracing
local/supergfxctl 5.1.1-1
A utility for Linux graphics switching on Intel/AMD iGPU + nVidia dGPU laptops
local/texlive-latexextra 2024.2-1 (texlive)
TeX Live - LaTeX additional packages
local/vulkan-radeon 1:24.0.5-1
Open-source Vulkan driver for AMD GPUs
local/xf86-video-amdgpu 23.0.0-2 (xorg-drivers)
X.org amdgpu video driver
```
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3838/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6462
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6462/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6462/comments
|
https://api.github.com/repos/ollama/ollama/issues/6462/events
|
https://github.com/ollama/ollama/issues/6462
| 2,480,563,237
|
I_kwDOJ0Z1Ps6T2mQl
| 6,462
|
Make tool call response compatible with OpenAI format
|
{
"login": "eliasfroehner",
"id": 11318229,
"node_id": "MDQ6VXNlcjExMzE4MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/11318229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliasfroehner",
"html_url": "https://github.com/eliasfroehner",
"followers_url": "https://api.github.com/users/eliasfroehner/followers",
"following_url": "https://api.github.com/users/eliasfroehner/following{/other_user}",
"gists_url": "https://api.github.com/users/eliasfroehner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliasfroehner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliasfroehner/subscriptions",
"organizations_url": "https://api.github.com/users/eliasfroehner/orgs",
"repos_url": "https://api.github.com/users/eliasfroehner/repos",
"events_url": "https://api.github.com/users/eliasfroehner/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliasfroehner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-22T11:37:16
| 2024-08-22T14:03:35
| 2024-08-22T14:03:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Description
Currently, the response for a tool call looks like this:
```json
{
"model": "llama3.1",
"created_at": "2024-07-22T20:33:28.123648Z",
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"function": {
"name": "get_current_weather",
"arguments": {
"format": "celsius",
"location": "Paris, FR"
}
}
}
]
},
"done_reason": "stop",
"done": true,
"total_duration": 885095291,
"load_duration": 3753500,
"prompt_eval_count": 122,
"prompt_eval_duration": 328493000,
"eval_count": 33,
"eval_duration": 552222000
}
```
However, I would like the response to be compatible with the OpenAI format, which includes an `id` and `type` field for each tool call. The desired format is as follows:
```json
{
"model": "llama3.1",
"created_at": "2024-07-22T20:33:28.123648Z",
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "call_62136354",
"type": "function",
"function": {
"name": "get_current_weather",
"arguments": {
"format": "celsius",
"location": "Paris, FR"
}
}
}
]
},
"done_reason": "stop",
"done": true,
"total_duration": 885095291,
"load_duration": 3753500,
"prompt_eval_count": 122,
"prompt_eval_duration": 328493000,
"eval_count": 33,
"eval_duration": 552222000
}
```
Additionally, the lack of an `id` field in the current format prevents client software from matching responses to multiple simultaneous tool calls.
### Steps to Reproduce
1. Make a tool call request.
2. Observe the response format.
### Expected Behavior
The response should include an `id` and `type` field for each tool call, making it compatible with the OpenAI format.
### Actual Behavior
The response does not include an `id` and `type` field for each tool call.
### Additional Information
This change is necessary to ensure compatibility with the OpenAI API and to handle multiple simultaneous tool calls effectively.
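Until such a change lands, the gap can be patched client-side. A minimal sketch (the function name and id scheme are illustrative, not part of any API) that rewrites an Ollama chat message into the OpenAI tool-call shape, adding `id` and `type` and JSON-encoding the arguments object (OpenAI transmits `arguments` as a string):

```python
import json
import uuid


def to_openai_tool_calls(ollama_message):
    """Convert Ollama-style tool_calls to the OpenAI shape by adding an
    id and type field and serializing the arguments dict to a JSON string."""
    calls = []
    for call in ollama_message.get("tool_calls", []):
        calls.append({
            "id": f"call_{uuid.uuid4().hex[:8]}",  # illustrative id scheme
            "type": "function",
            "function": {
                "name": call["function"]["name"],
                "arguments": json.dumps(call["function"]["arguments"]),
            },
        })
    return calls
```

The generated `id` then lets a client pair each follow-up `tool` message with the call that produced it.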
### Labels
- enhancement
- compatibility
- api
|
{
"login": "eliasfroehner",
"id": 11318229,
"node_id": "MDQ6VXNlcjExMzE4MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/11318229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliasfroehner",
"html_url": "https://github.com/eliasfroehner",
"followers_url": "https://api.github.com/users/eliasfroehner/followers",
"following_url": "https://api.github.com/users/eliasfroehner/following{/other_user}",
"gists_url": "https://api.github.com/users/eliasfroehner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliasfroehner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliasfroehner/subscriptions",
"organizations_url": "https://api.github.com/users/eliasfroehner/orgs",
"repos_url": "https://api.github.com/users/eliasfroehner/repos",
"events_url": "https://api.github.com/users/eliasfroehner/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliasfroehner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6462/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6329
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6329/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6329/comments
|
https://api.github.com/repos/ollama/ollama/issues/6329/events
|
https://github.com/ollama/ollama/issues/6329
| 2,461,955,923
|
I_kwDOJ0Z1Ps6SvndT
| 6,329
|
Change log for updated models on website?
|
{
"login": "coodoo",
"id": 325936,
"node_id": "MDQ6VXNlcjMyNTkzNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/325936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coodoo",
"html_url": "https://github.com/coodoo",
"followers_url": "https://api.github.com/users/coodoo/followers",
"following_url": "https://api.github.com/users/coodoo/following{/other_user}",
"gists_url": "https://api.github.com/users/coodoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coodoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coodoo/subscriptions",
"organizations_url": "https://api.github.com/users/coodoo/orgs",
"repos_url": "https://api.github.com/users/coodoo/repos",
"events_url": "https://api.github.com/users/coodoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/coodoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-08-12T21:52:06
| 2024-08-12T21:52:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
For the past few days the llama3.1 models on the website seem to have been updated every couple of hours. Is there a changelog to see what changed (specifically, which model sizes were updated)?
P.S. The attached image was captured just now, showing the model was updated about an hour ago.

| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6329/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6329/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1572
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1572/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1572/comments
|
https://api.github.com/repos/ollama/ollama/issues/1572/events
|
https://github.com/ollama/ollama/issues/1572
| 2,045,307,774
|
I_kwDOJ0Z1Ps556Ot-
| 1,572
|
Embeddings response too slow
|
{
"login": "perezjnv",
"id": 18506353,
"node_id": "MDQ6VXNlcjE4NTA2MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/18506353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/perezjnv",
"html_url": "https://github.com/perezjnv",
"followers_url": "https://api.github.com/users/perezjnv/followers",
"following_url": "https://api.github.com/users/perezjnv/following{/other_user}",
"gists_url": "https://api.github.com/users/perezjnv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/perezjnv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/perezjnv/subscriptions",
"organizations_url": "https://api.github.com/users/perezjnv/orgs",
"repos_url": "https://api.github.com/users/perezjnv/repos",
"events_url": "https://api.github.com/users/perezjnv/events{/privacy}",
"received_events_url": "https://api.github.com/users/perezjnv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6677485533,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgJX3Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/embeddings",
"name": "embeddings",
"color": "76BF9F",
"default": false,
"description": "Issues around embeddings"
}
] |
closed
| false
| null |
[] | null | 7
| 2023-12-17T17:44:57
| 2024-11-30T22:16:23
| 2024-05-06T23:43:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I ingested a CSV for fine-tuning a llama2-7b model in .bin format; that worked well, but when using ollama with a Modelfile that implements it, the responses are too slow. Any suggestions?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1572/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1994
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1994/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1994/comments
|
https://api.github.com/repos/ollama/ollama/issues/1994/events
|
https://github.com/ollama/ollama/issues/1994
| 2,080,887,144
|
I_kwDOJ0Z1Ps58B9Fo
| 1,994
|
Ollama requests hangs after about 20 requests and needs to be restarted
|
{
"login": "Shajan",
"id": 1411014,
"node_id": "MDQ6VXNlcjE0MTEwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1411014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shajan",
"html_url": "https://github.com/Shajan",
"followers_url": "https://api.github.com/users/Shajan/followers",
"following_url": "https://api.github.com/users/Shajan/following{/other_user}",
"gists_url": "https://api.github.com/users/Shajan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shajan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shajan/subscriptions",
"organizations_url": "https://api.github.com/users/Shajan/orgs",
"repos_url": "https://api.github.com/users/Shajan/repos",
"events_url": "https://api.github.com/users/Shajan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shajan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-01-14T20:32:40
| 2024-01-16T21:13:28
| 2024-01-16T21:13:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Request hangs after about 20 requests.
Ollama version: 0.1.20, on Linux with a T4 GPU as well as a Mac M2.
All subsequent `api/generate` requests hang, for all models. The only way to resume is to restart ollama: `sudo systemctl restart ollama`.
Repro
```python
import requests


def query(session):
    url = "http://localhost:11434/api/generate"
    data = {
        "model": "llama2:7b",
        "prompt": "Why is the sky blue?",
        "stream": False,
    }
    # Reuse the session so connections are pooled; hangs about every 20 requests
    with session.post(url, json=data) as response:
        if response.ok:
            return response.text
        print(response)
        return None


def main():
    total = 0
    errors = 0
    with requests.Session() as session:
        for _ in range(100):
            total += 1
            if query(session) is None:
                errors += 1
    success_rate = 100 * ((total - errors) / total)
    print(f"{total=} {errors=} {success_rate=:.2f}")


if __name__ == "__main__":
    main()
```
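As a client-side workaround while the hang is being debugged, a timeout plus retry turns an indefinite block into a visible, recoverable failure. A minimal sketch (the `call_with_retries` helper is hypothetical, not part of the original repro; it assumes the request function raises on timeout, e.g. via `timeout=60` on the POST):

```python
import time


def call_with_retries(fn, attempts=3, delay=0.5):
    """Call fn(); on exception, retry up to `attempts` times total."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:  # e.g. requests.Timeout when the server hangs
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

In the repro above, each request could then be issued as `call_with_retries(lambda: query(session))`.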
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1994/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1994/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1090
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1090/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1090/comments
|
https://api.github.com/repos/ollama/ollama/issues/1090/events
|
https://github.com/ollama/ollama/issues/1090
| 1,989,036,074
|
I_kwDOJ0Z1Ps52jkgq
| 1,090
|
Suggestions for instruction clarifications for running in docker in Windows.
|
{
"login": "pdavis68",
"id": 2781885,
"node_id": "MDQ6VXNlcjI3ODE4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2781885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdavis68",
"html_url": "https://github.com/pdavis68",
"followers_url": "https://api.github.com/users/pdavis68/followers",
"following_url": "https://api.github.com/users/pdavis68/following{/other_user}",
"gists_url": "https://api.github.com/users/pdavis68/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdavis68/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdavis68/subscriptions",
"organizations_url": "https://api.github.com/users/pdavis68/orgs",
"repos_url": "https://api.github.com/users/pdavis68/repos",
"events_url": "https://api.github.com/users/pdavis68/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdavis68/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2023-11-11T16:45:13
| 2024-03-12T15:41:40
| 2024-03-12T15:41:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I just got this installed in Windows using Docker.
The instructions were a bit unclear, since the steps for installing the Nvidia components are Linux-based. I mistakenly thought I needed to run the container and install all the Nvidia software inside the container. It might help other people like me who aren't so clever to note that those are Linux-specific instructions for installing the Nvidia drivers, and to add separate instructions for installing the Nvidia drivers under Windows.
For Windows, you can install the Nvidia drivers (though I'm not sure which ones I installed that made this work, because I've had them installed for a while; I have the basic drivers and CUDA, and I'm guessing it's using the CUDA drivers) and then run it with the GPU as per the instructions; it worked like a charm.
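For reference, the documented flow boils down to: install the Nvidia drivers on the host, then start the container with GPU access. A command sketch (flag availability may vary with Docker/WSL2 versions):

```
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```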
Also, just want to say, really nice work. I've tried installing a few of these local LLMs and this was by far the easiest for me to install (despite the above issues) and it works really well. I couldn't be happier.
I've been working on a game that uses LLMs and the cost of running with OpenAI was going to be higher than I'd like, so I've been waiting for a local version I could use instead and you guys have delivered. I'm really excited about this.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1090/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4546
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4546/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4546/comments
|
https://api.github.com/repos/ollama/ollama/issues/4546/events
|
https://github.com/ollama/ollama/pull/4546
| 2,306,889,495
|
PR_kwDOJ0Z1Ps5wAgqj
| 4,546
|
tidy intermediate blobs
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-20T22:15:34
| 2024-06-05T20:13:15
| 2024-05-20T22:22:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4546",
"html_url": "https://github.com/ollama/ollama/pull/4546",
"diff_url": "https://github.com/ollama/ollama/pull/4546.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4546.patch",
"merged_at": null
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4546/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1137
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1137/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1137/comments
|
https://api.github.com/repos/ollama/ollama/issues/1137/events
|
https://github.com/ollama/ollama/issues/1137
| 1,994,631,219
|
I_kwDOJ0Z1Ps5246gz
| 1,137
|
The ollama parameters in the modelfile do not support num_beams
|
{
"login": "garth-waters",
"id": 85235369,
"node_id": "MDQ6VXNlcjg1MjM1MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/85235369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garth-waters",
"html_url": "https://github.com/garth-waters",
"followers_url": "https://api.github.com/users/garth-waters/followers",
"following_url": "https://api.github.com/users/garth-waters/following{/other_user}",
"gists_url": "https://api.github.com/users/garth-waters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garth-waters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garth-waters/subscriptions",
"organizations_url": "https://api.github.com/users/garth-waters/orgs",
"repos_url": "https://api.github.com/users/garth-waters/repos",
"events_url": "https://api.github.com/users/garth-waters/events{/privacy}",
"received_events_url": "https://api.github.com/users/garth-waters/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-11-15T11:53:31
| 2024-12-23T01:09:42
| 2024-12-23T01:09:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Thanks very much for a great product.
I am using a Modelfile to create a SqlDecoder2 custom model.
The num_beams parameter improves the accuracy of the model by a lot.
However, this parameter is not yet supported.
Is there any intention of including this parameter in the future?
Many Thanks
Garth
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1137/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6130
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6130/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6130/comments
|
https://api.github.com/repos/ollama/ollama/issues/6130/events
|
https://github.com/ollama/ollama/pull/6130
| 2,443,589,943
|
PR_kwDOJ0Z1Ps53LrdK
| 6,130
|
feat(run): Add a --quiet flag to the run command to disable progress
|
{
"login": "gabe-l-hart",
"id": 1254484,
"node_id": "MDQ6VXNlcjEyNTQ0ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1254484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabe-l-hart",
"html_url": "https://github.com/gabe-l-hart",
"followers_url": "https://api.github.com/users/gabe-l-hart/followers",
"following_url": "https://api.github.com/users/gabe-l-hart/following{/other_user}",
"gists_url": "https://api.github.com/users/gabe-l-hart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gabe-l-hart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabe-l-hart/subscriptions",
"organizations_url": "https://api.github.com/users/gabe-l-hart/orgs",
"repos_url": "https://api.github.com/users/gabe-l-hart/repos",
"events_url": "https://api.github.com/users/gabe-l-hart/events{/privacy}",
"received_events_url": "https://api.github.com/users/gabe-l-hart/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-08-01T22:26:02
| 2024-11-22T17:06:44
| 2024-11-22T17:05:42
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6130",
"html_url": "https://github.com/ollama/ollama/pull/6130",
"diff_url": "https://github.com/ollama/ollama/pull/6130.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6130.patch",
"merged_at": null
}
|
## Description
The `--quiet` flag disables all progress control characters, so that if the output of stderr and stdout is combined, the control characters are not visible.
https://github.com/ollama/ollama/issues/6120
## Testing
Since there is not currently a unit test suite for the `cmd` package, I did not add tests for this, though I would be happy to do so if it's wanted! To test, I did the following:
```sh
# Build locally
go build .
# Run without the --quiet flag and verify the control characters appear
./ollama run granite-code:3b show me a python function that does fizzbuzz 2>&1 | cat -v
# Run with the --quiet flag and verify the control characters do not appear
./ollama run --quiet granite-code:3b show me a python function that does fizzbuzz 2>&1 | cat -v
```
|
{
"login": "gabe-l-hart",
"id": 1254484,
"node_id": "MDQ6VXNlcjEyNTQ0ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1254484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabe-l-hart",
"html_url": "https://github.com/gabe-l-hart",
"followers_url": "https://api.github.com/users/gabe-l-hart/followers",
"following_url": "https://api.github.com/users/gabe-l-hart/following{/other_user}",
"gists_url": "https://api.github.com/users/gabe-l-hart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gabe-l-hart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabe-l-hart/subscriptions",
"organizations_url": "https://api.github.com/users/gabe-l-hart/orgs",
"repos_url": "https://api.github.com/users/gabe-l-hart/repos",
"events_url": "https://api.github.com/users/gabe-l-hart/events{/privacy}",
"received_events_url": "https://api.github.com/users/gabe-l-hart/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6130/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3971
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3971/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3971/comments
|
https://api.github.com/repos/ollama/ollama/issues/3971/events
|
https://github.com/ollama/ollama/issues/3971
| 2,266,848,281
|
I_kwDOJ0Z1Ps6HHVwZ
| 3,971
|
support for openelm apple
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-27T05:24:39
| 2024-05-02T18:16:01
| 2024-05-02T18:16:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/apple/OpenELM
Apple's OpenELM includes small models, so this could be run as low-power, on-device AI.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3971/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3971/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7718
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7718/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7718/comments
|
https://api.github.com/repos/ollama/ollama/issues/7718/events
|
https://github.com/ollama/ollama/pull/7718
| 2,666,994,816
|
PR_kwDOJ0Z1Ps6CL5Jz
| 7,718
|
readme: improve Community Integrations section
|
{
"login": "vinhnx",
"id": 1097578,
"node_id": "MDQ6VXNlcjEwOTc1Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1097578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinhnx",
"html_url": "https://github.com/vinhnx",
"followers_url": "https://api.github.com/users/vinhnx/followers",
"following_url": "https://api.github.com/users/vinhnx/following{/other_user}",
"gists_url": "https://api.github.com/users/vinhnx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vinhnx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinhnx/subscriptions",
"organizations_url": "https://api.github.com/users/vinhnx/orgs",
"repos_url": "https://api.github.com/users/vinhnx/repos",
"events_url": "https://api.github.com/users/vinhnx/events{/privacy}",
"received_events_url": "https://api.github.com/users/vinhnx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-18T03:28:55
| 2024-11-18T03:54:12
| 2024-11-18T03:30:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7718",
"html_url": "https://github.com/ollama/ollama/pull/7718",
"diff_url": "https://github.com/ollama/ollama/pull/7718.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7718.patch",
"merged_at": "2024-11-18T03:30:22"
}
|
* Fix README link opening/closed bracket for Reddit Rate link
* Fix and improve README link for VT project.
* Thank you, Ollama team!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7718/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8395
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8395/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8395/comments
|
https://api.github.com/repos/ollama/ollama/issues/8395/events
|
https://github.com/ollama/ollama/issues/8395
| 2,782,337,798
|
I_kwDOJ0Z1Ps6l1xsG
| 8,395
|
Empty response via API
|
{
"login": "gl2007",
"id": 4097227,
"node_id": "MDQ6VXNlcjQwOTcyMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4097227?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gl2007",
"html_url": "https://github.com/gl2007",
"followers_url": "https://api.github.com/users/gl2007/followers",
"following_url": "https://api.github.com/users/gl2007/following{/other_user}",
"gists_url": "https://api.github.com/users/gl2007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gl2007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gl2007/subscriptions",
"organizations_url": "https://api.github.com/users/gl2007/orgs",
"repos_url": "https://api.github.com/users/gl2007/repos",
"events_url": "https://api.github.com/users/gl2007/events{/privacy}",
"received_events_url": "https://api.github.com/users/gl2007/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 12
| 2025-01-12T07:54:31
| 2025-01-14T21:08:23
| 2025-01-13T19:24:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hosted ollama via 0.0.0.0 on a server on my LAN, and `curl <ip>:11434` returns "Ollama is running". Also, when I run `ollama run <model>` in cmd on that machine, I see proper responses.
However, when I send an API request via Postman, I get this empty response, irrespective of the model, which seems to indicate the model is not loaded properly. This also happens on the server machine itself via Postman using localhost.
{
"model": "Mistral-Nemo-12B-Instruct-2407-Q8_0:latest",
"created_at": "2025-01-12T07:39:16.7356243Z",
"response": "",
"done": true,
"done_reason": "load"
}
But the model does seem to be loaded correctly, as I can see it in `ollama ps`.
What am I doing wrong?
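For context on the reply shape above: per Ollama's API docs, a `/api/generate` request with no (or an empty) `prompt` only loads the model, and the server answers immediately with an empty `response` and `done_reason: "load"` — which matches this report. A minimal Python sketch (illustrative, not from the report; no network calls) of the request payload and a check for that reply:

```python
def build_generate_request(model: str, prompt: str = "", stream: bool = False) -> dict:
    # An Ollama /api/generate request without a "prompt" only loads the
    # model; the server replies immediately with an empty "response" and
    # "done_reason": "load".
    payload = {"model": model, "stream": stream}
    if prompt:
        payload["prompt"] = prompt
    return payload

def is_load_only_reply(reply: dict) -> bool:
    # Detect the "model loaded, nothing generated" reply described above.
    return bool(reply.get("done")) and reply.get("done_reason") == "load" and not reply.get("response")

# The reply shape from the report:
reply = {"model": "...", "response": "", "done": True, "done_reason": "load"}
print(is_load_only_reply(reply))  # True: the request carried no prompt
```

If the Postman body is missing the `prompt` field, adding it should produce actual generated text.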
### OS
Windows
### GPU
None
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8395/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1489
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1489/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1489/comments
|
https://api.github.com/repos/ollama/ollama/issues/1489/events
|
https://github.com/ollama/ollama/issues/1489
| 2,038,514,181
|
I_kwDOJ0Z1Ps55gUIF
| 1,489
|
Request for Contributor.md
|
{
"login": "aravindputrevu",
"id": 599694,
"node_id": "MDQ6VXNlcjU5OTY5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/599694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aravindputrevu",
"html_url": "https://github.com/aravindputrevu",
"followers_url": "https://api.github.com/users/aravindputrevu/followers",
"following_url": "https://api.github.com/users/aravindputrevu/following{/other_user}",
"gists_url": "https://api.github.com/users/aravindputrevu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aravindputrevu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aravindputrevu/subscriptions",
"organizations_url": "https://api.github.com/users/aravindputrevu/orgs",
"repos_url": "https://api.github.com/users/aravindputrevu/repos",
"events_url": "https://api.github.com/users/aravindputrevu/events{/privacy}",
"received_events_url": "https://api.github.com/users/aravindputrevu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-12-12T20:36:48
| 2024-09-04T03:33:48
| 2024-09-04T03:33:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It'd be great to have a sample `contributor.md` for aspiring contributors.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1489/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1489/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3163
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3163/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3163/comments
|
https://api.github.com/repos/ollama/ollama/issues/3163/events
|
https://github.com/ollama/ollama/issues/3163
| 2,187,872,589
|
I_kwDOJ0Z1Ps6CaElN
| 3,163
|
Question ollama and lm-studio
|
{
"login": "kalle07",
"id": 118767589,
"node_id": "U_kgDOBxQ_5Q",
"avatar_url": "https://avatars.githubusercontent.com/u/118767589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kalle07",
"html_url": "https://github.com/kalle07",
"followers_url": "https://api.github.com/users/kalle07/followers",
"following_url": "https://api.github.com/users/kalle07/following{/other_user}",
"gists_url": "https://api.github.com/users/kalle07/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kalle07/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kalle07/subscriptions",
"organizations_url": "https://api.github.com/users/kalle07/orgs",
"repos_url": "https://api.github.com/users/kalle07/repos",
"events_url": "https://api.github.com/users/kalle07/events{/privacy}",
"received_events_url": "https://api.github.com/users/kalle07/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-03-15T07:16:31
| 2024-03-15T11:29:27
| 2024-03-15T11:29:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
Please, what is significantly better or different
vs.
https://lmstudio.ai/
It has a great GUI, so everyone can handle it ;)
By the way, where is the model downloaded when running
`ollama run llama2`?
### How should we solve this?
_No response_
### What is the impact of not solving this?
_No response_
### Anything else?
_No response_
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3163/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6617
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6617/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6617/comments
|
https://api.github.com/repos/ollama/ollama/issues/6617/events
|
https://github.com/ollama/ollama/pull/6617
| 2,503,844,337
|
PR_kwDOJ0Z1Ps56UMyn
| 6,617
|
Log system memory at info
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-03T21:43:13
| 2024-09-03T21:55:24
| 2024-09-03T21:55:21
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6617",
"html_url": "https://github.com/ollama/ollama/pull/6617",
"diff_url": "https://github.com/ollama/ollama/pull/6617.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6617.patch",
"merged_at": "2024-09-03T21:55:21"
}
|
On systems with low system memory, we can hit allocation failures that are difficult to diagnose without debug logs. Logging system memory at info level will make these easier to spot.
Resolves #6558
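Roughly the idea, sketched in Python rather than the server's Go (the function and log format are illustrative, not the PR's code; on Linux the total can be read via `os.sysconf`):

```python
import logging
import os

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def log_system_memory() -> int:
    # Report total physical memory at INFO so low-memory allocation
    # failures are visible without enabling debug logs — the intent of
    # this PR; the real change lives in Ollama's Go server.
    total = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    logging.info("total system memory: %.1f GiB", total / 2**30)
    return total

total_bytes = log_system_memory()
```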
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6617/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1393
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1393/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1393/comments
|
https://api.github.com/repos/ollama/ollama/issues/1393/events
|
https://github.com/ollama/ollama/pull/1393
| 2,027,020,151
|
PR_kwDOJ0Z1Ps5hOQdR
| 1,393
|
fix: trim space in modelfile fields
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-05T19:58:30
| 2023-12-05T20:18:02
| 2023-12-05T20:18:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1393",
"html_url": "https://github.com/ollama/ollama/pull/1393",
"diff_url": "https://github.com/ollama/ollama/pull/1393.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1393.patch",
"merged_at": "2023-12-05T20:18:01"
}
|
Only trim whitespace for FROM, ADAPTER, and PARAMETER, since whitespace in LICENSE, TEMPLATE, and SYSTEM might be significant.
resolves #1390
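A small Python sketch of the rule (the actual change is in the Go modelfile parser; the field sets here just mirror the description above):

```python
# Strip surrounding whitespace only where it is never significant,
# and keep free-form fields byte-for-byte.
TRIMMED_FIELDS = {"FROM", "ADAPTER", "PARAMETER"}    # whitespace never significant
VERBATIM_FIELDS = {"LICENSE", "TEMPLATE", "SYSTEM"}  # whitespace may matter

def normalize_field(name: str, value: str) -> str:
    if name.upper() in TRIMMED_FIELDS:
        return value.strip()
    return value  # LICENSE/TEMPLATE/SYSTEM pass through unchanged

print(normalize_field("FROM", "  llama2  "))  # "llama2"
```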
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1393/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5085
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5085/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5085/comments
|
https://api.github.com/repos/ollama/ollama/issues/5085/events
|
https://github.com/ollama/ollama/issues/5085
| 2,355,854,394
|
I_kwDOJ0Z1Ps6Ma3w6
| 5,085
|
OllaMail - An email client powered by Ollama
|
{
"login": "perpendicularai",
"id": 146530480,
"node_id": "U_kgDOCLvgsA",
"avatar_url": "https://avatars.githubusercontent.com/u/146530480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/perpendicularai",
"html_url": "https://github.com/perpendicularai",
"followers_url": "https://api.github.com/users/perpendicularai/followers",
"following_url": "https://api.github.com/users/perpendicularai/following{/other_user}",
"gists_url": "https://api.github.com/users/perpendicularai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/perpendicularai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/perpendicularai/subscriptions",
"organizations_url": "https://api.github.com/users/perpendicularai/orgs",
"repos_url": "https://api.github.com/users/perpendicularai/repos",
"events_url": "https://api.github.com/users/perpendicularai/events{/privacy}",
"received_events_url": "https://api.github.com/users/perpendicularai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-06-16T16:07:11
| 2024-06-18T18:24:16
| 2024-06-18T11:38:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi Ollama Team,
Thank you for your time and effort in making sure that the Ollama API is always exceptional when a new version is released.
With that said, I'd like to tell you about an email client that reads and sends email using Ollama. I'll be making a version native to Windows available, and I wanted to know if you would like to add it to the Community Integrations section of README.md. It's called OllaMail. See the git repo: https://github.com/perpendicularai/OllaMail
I have attached an image for your perusal.


|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5085/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5085/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7293
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7293/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7293/comments
|
https://api.github.com/repos/ollama/ollama/issues/7293/events
|
https://github.com/ollama/ollama/issues/7293
| 2,602,070,790
|
I_kwDOJ0Z1Ps6bGHMG
| 7,293
|
0.4.0rc0 arm64 andro termux compile error
|
{
"login": "fxmbsw7",
"id": 39368685,
"node_id": "MDQ6VXNlcjM5MzY4Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmbsw7",
"html_url": "https://github.com/fxmbsw7",
"followers_url": "https://api.github.com/users/fxmbsw7/followers",
"following_url": "https://api.github.com/users/fxmbsw7/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmbsw7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmbsw7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmbsw7/subscriptions",
"organizations_url": "https://api.github.com/users/fxmbsw7/orgs",
"repos_url": "https://api.github.com/users/fxmbsw7/repos",
"events_url": "https://api.github.com/users/fxmbsw7/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmbsw7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 7700262114,
"node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g",
"url": "https://api.github.com/repos/ollama/ollama/labels/build",
"name": "build",
"color": "006b75",
"default": false,
"description": "Issues relating to building ollama from source"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 13
| 2024-10-21T10:21:45
| 2024-12-31T15:35:38
| 2024-11-12T18:31:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The earlier releases built easily: `go generate ./...` and `go build .`.
What's the new style? gcc on the .c files instead of go?
The README doesn't seem to say anything about compiling.
When I run `go generate ./...` it returns the output below.
I'll retry without go.
```~/ollama-0.4.0-rc0 $ go generate ./...
<rm cmd removed for discord>
make -f make/Makefile.default
make[1]: Entering directory '/data/data/com.termux/files/home/ollama-0.4.0-rc0/llama'
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
GOARCH=arm64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=\" \"-X=github.com/ollama/ollama/llama.CpuFeatures=\" " -trimpath -o /data/data/com.termux/files/home/ollama-0.4.0-rc0/llama/build/linux-arm64/runners/cpu/ollama_llama_server ./runner
# github.com/ollama/ollama/llama
ggml-quants.c:4023:88: error: always_inline function 'vmmlaq_s32' requires target feature 'i8mm', but would be inlined into function 'ggml_vec_dot_q4_0_q8_0' that is compiled without support for 'i8mm'
ggml-quants.c:4023:76: error: always_inline function 'vmmlaq_s32' requires target feature 'i8mm', but would be inlined into function 'ggml_vec_dot_q4_0_q8_0' that is compiled without support for 'i8mm'
ggml-quants.c:4023:64: error: always_inline function 'vmmlaq_s32' requires target feature 'i8mm', but would be inlined into function 'ggml_vec_dot_q4_0_q8_0' that is compiled without support for 'i8mm'
ggml-quants.c:4023:52: error: always_inline function 'vmmlaq_s32' requires target feature 'i8mm', but would be inlined into function 'ggml_vec_dot_q4_0_q8_0' that is compiled without support for 'i8mm'
make[1]: *** [make/Makefile.default:27: /data/data/com.termux/files/home/ollama-0.4.0-rc0/llama/build/linux-arm64/runners/cpu/ollama_llama_server] Error 1
make[1]: Leaving directory '/data/data/com.termux/files/home/ollama-0.4.0-rc0/llama'
make: *** [Makefile:41: default] Error 2
llama/llama.go:3: running "make": exit status 2```
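The failing intrinsic (`vmmlaq_s32`) requires the arm64 `i8mm` CPU feature; on Linux/Android that feature, when present, appears in the `Features` line of `/proc/cpuinfo`. A hedged Python sketch of that check (helper name and sample line are illustrative, not part of the build system):

```python
def cpu_supports_i8mm(cpuinfo_text: str) -> bool:
    # ggml's q4_0/q8_0 dot-product path uses the i8mm intrinsic
    # vmmlaq_s32; when the compiler targets a CPU without i8mm, the
    # always_inline functions fail to compile, as in the errors above.
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features"):
            return "i8mm" in line.split(":")[-1].split()
    return False

# Hypothetical /proc/cpuinfo excerpt for illustration:
sample = "Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 asimddp"
print(cpu_supports_i8mm(sample))  # False: this CPU lacks i8mm
```

On a CPU without `i8mm`, the build needs to target a baseline arm64 feature set rather than one that assumes the extension.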
### OS
Linux
### GPU
Other
### CPU
Other
### Ollama version
0.4.0rc0
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7293/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7440
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7440/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7440/comments
|
https://api.github.com/repos/ollama/ollama/issues/7440/events
|
https://github.com/ollama/ollama/issues/7440
| 2,626,036,645
|
I_kwDOJ0Z1Ps6chiOl
| 7,440
|
[v0.4.0-rc6] CUDA OOM using x/llama3.2-vision:11b-instruct
|
{
"login": "thatjpk",
"id": 1297471,
"node_id": "MDQ6VXNlcjEyOTc0NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1297471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thatjpk",
"html_url": "https://github.com/thatjpk",
"followers_url": "https://api.github.com/users/thatjpk/followers",
"following_url": "https://api.github.com/users/thatjpk/following{/other_user}",
"gists_url": "https://api.github.com/users/thatjpk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thatjpk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thatjpk/subscriptions",
"organizations_url": "https://api.github.com/users/thatjpk/orgs",
"repos_url": "https://api.github.com/users/thatjpk/repos",
"events_url": "https://api.github.com/users/thatjpk/events{/privacy}",
"received_events_url": "https://api.github.com/users/thatjpk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-10-31T05:59:40
| 2024-11-09T01:20:19
| 2024-11-05T03:45:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Attached log: [llama3.2-cuda-oom.log](https://github.com/user-attachments/files/17582524/llama3.2-cuda-oom.log)
I'm testing the `x/llama3.2-vision:11b-instruct-q4_K_M` and `x/llama3.2-vision:11b-instruct-q8_0` models from ollama.com, using ollama 0.4.0-rc6 via Open WebUI v0.3.35 (in docker).
```
~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c149404563a ghcr.io/open-webui/open-webui:main "bash start.sh" 14 minutes ago Up 14 minutes (healthy) 0.0.0.0:3000->8080/tcp, :::3000->8080/tcp open-webui
c4d43daa9ad5 ollama/ollama:0.4.0-rc6 "/bin/ollama serve" 14 minutes ago Up 14 minutes 11434/tcp ollama
~ docker --version
Docker version 24.0.7, build 24.0.7-0ubuntu2~22.04.1
~ nvidia-smi
Thu Oct 31 01:28:17 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.120 Driver Version: 550.120 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3080 Ti Off | 00000000:0B:00.0 On | N/A |
| 0% 32C P5 68W / 366W | 1990MiB / 12288MiB | 17% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
```
When ollama is running with CUDA enabled, and I post an image in a chat with a llama3.2-vision model, Open WebUI reports `Oops! No text generated from Ollama, Please try again.`, and ollama generates the [attached log](https://github.com/user-attachments/files/17582524/llama3.2-cuda-oom.log). A snippet of the log around the SIGSEGV is this:
```
Device 0: NVIDIA GeForce RTX 3080 Ti, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.36 MiB
llm_load_tensors: offloading 31 repeating layers to GPU
llm_load_tensors: offloaded 31/41 layers to GPU
llm_load_tensors: CPU buffer size = 5679.33 MiB
llm_load_tensors: CUDA0 buffer size = 3841.45 MiB
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 156.06 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 500.19 MiB
llama_new_context_with_model: KV self size = 656.25 MiB, K (f16): 328.12 MiB, V (f16): 328.12 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 669.48 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 95
mllama_model_load: model name: Llama-3.2-11B-Vision-Instruct
mllama_model_load: description: vision encoder for Mllama
mllama_model_load: GGUF version: 3
mllama_model_load: alignment: 32
mllama_model_load: n_tensors: 512
mllama_model_load: n_kv: 17
mllama_model_load: ftype: f16
mllama_model_load:
mllama_model_load: vision using CUDA backend
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2853.34 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 2991947904
mllama_model_load: compute allocated memory: 0.00 MB
time=2024-10-31T05:39:41.603Z level=INFO source=server.go:606 msg="llama runner started in 2.26 seconds"
SIGSEGV: segmentation violation
PC=0x634314838794 m=7 sigcode=1 addr=0x10
signal arrived during cgo execution
goroutine 18 gp=0xc000218000 m=7 mp=0xc000100808 [syscall]:
runtime.cgocall(0x634314832920, 0xc00002b360)
runtime/cgocall.go:157 +0x4b fp=0xc00002b338 sp=0xc00002b300 pc=0x6343145b53ab
github.com/ollama/ollama/llama._Cfunc_mllama_image_encode(0x78d73983e760, 0x10, 0x78d73c000ce0, 0xc0050ea000)
_cgo_gotypes.go:915 +0x4c fp=0xc00002b360 sp=0xc00002b338 pc=0x6343146b3d4c
github.com/ollama/ollama/llama.(*MllamaContext).NewEmbed.func3(0xc000014300?, 0xc000202130?, 0x78d73c000ce0, {0xc0050ea000, 0xc00002b400?, 0x6343146b949f?})
github.com/ollama/ollama/llama/llama.go:541 +0xa8 fp=0xc00002b3b8 sp=0xc00002b360 pc=0x6343146b7f48
github.com/ollama/ollama/llama.(*MllamaContext).NewEmbed(0xc000014300, 0xc000202130, {0xc00428e000, 0xe5b000, 0xe5b000}, 0x6)
github.com/ollama/ollama/llama/llama.go:541 +0x111 fp=0xc00002b448 sp=0xc00002b3b8 pc=0x6343146b7db1
main.(*ImageContext).NewEmbed(0xc0000d0dd0, 0xc000202130, {0xc00428e000, 0xe5b000, 0xe5b000}, 0x6)
github.com/ollama/ollama/llama/runner/image.go:78 +0x1a7 fp=0xc00002b4e0 sp=0xc00002b448 pc=0x63431482ad47
main.(*Server).inputs(0xc0000ea120, {0xc0001c8000, 0x86}, {0xc0000cf050, 0x1, 0x146138a5?})
github.com/ollama/ollama/llama/runner/runner.go:193 +0x28e fp=0xc00002b600 sp=0xc00002b4e0 pc=0x63431482c2ee
main.(*Server).NewSequence(0xc0000ea120, {0xc0001c8000, 0x86}, {0xc0000cf050, 0x1, 0x1}, {0x5000, {0x0, 0x0, 0x0}, ...})
github.com/ollama/ollama/llama/runner/runner.go:100 +0xb2 fp=0xc00002b7b8 sp=0xc00002b600 pc=0x63431482b8b2
main.(*Server).completion(0xc0000ea120, {0x634314b6acf0, 0xc0002342a0}, 0xc0002226c0)
github.com/ollama/ollama/llama/runner/runner.go:591 +0x52a fp=0xc00002bab8 sp=0xc00002b7b8 pc=0x63431482e7ca
main.(*Server).completion-fm({0x634314b6acf0?, 0xc0002342a0?}, 0x63431480a32d?)
<autogenerated>:1 +0x36 fp=0xc00002bae8 sp=0xc00002bab8 pc=0x634314831b96
net/http.HandlerFunc.ServeHTTP(0xc0000d0c30?, {0x634314b6acf0?, 0xc0002342a0?}, 0x10?)
net/http/server.go:2171 +0x29 fp=0xc00002bb10 sp=0xc00002bae8 pc=0x634314802dc9
net/http.(*ServeMux).ServeHTTP(0x6343145bef65?, {0x634314b6acf0, 0xc0002342a0}, 0xc0002226c0)
net/http/server.go:2688 +0x1ad fp=0xc00002bb60 sp=0xc00002bb10 pc=0x634314804c4d
net/http.serverHandler.ServeHTTP({0x634314b6a040?}, {0x634314b6acf0?, 0xc0002342a0?}, 0x6?)
net/http/server.go:3142 +0x8e fp=0xc00002bb90 sp=0xc00002bb60 pc=0x634314805c6e
net/http.(*conn).serve(0xc000212000, {0x634314b6b148, 0xc0000cedb0})
net/http/server.go:2044 +0x5e8 fp=0xc00002bfb8 sp=0xc00002bb90 pc=0x634314801a08
net/http.(*Server).Serve.gowrap3()
net/http/server.go:3290 +0x28 fp=0xc00002bfe0 sp=0xc00002bfb8 pc=0x6343148063e8
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00002bfe8 sp=0xc00002bfe0 pc=0x63431461ddc1
created by net/http.(*Server).Serve in goroutine 1
net/http/server.go:3290 +0x4b4
```
Some additional notes:
- I see `ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2853.34 MiB on device 0: cudaMalloc failed: out of memory` in there, which doesn't add up to me because this GPU has 12GB of VRAM (about 10GB of which is usable as it's also running the KDE session).
- This happens on both the `q4_K_M` and `q8_0` quants of the model.
- This _doesn't_ happen when I run without CUDA. The model runs on the CPU and works, albeit slowly.
- Older vision models in this setup, like llava-llama3, work as they always have with or without CUDA.
All that said, I recognize this may be something to do with my setup. So if you have additional troubleshooting steps I can do to better isolate the behavior, please let me know. Thanks for taking a look!
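For what it's worth, here is a back-of-envelope tally of the figures from the log above (a rough sketch only: it assumes the desktop's ~1990 MiB usage from `nvidia-smi` stayed constant at model-load time, and it ignores fragmentation and CUDA context overhead). By these numbers the 2853.34 MiB vision-encoder request should have fit, which is why this looks like an accounting problem rather than genuine VRAM exhaustion:

```python
# Back-of-envelope VRAM budget using the figures from the log above.
# All values in MiB; the desktop usage figure comes from nvidia-smi.
total = 12288.0
desktop = 1990.0            # KDE session etc. at model-load time

# CUDA allocations the runner had already made (llama_new_context lines):
weights = 3841.45           # CUDA0 buffer size (offloaded layers)
kv_cache = 500.19           # CUDA0 KV buffer
compute = 669.48            # CUDA0 compute buffer

vision = 2853.34            # what mllama then tries to cudaMalloc

used_before_vision = desktop + weights + kv_cache + compute
free_before_vision = total - used_before_vision

print(f"free before vision alloc: {free_before_vision:.2f} MiB")
print(f"vision encoder request:   {vision:.2f} MiB")
print(f"fits: {vision <= free_before_vision}")
```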
### OS
Linux, Docker
### GPU
Nvidia
### CPU
AMD
### Ollama version
v0.4.0-rc6
|
{
"login": "thatjpk",
"id": 1297471,
"node_id": "MDQ6VXNlcjEyOTc0NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1297471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thatjpk",
"html_url": "https://github.com/thatjpk",
"followers_url": "https://api.github.com/users/thatjpk/followers",
"following_url": "https://api.github.com/users/thatjpk/following{/other_user}",
"gists_url": "https://api.github.com/users/thatjpk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thatjpk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thatjpk/subscriptions",
"organizations_url": "https://api.github.com/users/thatjpk/orgs",
"repos_url": "https://api.github.com/users/thatjpk/repos",
"events_url": "https://api.github.com/users/thatjpk/events{/privacy}",
"received_events_url": "https://api.github.com/users/thatjpk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7440/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6085
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6085/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6085/comments
|
https://api.github.com/repos/ollama/ollama/issues/6085/events
|
https://github.com/ollama/ollama/pull/6085
| 2,438,972,450
|
PR_kwDOJ0Z1Ps527_at
| 6,085
|
commit
|
{
"login": "rpreslar4765",
"id": 89657947,
"node_id": "MDQ6VXNlcjg5NjU3OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/89657947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rpreslar4765",
"html_url": "https://github.com/rpreslar4765",
"followers_url": "https://api.github.com/users/rpreslar4765/followers",
"following_url": "https://api.github.com/users/rpreslar4765/following{/other_user}",
"gists_url": "https://api.github.com/users/rpreslar4765/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rpreslar4765/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rpreslar4765/subscriptions",
"organizations_url": "https://api.github.com/users/rpreslar4765/orgs",
"repos_url": "https://api.github.com/users/rpreslar4765/repos",
"events_url": "https://api.github.com/users/rpreslar4765/events{/privacy}",
"received_events_url": "https://api.github.com/users/rpreslar4765/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-31T02:19:04
| 2024-07-31T20:12:15
| 2024-07-31T20:12:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6085",
"html_url": "https://github.com/ollama/ollama/pull/6085",
"diff_url": "https://github.com/ollama/ollama/pull/6085.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6085.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6085/reactions",
"total_count": 4,
"+1": 0,
"-1": 2,
"laugh": 1,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6085/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6522
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6522/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6522/comments
|
https://api.github.com/repos/ollama/ollama/issues/6522/events
|
https://github.com/ollama/ollama/pull/6522
| 2,488,023,796
|
PR_kwDOJ0Z1Ps55gKa5
| 6,522
|
detect chat template from configs that contain lists
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-27T00:32:21
| 2024-08-28T18:04:20
| 2024-08-28T18:04:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6522",
"html_url": "https://github.com/ollama/ollama/pull/6522",
"diff_url": "https://github.com/ollama/ollama/pull/6522.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6522.patch",
"merged_at": "2024-08-28T18:04:18"
}
|
models like [hermes3](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B/blob/main/tokenizer_config.json#L2053) have a list of chat templates
```json
"chat_template": [
{
"name": "default",
"template": "{{bos_token}}{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
},
{
"name": "tool_use",
"template": "{%- macro json_to_python_type(json_spec) %}\n{%- set basic_type_map = {\n \"string\": \"str\",\n \"number\": \"float\",\n \"integer\": \"int\",\n \"boolean\": \"bool\"\n} %}\n\n{%- if basic_type_map[json_spec.type] is defined %}\n {{- basic_type_map[json_spec.type] }}\n{%- elif json_spec.type == \"array\" %}\n {{- \"list[\" + json_to_python_type(json_spec|items) + \"]\"}}\n{%- elif json_spec.type == \"object\" %}\n {%- if json_spec.additionalProperties is defined %}\n {{- \"dict[str, \" + json_to_python_type(json_spec.additionalProperties) + ']'}}\n {%- else %}\n {{- \"dict\" }}\n {%- endif %}\n{%- elif json_spec.type is iterable %}\n {{- \"Union[\" }}\n {%- for t in json_spec.type %}\n {{- json_to_python_type({\"type\": t}) }}\n {%- if not loop.last %}\n {{- \",\" }} \n {%- endif %}\n {%- endfor %}\n {{- \"]\" }}\n{%- else %}\n {{- \"Any\" }}\n{%- endif %}\n{%- endmacro %}\n\n\n{{- bos_token }}\n{{- \"You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. 
Here are the available tools: <tools> \" }}\n{%- for tool in tools %}\n {%- if tool.function is defined %}\n {%- set tool = tool.function %}\n {%- endif %}\n {{- '{\"type\": \"function\", \"function\": ' }}\n {{- '{\"name\": \"' + tool.name + '\", ' }}\n {{- '\"description\": \"' + tool.name + '(' }}\n {%- for param_name, param_fields in tool.parameters.properties|items %}\n {{- param_name + \": \" + json_to_python_type(param_fields) }}\n {%- if not loop.last %}\n {{- \", \" }}\n {%- endif %}\n {%- endfor %}\n {{- \")\" }}\n {%- if tool.return is defined %}\n {{- \" -> \" + json_to_python_type(tool.return) }}\n {%- endif %}\n {{- \" - \" + tool.description + \"\\n\\n\" }}\n {%- for param_name, param_fields in tool.parameters.properties|items %}\n {%- if loop.first %}\n {{- \" Args:\\n\" }}\n {%- endif %}\n {{- \" \" + param_name + \"(\" + json_to_python_type(param_fields) + \"): \" + param_fields.description|trim }}\n {%- endfor %}\n {%- if tool.return is defined and tool.return.description is defined %}\n {{- \"\\n Returns:\\n \" + tool.return.description }}\n {%- endif %}\n {{- '\"' }}\n {{- ', \"parameters\": ' }}\n {%- if tool.parameters.properties | length == 0 %}\n {{- \"{}\" }}\n {%- else %}\n {{- tool.parameters|tojson }}\n {%- endif %}\n {{- \"}\" }}\n {%- if not loop.last %}\n {{- \"\\n\" }}\n {%- endif %}\n{%- endfor %}\n{{- \" </tools>\" }}\n{{- 'Use the following pydantic model json schema for each tool call you will make: {\"properties\": {\"name\": {\"title\": \"Name\", \"type\": \"string\"}, \"arguments\": {\"title\": \"Arguments\", \"type\": \"object\"}}, \"required\": [\"name\", \"arguments\"], \"title\": \"FunctionCall\", \"type\": \"object\"}}\n' }}\n{{- \"For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:\n\" }}\n{{- \"<tool_call>\n\" }}\n{{- '{\"name\": <function-name>, \"arguments\": <args-dict>}\n' }}\n{{- '</tool_call><|im_end|>' }}\n{%- for message in messages 
%}\n {%- if message.role == \"user\" or message.role == \"system\" or (message.role == \"assistant\" and message.tool_calls is not defined) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- for tool_call in message.tool_calls %}\n {{- '\n<tool_call>\n' }} {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '{' }}\n {{- '\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\"}' }}\n {{- ', '}}\n {%- if tool_call.arguments is defined %}\n {{- '\"arguments\": ' }}\n {{- tool_call.arguments|tojson }}\n {%- endif %}\n {{- '\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if not message.name is defined %}\n {{- raise_exception(\"Tool response dicts require a 'name' key indicating the name of the called function!\") }}\n {%- endif %}\n {%- if loop.previtem and loop.previtem.role != \"tool\" %}\n {{- '<|im_start|>tool\\n' }}\n {%- endif %}\n {{- '<tool_response>\\n' }}\n {{- message.content }}\n {%- if not loop.last %}\n {{- '\\n</tool_response>\\n' }}\n {%- else %}\n {{- '\\n</tool_response>' }}\n {%- endif %}\n {%- if not loop.last and loop.nextitem.role != \"tool\" %}\n {{- '<|im_end|>' }}\n {%- elif loop.last %}\n {{- '<|im_end|>' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n"
}
],
```
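A minimal Python sketch of the detection this PR describes (illustrative only: the actual change is in Ollama's Go codebase, and the "prefer the entry named `default`, else fall back to the first entry" policy here is an assumption about the intended behavior):

```python
from typing import Optional


def detect_chat_template(config: dict) -> Optional[str]:
    """Return a usable chat template from a tokenizer_config.json dict.

    Handles both shapes seen in the wild:
      - "chat_template": "<jinja string>"
      - "chat_template": [{"name": ..., "template": ...}, ...]
    """
    tmpl = config.get("chat_template")
    if isinstance(tmpl, str):
        return tmpl
    if isinstance(tmpl, list):
        # Prefer the entry named "default"; fall back to the first entry.
        for entry in tmpl:
            if entry.get("name") == "default":
                return entry.get("template")
        if tmpl:
            return tmpl[0].get("template")
    return None
```

For hermes3's config above, this would return the `default` entry's template even though `tool_use` is also present.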
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6522/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/273
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/273/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/273/comments
|
https://api.github.com/repos/ollama/ollama/issues/273/events
|
https://github.com/ollama/ollama/pull/273
| 1,835,857,757
|
PR_kwDOJ0Z1Ps5XJsAl
| 273
|
Create a sentiments example
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-03T23:39:06
| 2023-08-31T23:32:00
| 2023-08-31T23:31:59
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/273",
"html_url": "https://github.com/ollama/ollama/pull/273",
"diff_url": "https://github.com/ollama/ollama/pull/273.diff",
"patch_url": "https://github.com/ollama/ollama/pull/273.patch",
"merged_at": "2023-08-31T23:31:59"
}
|
A simple example of sentiment analysis, plus a script that writes lists of 10 tweets.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/273/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8290
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8290/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8290/comments
|
https://api.github.com/repos/ollama/ollama/issues/8290/events
|
https://github.com/ollama/ollama/issues/8290
| 2,766,879,372
|
I_kwDOJ0Z1Ps6k6zqM
| 8,290
|
pull model manifest: open /usr/local/bin/ollama/.ollama/xxx: not a directory
|
{
"login": "18279811184",
"id": 35674790,
"node_id": "MDQ6VXNlcjM1Njc0Nzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/35674790?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/18279811184",
"html_url": "https://github.com/18279811184",
"followers_url": "https://api.github.com/users/18279811184/followers",
"following_url": "https://api.github.com/users/18279811184/following{/other_user}",
"gists_url": "https://api.github.com/users/18279811184/gists{/gist_id}",
"starred_url": "https://api.github.com/users/18279811184/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/18279811184/subscriptions",
"organizations_url": "https://api.github.com/users/18279811184/orgs",
"repos_url": "https://api.github.com/users/18279811184/repos",
"events_url": "https://api.github.com/users/18279811184/events{/privacy}",
"received_events_url": "https://api.github.com/users/18279811184/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 1
| 2025-01-03T02:13:20
| 2025-01-24T09:51:31
| 2025-01-24T09:51:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I modified Ollama's configuration file /etc/systemd/system/ollama.service and restarted the service, `ollama pull` failed with this error:
pulling manifest
Error: pull model manifest: open /usr/local/bin/ollama/.ollama/id_ed25519: not a directory
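A likely cause (my guess, not confirmed): one of the environment variables set in the edited service file, such as `HOME` or `OLLAMA_MODELS`, points at the ollama *binary* at `/usr/local/bin/ollama` rather than at a directory, so any path beneath it fails with `ENOTDIR`. This small sketch (using a hypothetical temp path as a stand-in for the binary) reproduces that error class on POSIX systems:

```python
import os
import tempfile


def open_through_regular_file():
    """Return the exception type raised when a path component is a
    regular file rather than a directory (as /usr/local/bin/ollama is)."""
    with tempfile.TemporaryDirectory() as d:
        fake_binary = os.path.join(d, "ollama")
        with open(fake_binary, "w") as f:
            f.write("")                 # stand-in for the real executable

        bad = os.path.join(fake_binary, ".ollama", "id_ed25519")
        try:
            open(bad)
        except OSError as e:
            return type(e)
    return None


print(open_through_regular_file().__name__)
```

If that is the cause, the fix is to point those variables at a real directory (e.g. a dedicated data directory) in the service file and run `systemctl daemon-reload` before restarting.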
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8290/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1231
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1231/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1231/comments
|
https://api.github.com/repos/ollama/ollama/issues/1231/events
|
https://github.com/ollama/ollama/issues/1231
| 2,005,302,301
|
I_kwDOJ0Z1Ps53hnwd
| 1,231
|
`ollama run llama2` on m1 macbook fails after fresh install
|
{
"login": "johnlarkin1",
"id": 18692931,
"node_id": "MDQ6VXNlcjE4NjkyOTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/18692931?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnlarkin1",
"html_url": "https://github.com/johnlarkin1",
"followers_url": "https://api.github.com/users/johnlarkin1/followers",
"following_url": "https://api.github.com/users/johnlarkin1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnlarkin1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnlarkin1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnlarkin1/subscriptions",
"organizations_url": "https://api.github.com/users/johnlarkin1/orgs",
"repos_url": "https://api.github.com/users/johnlarkin1/repos",
"events_url": "https://api.github.com/users/johnlarkin1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnlarkin1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 8
| 2023-11-21T23:19:18
| 2024-02-20T01:11:41
| 2024-02-20T01:11:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello! After downloading the desktop application, I hit the following error:
```
╰─➤ ollama run llama2
Error: llama runner process has terminated
```
It also seemingly borks my computer for a second; I'm not even able to use my trackpad (probably due to memory pressure on my machine).
I can upload portions of my `server.log` on request. I'd love any help or workaround.
```
╰─➤ tail -n 25 ~/.ollama/logs/server.log
ggml_metal_init: loaded kernel_mul_mm_q6_K_f32 0x1206d5bd0 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_rope_f32 0x1206d6370 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_rope_f16 0x1206d6b60 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_alibi_f32 0x1206d73d0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_f32_f16 0x1206d7f50 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_f32_f32 0x1206d8ad0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_f16_f16 0x1206d9650 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_concat 0x1206d9d30 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sqr 0x1206da610 | th_max = 1024 | th_width = 32
ggml_metal_init: GPU name: Apple M1
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 5461.34 MB
ggml_metal_init: maxTransferRate = built-in GPU
llama_new_context_with_model: compute buffer total size = 294.13 MB
llama_new_context_with_model: max tensor size = 102.54 MB
ggml_metal_add_buffer: allocated 'data ' buffer, size = 3648.58 MB, ( 3649.08 / 5461.34)
ggml_metal_add_buffer: allocated 'kv ' buffer, size = 2048.02 MB, ( 5697.09 / 5461.34), warning: current allocated size is greater than the recommended max working set size
ggml_metal_add_buffer: allocated 'alloc ' buffer, size = 288.02 MB, ( 5985.11 / 5461.34), warning: current allocated size is greater than the recommended max working set size
ggml_metal_graph_compute: command buffer 0 failed with status 5
GGML_ASSERT: /Users/jmorgan/workspace/ollama/llm/llama.cpp/gguf/ggml-metal.m:1508: false
2023/11/21 18:14:57 llama.go:435: signal: abort trap
2023/11/21 18:14:57 llama.go:443: error starting llama runner: llama runner process has terminated
2023/11/21 18:14:57 llama.go:509: llama runner stopped successfully
[GIN] 2023/11/21 - 18:14:57 | 500 | 6.678189916s | 127.0.0.1 | POST "/api/generate"
```
Also other version details:
```
╰─➤ ollama -v
ollama version 0.1.11
```
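The crash appears to follow directly from the two working-set warnings in the log: summing the Metal buffers (figures taken straight from the log above; small rounding differences aside, since the cumulative totals in the log include a tiny extra allocation) shows the allocations exceed `recommendedMaxWorkingSetSize`, which is consistent with the command buffer failing and the subsequent `GGML_ASSERT`:

```python
# Sum the Metal buffers from the log above and compare with the
# recommendedMaxWorkingSetSize reported by ggml_metal_init (all MB).
max_working_set = 5461.34

buffers = {
    "data":  3648.58,   # model weights
    "kv":    2048.02,   # KV cache
    "alloc": 288.02,    # compute scratch
}

total = sum(buffers.values())
print(f"allocated: {total:.2f} MB, limit: {max_working_set:.2f} MB")
print(f"over budget by {total - max_working_set:.2f} MB")
```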
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1231/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7353
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7353/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7353/comments
|
https://api.github.com/repos/ollama/ollama/issues/7353/events
|
https://github.com/ollama/ollama/issues/7353
| 2,613,301,190
|
I_kwDOJ0Z1Ps6bw8_G
| 7,353
|
Does ollama have other model support plans?Such as TTS, graphics, video, etc
|
{
"login": "E218PQ",
"id": 110892042,
"node_id": "U_kgDOBpwUCg",
"avatar_url": "https://avatars.githubusercontent.com/u/110892042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/E218PQ",
"html_url": "https://github.com/E218PQ",
"followers_url": "https://api.github.com/users/E218PQ/followers",
"following_url": "https://api.github.com/users/E218PQ/following{/other_user}",
"gists_url": "https://api.github.com/users/E218PQ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/E218PQ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/E218PQ/subscriptions",
"organizations_url": "https://api.github.com/users/E218PQ/orgs",
"repos_url": "https://api.github.com/users/E218PQ/repos",
"events_url": "https://api.github.com/users/E218PQ/events{/privacy}",
"received_events_url": "https://api.github.com/users/E218PQ/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 2
| 2024-10-25T07:18:58
| 2024-11-05T00:52:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
We deeply appreciate the convenience, speed, and power of Ollama. To cover more application scenarios, we hope Ollama can add support for other model categories, such as text-to-speech, text-to-image, and text-to-video generation. With the rapid development of AI, demand for these capabilities will only increase. We hope you will consider this carefully.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7353/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7353/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1293
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1293/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1293/comments
|
https://api.github.com/repos/ollama/ollama/issues/1293/events
|
https://github.com/ollama/ollama/issues/1293
| 2,013,296,184
|
I_kwDOJ0Z1Ps54AHY4
| 1,293
|
Ollama list modified column shows when the model was last pulled, rather than when last modified
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2023-11-27T23:18:06
| 2023-11-27T23:18:06
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If you pull a model but there are no changes, the modified column will show that the model was modified seconds ago, even if it hadn't actually been modified in weeks. It should show the last time the model was actually modified.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1293/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1293/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6556
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6556/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6556/comments
|
https://api.github.com/repos/ollama/ollama/issues/6556/events
|
https://github.com/ollama/ollama/issues/6556
| 2,494,893,262
|
I_kwDOJ0Z1Ps6UtQzO
| 6,556
|
cuda_v12 returns poor results or crashes for Driver Version: 525.147.05
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-08-29T15:34:49
| 2024-09-04T00:15:32
| 2024-09-04T00:15:32
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Between 0.3.7-rc5 and 0.3.7-rc6 the default CUDA driver was switched from v11 to v12 and results from a variety of models degraded. I first noticed this with 0.3.7-rc6 but the problem also exists in -rc4 if OLLAMA_LLM_LIBRARY is set to cuda_v12. The problem persists into 0.3.8.
```
$ for l in cuda_v11 cuda_v12 ; do for m in hermes3:8b-llama3.1-q4_0 llama3.1 qwen2:1.5b ; do echo $l $m ; OLLAMA_LLM_LIBRARY=$l OLLAMA_DOCKER_TAG=0.3.8 docker compose up -d ollama 2>/dev/null && sleep 2 && curl -s localhost:11434/api/chat -d '{"model":"'$m'","messages":[{"role":"user","content":"say 'hello'"}],"stream":false}' | jq '{"response":"\(.message.content)","error":"\(.error)"}' ; done ; done
cuda_v11 hermes3:8b-llama3.1-q4_0
{
"response": "\nHello! How can I assist you today?",
"error": "null"
}
cuda_v11 llama3.1
{
"response": "Hello! How can I assist you today?",
"error": "null"
}
cuda_v11 qwen2:1.5b
{
"response": "Hello! How can I help you today? Is there anything specific you'd like to talk about or learn more about? Please feel free to ask me any questions or provide more information.",
"error": "null"
}
cuda_v12 hermes3:8b-llama3.1-q4_0
{
"response": "null",
"error": "an unknown error was encountered while running the model CUDA error: an illegal memory access was encountered\n current device: 0, in function ggml_backend_cuda_synchronize at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2416\n cudaStreamSynchronize(cuda_ctx->stream())\n/go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:101: CUDA error"
}
cuda_v12 llama3.1
{
"response": "Hello! How can I'm happy to help you with something?",
"error": "null"
}
cuda_v12 qwen2:1.5b
{
"response": "null",
"error": "an unknown error was encountered while running the model CUDA error: an illegal memory access was encountered\n current device: 0, in function ggml_backend_cuda_synchronize at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2416\n cudaStreamSynchronize(cuda_ctx->stream())\n/go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:101: CUDA error"
}
```
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05 Driver Version: 525.147.05 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| 30% 44C P8 8W / 200W | 6251MiB / 12282MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 33633 C /app/.venv/bin/python 1070MiB |
| 0 N/A N/A 1902358 C ...a_v12/ollama_llama_server 5178MiB |
+-----------------------------------------------------------------------------+
```
The problem does not occur on systems with more recent Nvidia drivers (`NVIDIA-SMI 535.183.01 Driver Version: 535.183.01 CUDA Version: 12.2`, `NVIDIA-SMI 550.90.07 Driver Version: 550.90.07 CUDA Version: 12.4`):
```
cuda_v11 hermes3:8b-llama3.1-q4_0
{
"response": "\nHello! How can I assist you today?",
"error": "null"
}
cuda_v11 llama3.1
{
"response": "Hello! How can I assist you today?",
"error": "null"
}
cuda_v11 qwen2:1.5b
{
"response": "Hello! How can I assist you today?",
"error": "null"
}
cuda_v12 hermes3:8b-llama3.1-q4_0
{
"response": "\nHello! How can I assist you today?",
"error": "null"
}
cuda_v12 llama3.1
{
"response": "Hello! How can I assist you today?",
"error": "null"
}
cuda_v12 qwen2:1.5b
{
"response": "Hello! How can I assist you today?",
"error": "null"
}
```
This is more of an FYI, since it can be worked around by setting OLLAMA_LLM_LIBRARY or (hopefully, I have yet to try) upgrading the Nvidia driver.
### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.8
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6556/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6556/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2127
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2127/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2127/comments
|
https://api.github.com/repos/ollama/ollama/issues/2127/events
|
https://github.com/ollama/ollama/pull/2127
| 2,092,735,971
|
PR_kwDOJ0Z1Ps5kqMy1
| 2,127
|
Combine the 2 Dockerfiles and add ROCm
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-21T19:39:53
| 2024-01-21T19:49:04
| 2024-01-21T19:49:01
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2127",
"html_url": "https://github.com/ollama/ollama/pull/2127",
"diff_url": "https://github.com/ollama/ollama/pull/2127.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2127.patch",
"merged_at": "2024-01-21T19:49:01"
}
|
This renames Dockerfile.build to replace the old Dockerfile, and adds some new stages to support 2 modes of building - the build_linux.sh script uses intermediate stages to extract the artifacts for ./dist, and the default build generates a container image usable by both cuda and rocm cards. This required transitioning the x86 base to the rocm image to avoid layer bloat.
We should update our Hub landing page with instructions for ROCm. The host needs the ROCm driver.
```
docker run --privileged --device /dev/kfd ollama/ollama
```
(Both privileged and the device flag are necessary to access the rocm driver)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2127/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4184
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4184/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4184/comments
|
https://api.github.com/repos/ollama/ollama/issues/4184/events
|
https://github.com/ollama/ollama/issues/4184
| 2,279,710,783
|
I_kwDOJ0Z1Ps6H4aA_
| 4,184
|
Warning: could not connect to a running Ollama instance
|
{
"login": "rkuo2000",
"id": 3485732,
"node_id": "MDQ6VXNlcjM0ODU3MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3485732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rkuo2000",
"html_url": "https://github.com/rkuo2000",
"followers_url": "https://api.github.com/users/rkuo2000/followers",
"following_url": "https://api.github.com/users/rkuo2000/following{/other_user}",
"gists_url": "https://api.github.com/users/rkuo2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rkuo2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rkuo2000/subscriptions",
"organizations_url": "https://api.github.com/users/rkuo2000/orgs",
"repos_url": "https://api.github.com/users/rkuo2000/repos",
"events_url": "https://api.github.com/users/rkuo2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/rkuo2000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-05-05T20:16:46
| 2024-05-07T20:10:21
| 2024-05-07T19:53:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
curl -fsSL https://ollama.com/install.sh | sh
ollama -v
Warning: could not connect to a running Ollama instance
Warning: client version is 0.1.33
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33
|
{
"login": "rkuo2000",
"id": 3485732,
"node_id": "MDQ6VXNlcjM0ODU3MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3485732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rkuo2000",
"html_url": "https://github.com/rkuo2000",
"followers_url": "https://api.github.com/users/rkuo2000/followers",
"following_url": "https://api.github.com/users/rkuo2000/following{/other_user}",
"gists_url": "https://api.github.com/users/rkuo2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rkuo2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rkuo2000/subscriptions",
"organizations_url": "https://api.github.com/users/rkuo2000/orgs",
"repos_url": "https://api.github.com/users/rkuo2000/repos",
"events_url": "https://api.github.com/users/rkuo2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/rkuo2000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4184/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1673
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1673/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1673/comments
|
https://api.github.com/repos/ollama/ollama/issues/1673/events
|
https://github.com/ollama/ollama/pull/1673
| 2,053,991,502
|
PR_kwDOJ0Z1Ps5ip8DP
| 1,673
|
docs: add Helm Chart link to Package managers list
|
{
"login": "jdetroyes",
"id": 24377095,
"node_id": "MDQ6VXNlcjI0Mzc3MDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/24377095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jdetroyes",
"html_url": "https://github.com/jdetroyes",
"followers_url": "https://api.github.com/users/jdetroyes/followers",
"following_url": "https://api.github.com/users/jdetroyes/following{/other_user}",
"gists_url": "https://api.github.com/users/jdetroyes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jdetroyes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jdetroyes/subscriptions",
"organizations_url": "https://api.github.com/users/jdetroyes/orgs",
"repos_url": "https://api.github.com/users/jdetroyes/repos",
"events_url": "https://api.github.com/users/jdetroyes/events{/privacy}",
"received_events_url": "https://api.github.com/users/jdetroyes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-22T14:19:04
| 2024-02-20T03:05:14
| 2024-02-20T03:05:14
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1673",
"html_url": "https://github.com/ollama/ollama/pull/1673",
"diff_url": "https://github.com/ollama/ollama/pull/1673.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1673.patch",
"merged_at": "2024-02-20T03:05:14"
}
|
Add a link to ArtifactHub in the Package managers section for the Helm Chart.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1673/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8025
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8025/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8025/comments
|
https://api.github.com/repos/ollama/ollama/issues/8025/events
|
https://github.com/ollama/ollama/issues/8025
| 2,729,448,527
|
I_kwDOJ0Z1Ps6isBRP
| 8,025
|
Ollama runs very slowly on ARM CPU (Kunpeng 920)
|
{
"login": "feikiss",
"id": 2208663,
"node_id": "MDQ6VXNlcjIyMDg2NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2208663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/feikiss",
"html_url": "https://github.com/feikiss",
"followers_url": "https://api.github.com/users/feikiss/followers",
"following_url": "https://api.github.com/users/feikiss/following{/other_user}",
"gists_url": "https://api.github.com/users/feikiss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/feikiss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/feikiss/subscriptions",
"organizations_url": "https://api.github.com/users/feikiss/orgs",
"repos_url": "https://api.github.com/users/feikiss/repos",
"events_url": "https://api.github.com/users/feikiss/events{/privacy}",
"received_events_url": "https://api.github.com/users/feikiss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-12-10T08:57:15
| 2025-01-13T01:37:27
| 2025-01-13T01:37:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama is extremely slow on my ARM server (Kunpeng-920 series) even when I use 8 cores. I am using the "qwen-2.5-0.5b_q4" model.
server details:
```text
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.0-136.23.0.99.u37.fos23.aarch64-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: HiSilicon
Model name: Kunpeng-920
Model: 0
Thread(s) per core: 1
Core(s) per cluster: 64
Socket(s): -
Cluster(s): 4
Stepping: 0x1
Frequency boost: disabled
CPU max MHz: 2600.0000
CPU min MHz: 200.0000
BogoMIPS: 200.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs
L1d cache: 16 MiB (256 instances)
L1i cache: 16 MiB (256 instances)
L2 cache: 128 MiB (256 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-31
NUMA node1 CPU(s): 32-63
NUMA node2 CPU(s): 64-95
NUMA node3 CPU(s): 96-127
NUMA node4 CPU(s): 128-159
NUMA node5 CPU(s): 160-191
NUMA node6 CPU(s): 192-223
NUMA node7 CPU(s): 224-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pyzmq==26.2.0
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformers==4.46.3
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.4.post2.dev152+g1f6584ee.d20241127
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect
VLLM_CPU_KVCACHE_SPACE=1
LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/cv2/../../lib64:
```
### OS
Linux
### GPU
_No response_
### CPU
Other
### Ollama version
0.4.2
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8025/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2718
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2718/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2718/comments
|
https://api.github.com/repos/ollama/ollama/issues/2718/events
|
https://github.com/ollama/ollama/issues/2718
| 2,151,989,584
|
I_kwDOJ0Z1Ps6ARMFQ
| 2,718
|
Doc permission requirements for Rocm Docker Image to access /dev/dri and /dev/kfd
|
{
"login": "3lpsy",
"id": 8757851,
"node_id": "MDQ6VXNlcjg3NTc4NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8757851?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3lpsy",
"html_url": "https://github.com/3lpsy",
"followers_url": "https://api.github.com/users/3lpsy/followers",
"following_url": "https://api.github.com/users/3lpsy/following{/other_user}",
"gists_url": "https://api.github.com/users/3lpsy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3lpsy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3lpsy/subscriptions",
"organizations_url": "https://api.github.com/users/3lpsy/orgs",
"repos_url": "https://api.github.com/users/3lpsy/repos",
"events_url": "https://api.github.com/users/3lpsy/events{/privacy}",
"received_events_url": "https://api.github.com/users/3lpsy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-02-24T00:30:35
| 2024-03-24T18:15:05
| 2024-03-24T18:15:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
TLDR: The 0.1.27-rocm cannot find the correct version of rocm libraries.
I start the docker image using the following command:
```
sudo -H -u ollama /usr/bin/podman --runtime /usr/bin/crun run --gpus all --rm -v /usr/share/ollama/.ollama:/root/.ollama -p 11434:11434 --name ollama 'ollama/ollama:0.1.27-rocm'
```
Ollama appears to identify the AMD GPU without issue
```
...omitted for brevity...
msg="Extracting dynamic libraries..."
time=2024-02-24T00:19:07.462Z level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu_avx cuda_v11 rocm_v5 cpu cpu_avx2 rocm_v6]"
time=2024-02-24T00:19:07.462Z level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-24T00:19:07.462Z level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-24T00:19:07.486Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-02-24T00:19:07.486Z level=INFO source=gpu.go:265 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-24T00:19:07.492Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.5.0.50701 /opt/rocm-5.7.1/lib/librocm_smi64.so.5.0.50701]"
time=2024-02-24T00:19:07.504Z level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-24T00:19:07.504Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
```
Then I attempt to run ollama from the client via:
```
echo 'test' | ollama run llama2
```
And observe the following errors:
```
time=2024-02-24T00:19:40.892Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1271109174/rocm_v5/libex
t_server.so"
time=2024-02-24T00:19:40.892Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-02-24T00:19:40.892Z level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama1271109174/rocm_v5/libext_serve
r.so Unable to init GPU: invalid device ordinal"
time=2024-02-24T00:19:40.893Z level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama1271109174/rocm_v6/libext_serve
r.so Unable to load dynamic library: Unable to load dynamic server library: libhipblas.so.2: cannot open shared object file: No such fil
e or directory"
time=2024-02-24T00:19:40.894Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1271109174/cpu_avx2/libe
xt_server.so"
time=2024-02-24T00:19:40.894Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256:8934d96d3f08982e95922
b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
...omitted for brevity...
```
Note the error:
```
Unable to load dynamic library: Unable to load dynamic server library: libhipblas.so.2
```
If I grab a shell on the image, I can see that the `libhipblas.so.2` library does not exist, only .1 and .0 do:
```
$ ls /opt/rocm/lib/libhipblas* | cat
/opt/rocm/lib/libhipblaslt.so
/opt/rocm/lib/libhipblaslt.so.0
/opt/rocm/lib/libhipblaslt.so.0.3.50701
/opt/rocm/lib/libhipblas.so
/opt/rocm/lib/libhipblas.so.1
/opt/rocm/lib/libhipblas.so.1.1.0.50701
```
I ran into a similar issue with a mismatch between library versions when running outside of Docker, which I was able to mitigate as described here: https://github.com/ollama/ollama/issues/2685#issuecomment-1961666228 (TLDR: just symlinking new versions to old versions). I believe even if I fixed the libhipblas.so issue, the other libraries would also need to be fixed as they were in the linked comment. Additionally, the issue here appears to be the opposite of the scenario described in the linked comment, where the old versions exist but ollama wants the new versions (I believe).
I've looked at the `Dockerfile` but don't quite understand how the 0.1.27-rocm image is built, so I am not able to offer guidance on a fix.
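The symlink workaround from the linked comment can be sketched in a sandbox directory; in the real container the targets would be the `/opt/rocm/lib/libhipblas.so.*` files, and the file names below are only stand-ins:

```shell
#!/bin/sh
# Sketch of the "symlink new version to old version" workaround, run in a
# throwaway directory so it is safe to execute anywhere.
set -e
tmp=$(mktemp -d)
# Stand-in for the installed library (e.g. libhipblas.so.1.1.0.50701).
touch "$tmp/libhipblas.so.1.1.0.50701"
ln -s "$tmp/libhipblas.so.1.1.0.50701" "$tmp/libhipblas.so.1"
# Satisfy a loader that asks for the missing .so.2 soname.
ln -s "$tmp/libhipblas.so.1" "$tmp/libhipblas.so.2"
readlink -f "$tmp/libhipblas.so.2"
rm -r "$tmp"
```

Whether the rocm_v6 runner actually works against the older library after such a symlink is a separate question; this only removes the "cannot open shared object file" failure at load time.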
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2718/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4996
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4996/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4996/comments
|
https://api.github.com/repos/ollama/ollama/issues/4996/events
|
https://github.com/ollama/ollama/issues/4996
| 2,348,008,604
|
I_kwDOJ0Z1Ps6L88Sc
| 4,996
|
Apple Silicon macs with 8GB or 16GB slow down when loading larger models
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-12T07:17:14
| 2024-06-12T07:17:14
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Less of the model should be loaded to Metal to avoid causing lag
### OS
macOS
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4996/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3516
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3516/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3516/comments
|
https://api.github.com/repos/ollama/ollama/issues/3516/events
|
https://github.com/ollama/ollama/issues/3516
| 2,229,370,133
|
I_kwDOJ0Z1Ps6E4X0V
| 3,516
|
[Linux] Switch systemd service unit to EnvironmentFile and start providing it in the repository instead
|
{
"login": "C0rn3j",
"id": 1641362,
"node_id": "MDQ6VXNlcjE2NDEzNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1641362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/C0rn3j",
"html_url": "https://github.com/C0rn3j",
"followers_url": "https://api.github.com/users/C0rn3j/followers",
"following_url": "https://api.github.com/users/C0rn3j/following{/other_user}",
"gists_url": "https://api.github.com/users/C0rn3j/gists{/gist_id}",
"starred_url": "https://api.github.com/users/C0rn3j/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/C0rn3j/subscriptions",
"organizations_url": "https://api.github.com/users/C0rn3j/orgs",
"repos_url": "https://api.github.com/users/C0rn3j/repos",
"events_url": "https://api.github.com/users/C0rn3j/events{/privacy}",
"received_events_url": "https://api.github.com/users/C0rn3j/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-04-06T19:12:15
| 2024-04-19T15:41:14
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
Run the Ollama service on Linux via systemd and be able to configure it through a configuration file, without unit overrides.
### How should we solve this?
Stop using `Environment`, use `EnvironmentFile` instead in https://github.com/ollama/ollama/blob/cb03fc9571814edd5af1109bf1a562e813ecb816/scripts/install.sh#L100-L116
https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html#EnvironmentFile=
Edit documentation accordingly - https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-linux
Stop carrying the unit file in install.sh; add it to the repository instead. This allows distributions to cleanly package the unit.
Download the new configuration file as `/etc/ollama.conf.example` and, if `/etc/ollama.conf` does not exist, copy the example there; otherwise keep the user configuration intact.
```systemd
[Service]
EnvironmentFile=/etc/ollama.conf
```
This also allows for having all of the options in the configuration neatly visible, instead of having to wade through help/documentation for simple things.
Example config of mine:
```shell
# /etc/ollama.conf
# The host:port to bind to (default "127.0.0.1:11434")
OLLAMA_HOST=0.0.0.0:11434
# A comma separated list of allowed origins.
OLLAMA_ORIGINS=*://localhost,*://192.168.1.40,*://192.168.1.10
# The path to the models directory (default is "~/.ollama/models")
OLLAMA_MODELS=/models/ollama
HOME=/var/lib/ollama
GIN_MODE=release
```
### What is the impact of not solving this?
Distribution packaging is hard and configuration of ollama has to be done by overriding systemd units.
### Anything else?
Adapting hardening from the Arch Linux unit might be a good idea, already saw people in this repository trying to give Ollama permissions to their home folders which the unit prevents.
https://gitlab.archlinux.org/archlinux/packaging/packages/ollama/-/blob/7418e63fb87fd43277a6051466325081680d1627/ollama.service
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3516/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3516/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8074
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8074/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8074/comments
|
https://api.github.com/repos/ollama/ollama/issues/8074/events
|
https://github.com/ollama/ollama/issues/8074
| 2,736,411,159
|
I_kwDOJ0Z1Ps6jGlIX
| 8,074
|
Windows NUMA 4 socket, 144 core system, default thread count causes very poor performance
|
{
"login": "Panican-Whyasker",
"id": 191496755,
"node_id": "U_kgDOC2oCMw",
"avatar_url": "https://avatars.githubusercontent.com/u/191496755?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Panican-Whyasker",
"html_url": "https://github.com/Panican-Whyasker",
"followers_url": "https://api.github.com/users/Panican-Whyasker/followers",
"following_url": "https://api.github.com/users/Panican-Whyasker/following{/other_user}",
"gists_url": "https://api.github.com/users/Panican-Whyasker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Panican-Whyasker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Panican-Whyasker/subscriptions",
"organizations_url": "https://api.github.com/users/Panican-Whyasker/orgs",
"repos_url": "https://api.github.com/users/Panican-Whyasker/repos",
"events_url": "https://api.github.com/users/Panican-Whyasker/events{/privacy}",
"received_events_url": "https://api.github.com/users/Panican-Whyasker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 8
| 2024-12-12T16:48:02
| 2024-12-13T20:02:11
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
A 135M-parameter model only yielded 4 words after running for 3.5 hours on one 36-core CPU @ 100% load.
A 3.8B model yielded only 10 words after 10.5 hours on the same machine.
Prompt in both cases: "Introduce yourself."
Windows Server 2016 OS (direct install, no Docker).
Ollama 0.5.1 on 4 Xeon Gold 6140 CPUs (144 logical cores in total) and 768 GB of system RAM (6-channel, NUMA architecture).
No GPU.
Tried two small LLMs for starters, namely smollm:135m and phi3.5 (3.8B).
The correct runner for that CPU type was loaded (cpu_avx2).
smollm:135m was saying (after 3.5 h): "I'm thrilled to introduce..."
phi3.5 (3.8B) was saying (after 10.5 h): "Hello! I am Phi, an artificial intelligence designed to interact..."
I have run larger LLMs with Q4 and FP16 quantizations on a much older server machine running Windows 10 with dual Xeons 5600 (Intel Westmere, no AVX), 288 GB of RAM (and no GPU), and the "cpu" runner worked fine. Indeed, a 30B Q4 model runs very slowly (~one word/second), but nothing like one word/hour!!!
On the newer machine (Win Server 2016), Ollama seems to run 288 parallel threads on one of four 36-core (logical) CPU; here's an excerpt from the server.log:
time=2024-12-12T16:47:26.192+01:00 level=INFO source=runner.go:942 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=288
On the older machine (Win 10 Pro x64), Ollama used both CPUs and the load peaked at ~60%. RAM is DDR3 @ 1333 MHz, 3 channels/CPU (6 channels for DDR4 @ 2666 MHz on the newer machine).
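One possible mitigation for the `threads=288` oversubscription is to pin the thread count explicitly via Ollama's `num_thread` parameter, e.g. in a Modelfile (the base model name here is taken from the report; the value 36 is an assumption matching one NUMA node's logical cores and may need tuning):

```
FROM phi3.5
PARAMETER num_thread 36
```

The same parameter can be set per-request through the API `options` object, which avoids rebuilding the model.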
### OS
Windows
### GPU
Other
### CPU
Intel
### Ollama version
0.5.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8074/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8074/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4040
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4040/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4040/comments
|
https://api.github.com/repos/ollama/ollama/issues/4040/events
|
https://github.com/ollama/ollama/pull/4040
| 2,270,734,561
|
PR_kwDOJ0Z1Ps5uGtpd
| 4,040
|
docs: add Guix package manager in README.
|
{
"login": "tusharhero",
"id": 54012021,
"node_id": "MDQ6VXNlcjU0MDEyMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/54012021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tusharhero",
"html_url": "https://github.com/tusharhero",
"followers_url": "https://api.github.com/users/tusharhero/followers",
"following_url": "https://api.github.com/users/tusharhero/following{/other_user}",
"gists_url": "https://api.github.com/users/tusharhero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tusharhero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tusharhero/subscriptions",
"organizations_url": "https://api.github.com/users/tusharhero/orgs",
"repos_url": "https://api.github.com/users/tusharhero/repos",
"events_url": "https://api.github.com/users/tusharhero/events{/privacy}",
"received_events_url": "https://api.github.com/users/tusharhero/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-30T07:40:33
| 2024-05-09T18:10:24
| 2024-05-09T18:10:24
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4040",
"html_url": "https://github.com/ollama/ollama/pull/4040",
"diff_url": "https://github.com/ollama/ollama/pull/4040.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4040.patch",
"merged_at": "2024-05-09T18:10:24"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4040/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7884
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7884/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7884/comments
|
https://api.github.com/repos/ollama/ollama/issues/7884/events
|
https://github.com/ollama/ollama/pull/7884
| 2,706,378,944
|
PR_kwDOJ0Z1Ps6Dm84b
| 7,884
|
server: move /api/version to use http.Handler
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-11-29T23:31:14
| 2025-01-14T06:24:37
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7884",
"html_url": "https://github.com/ollama/ollama/pull/7884",
"diff_url": "https://github.com/ollama/ollama/pull/7884.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7884.patch",
"merged_at": null
}
|
also adds tests for the /api/version endpoint
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7884/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7313
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7313/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7313/comments
|
https://api.github.com/repos/ollama/ollama/issues/7313/events
|
https://github.com/ollama/ollama/pull/7313
| 2,605,028,865
|
PR_kwDOJ0Z1Ps5_cSjp
| 7,313
|
Add support for RWKV
|
{
"login": "MollySophia",
"id": 20746884,
"node_id": "MDQ6VXNlcjIwNzQ2ODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/20746884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MollySophia",
"html_url": "https://github.com/MollySophia",
"followers_url": "https://api.github.com/users/MollySophia/followers",
"following_url": "https://api.github.com/users/MollySophia/following{/other_user}",
"gists_url": "https://api.github.com/users/MollySophia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MollySophia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MollySophia/subscriptions",
"organizations_url": "https://api.github.com/users/MollySophia/orgs",
"repos_url": "https://api.github.com/users/MollySophia/repos",
"events_url": "https://api.github.com/users/MollySophia/events{/privacy}",
"received_events_url": "https://api.github.com/users/MollySophia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-10-22T10:33:15
| 2025-01-11T00:37:02
| 2024-12-21T06:04:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7313",
"html_url": "https://github.com/ollama/ollama/pull/7313",
"diff_url": "https://github.com/ollama/ollama/pull/7313.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7313.patch",
"merged_at": null
}
|
Changes in this PR:
- Added a patch on llama.cpp with upstream commits [10433e8](https://github.com/ggerganov/llama.cpp/commit/10433e8b457c4cfd759cbb41fc55fc398db4a5da), [4ff7fe1](https://github.com/ggerganov/llama.cpp/commit/4ff7fe1fb36b04ddd158b2de881c348c5f0ff5e4), and [11d4705](https://github.com/ggerganov/llama.cpp/commit/11d47057a51f3d9b9231e6b57d0ca36020c0ee99). These fix the problems that RWKV GGUF models cannot be loaded and that conversations cannot be correctly stopped. I guess this patch can be removed after ollama syncs the llama.cpp submodule next time.
- Added a simple template for chatting for RWKV models.
I'm not sure if these are the correct way to fix the problems. Thanks in advance for any suggestions and reviews!
This closes #7223
|
{
"login": "MollySophia",
"id": 20746884,
"node_id": "MDQ6VXNlcjIwNzQ2ODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/20746884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MollySophia",
"html_url": "https://github.com/MollySophia",
"followers_url": "https://api.github.com/users/MollySophia/followers",
"following_url": "https://api.github.com/users/MollySophia/following{/other_user}",
"gists_url": "https://api.github.com/users/MollySophia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MollySophia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MollySophia/subscriptions",
"organizations_url": "https://api.github.com/users/MollySophia/orgs",
"repos_url": "https://api.github.com/users/MollySophia/repos",
"events_url": "https://api.github.com/users/MollySophia/events{/privacy}",
"received_events_url": "https://api.github.com/users/MollySophia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7313/reactions",
"total_count": 8,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/7313/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2604
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2604/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2604/comments
|
https://api.github.com/repos/ollama/ollama/issues/2604/events
|
https://github.com/ollama/ollama/pull/2604
| 2,143,533,698
|
PR_kwDOJ0Z1Ps5nWaPn
| 2,604
|
Support for `bert` and `nomic-bert` embedding models
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-20T04:46:07
| 2024-02-21T02:37:30
| 2024-02-21T02:37:29
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2604",
"html_url": "https://github.com/ollama/ollama/pull/2604",
"diff_url": "https://github.com/ollama/ollama/pull/2604.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2604.patch",
"merged_at": "2024-02-21T02:37:29"
}
|
Fixes #327
This adds initial support for embedding models using the `/api/embeddings` endpoint.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2604/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2604/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5906
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5906/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5906/comments
|
https://api.github.com/repos/ollama/ollama/issues/5906/events
|
https://github.com/ollama/ollama/issues/5906
| 2,427,053,469
|
I_kwDOJ0Z1Ps6QqeWd
| 5,906
|
Something wrong when using Ollama + Qdrant: Vector dimension error: expected dim: 1536, got 768
|
{
"login": "AI-Beans",
"id": 58964439,
"node_id": "MDQ6VXNlcjU4OTY0NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/58964439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI-Beans",
"html_url": "https://github.com/AI-Beans",
"followers_url": "https://api.github.com/users/AI-Beans/followers",
"following_url": "https://api.github.com/users/AI-Beans/following{/other_user}",
"gists_url": "https://api.github.com/users/AI-Beans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI-Beans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI-Beans/subscriptions",
"organizations_url": "https://api.github.com/users/AI-Beans/orgs",
"repos_url": "https://api.github.com/users/AI-Beans/repos",
"events_url": "https://api.github.com/users/AI-Beans/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI-Beans/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-07-24T09:28:49
| 2024-07-24T09:30:36
| 2024-07-24T09:30:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?

I use an Ollama embedding model and a chat model, and both return correct responses.
But Qdrant responds with: `Vector dimension error: expected dim: 1536, got 768`.
Where can I configure this parameter?

### OS
_No response_
### GPU
Nvidia, Intel
### CPU
Intel
### Ollama version
0.2.8
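The mismatch here is almost certainly a Qdrant collection created for 1536-dimensional vectors (OpenAI's ada-002 size) being fed 768-dimensional vectors from an Ollama embedding model. A minimal sketch of the check behind the error, assuming the fix is to recreate the collection with a matching size (e.g. `VectorParams(size=768)` in `qdrant-client`):

```python
# Sketch of the per-collection dimension check behind Qdrant's error message.
# The fix is on the Qdrant side: create the collection with size=768 to match
# the Ollama embedding model's output, not the 1536 used by OpenAI ada-002.
def check_dim(vector, expected_dim):
    """Mimic Qdrant's validation: vector size is fixed when a collection is created."""
    if len(vector) != expected_dim:
        raise ValueError(
            f"Vector dimension error: expected dim: {expected_dim}, got {len(vector)}"
        )
    return True
```

Changing the embedding model later changes the vector size, so the collection must be recreated to match.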
|
{
"login": "AI-Beans",
"id": 58964439,
"node_id": "MDQ6VXNlcjU4OTY0NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/58964439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI-Beans",
"html_url": "https://github.com/AI-Beans",
"followers_url": "https://api.github.com/users/AI-Beans/followers",
"following_url": "https://api.github.com/users/AI-Beans/following{/other_user}",
"gists_url": "https://api.github.com/users/AI-Beans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI-Beans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI-Beans/subscriptions",
"organizations_url": "https://api.github.com/users/AI-Beans/orgs",
"repos_url": "https://api.github.com/users/AI-Beans/repos",
"events_url": "https://api.github.com/users/AI-Beans/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI-Beans/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5906/timeline
| null |
completed
| false
|