Dataset schema (column, dtype, observed lengths/values):

| column | dtype | observed |
|---|---|---|
| url | string | lengths 51 to 54 |
| repository_url | string | 1 value |
| labels_url | string | lengths 65 to 68 |
| comments_url | string | lengths 60 to 63 |
| events_url | string | lengths 58 to 61 |
| html_url | string | lengths 39 to 44 |
| id | int64 | 1.78B to 2.82B |
| node_id | string | lengths 18 to 19 |
| number | int64 | 1 to 8.69k |
| title | string | lengths 1 to 382 |
| user | dict | |
| labels | list | lengths 0 to 5 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0 to 2 |
| milestone | null | |
| comments | int64 | 0 to 323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 values |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 2 to 118k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 60 to 63 |
| performed_via_github_app | null | |
| state_reason | string | 4 values |
| is_pull_request | bool | 2 classes |

Sample rows, one labeled field per line:

url: https://api.github.com/repos/ollama/ollama/issues/655
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/655/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/655/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/655/events
html_url: https://github.com/ollama/ollama/issues/655
id: 1920161554
node_id: I_kwDOJ0Z1Ps5yc1cS
number: 655
title: Question: where is ollama.ai website source?
user: { "login": "jamesbraza", "id": 8990777, "node_id": "MDQ6VXNlcjg5OTA3Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesbraza", "html_url": "https://github.com/jamesbraza", "followers_url": "https://api.github.com/users...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 3
created_at: 2023-09-30T07:45:04
updated_at: 2023-12-04T19:56:41
closed_at: 2023-12-04T19:56:40
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: I was going to try to make some docs PRs into [ollama.ai](https://ollama.ai/). Where is the source code for the website?
closed_by: { "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/655/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/294
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/294/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/294/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/294/events
html_url: https://github.com/ollama/ollama/issues/294
id: 1838026789
node_id: I_kwDOJ0Z1Ps5tjhAl
number: 294
title: Streaming responses should have `Content-Type` set to `application/x-ndjson `
user: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5667396210, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg...
state: closed
locked: false
assignee: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
assignees: [ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
milestone: null
comments: 4
created_at: 2023-08-06T03:26:04
updated_at: 2024-01-27T00:21:55
closed_at: 2023-08-09T04:38:40
author_association: MEMBER
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Currently streaming responses return `text/plain` but they should return `application/x-ndjson `. Later we should consider `application/json` (see #281) or `text/event-stream` for browser based clients
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/294/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
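
The body above argues that streamed responses are newline-delimited JSON and should say so in the `Content-Type`. A minimal client-side sketch of why that matters, assuming the `requests` library and the documented `/api/generate` endpoint; the model name is a placeholder.

```python
import json
import requests

# Each line of a streaming Ollama response is one JSON object (NDJSON),
# which is why a precise Content-Type like application/x-ndjson helps
# generic clients pick the right parser.
with requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "hi", "stream": True},
    stream=True,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue  # skip keep-alive blank lines
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)
        if chunk.get("done"):
            break
```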

url: https://api.github.com/repos/ollama/ollama/issues/1100
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1100/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1100/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1100/events
html_url: https://github.com/ollama/ollama/issues/1100
id: 1989515984
node_id: I_kwDOJ0Z1Ps52lZrQ
number: 1100
title: asking a LLM to process a csv file as a source for data
user: { "login": "igorschlum", "id": 2884312, "node_id": "MDQ6VXNlcjI4ODQzMTI=", "avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/igorschlum", "html_url": "https://github.com/igorschlum", "followers_url": "https://api.github.com/users...
labels: [ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2023-11-12T18:24:04
updated_at: 2024-05-06T23:26:32
closed_at: 2024-05-06T23:26:31
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: I don't know how to ask Ollama to process a csv file. When I ask Falcon or Llama2 to do so, they give me instructions that are not functional. I tried a prompt like this: "$(cat /Users/igor/Documents/text.txt)" please translate this text into English. Falcon: Yes, I can translate it for you. However, I need the...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1100/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
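
The question above is about feeding a file's contents to a model. A sketch that reads the file in code rather than via shell substitution, assuming the documented `/api/generate` endpoint and that the file fits in the context window; the path is the one from the issue, the model name is a placeholder.

```python
import requests

# Read the file ourselves, then embed its contents in the prompt.
with open("/Users/igor/Documents/text.txt", encoding="utf-8") as f:
    text = f.read()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",
        "prompt": f"Please translate this text into English:\n\n{text}",
        "stream": False,
    },
)
resp.raise_for_status()
print(resp.json()["response"])
```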

url: https://api.github.com/repos/ollama/ollama/issues/753
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/753/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/753/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/753/events
html_url: https://github.com/ollama/ollama/pull/753
id: 1936541042
node_id: PR_kwDOJ0Z1Ps5ccs1t
number: 753
title: rename the examples to be more descriptive
user: { "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2023-10-11T00:40:54
updated_at: 2023-10-12T18:24:13
closed_at: 2023-10-12T18:24:12
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/753", "html_url": "https://github.com/ollama/ollama/pull/753", "diff_url": "https://github.com/ollama/ollama/pull/753.diff", "patch_url": "https://github.com/ollama/ollama/pull/753.patch", "merged_at": "2023-10-12T18:24:12" }
body: also add a few readmes.
closed_by: { "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/753/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/5469
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5469/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5469/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5469/events
html_url: https://github.com/ollama/ollama/pull/5469
id: 2389580481
node_id: PR_kwDOJ0Z1Ps50YM8_
number: 5469
title: Prevent loading models larger than total memory
user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 9
created_at: 2024-07-03T22:16:39
updated_at: 2024-08-06T20:42:03
closed_at: 2024-07-05T15:22:20
author_association: COLLABORATOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/5469", "html_url": "https://github.com/ollama/ollama/pull/5469", "diff_url": "https://github.com/ollama/ollama/pull/5469.diff", "patch_url": "https://github.com/ollama/ollama/pull/5469.patch", "merged_at": "2024-07-05T15:22:20" }
body: Users may not realize the shiny new model they're trying to load fits on their disk but can't be loaded into system+GPU memory. Today we crash, but with this fix, we'll give them a better error message before even trying to load it. Fixes #3837 #4955 Verified by using `stress-ng` to saturate system memory, and lo...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5469/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
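
A rough sketch of the kind of pre-flight check this PR describes, assuming the third-party `psutil` package for memory introspection. The real implementation lives in Ollama's Go scheduler; this only illustrates the idea of failing fast with a clear error instead of crashing mid-load.

```python
import psutil

def can_fit(model_bytes: int, vram_free_bytes: int = 0) -> bool:
    """Refuse up front when a model cannot fit in system + GPU memory."""
    available = psutil.virtual_memory().total + vram_free_bytes
    return model_bytes <= available

# e.g. a ~40 GiB quantized model on a machine with no GPU:
if not can_fit(40 * 2**30):
    raise SystemExit("model requires more memory than this system has")
```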

url: https://api.github.com/repos/ollama/ollama/issues/6968
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6968/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6968/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6968/events
html_url: https://github.com/ollama/ollama/issues/6968
id: 2549149437
node_id: I_kwDOJ0Z1Ps6X8O79
number: 6968
title: Adjust templates for FIM models to acknowledge existence of suffix
user: { "login": "sestinj", "id": 33237525, "node_id": "MDQ6VXNlcjMzMjM3NTI1", "avatar_url": "https://avatars.githubusercontent.com/u/33237525?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sestinj", "html_url": "https://github.com/sestinj", "followers_url": "https://api.github.com/users/sestin...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-09-25T23:12:18
updated_at: 2024-09-25T23:12:18
closed_at: null
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? CodeGemma (for example, it's not the only one) supports both FIM and chat. Ollama uses the FIM template for codegemma:2b and the chat template for codegemma:7b. This feels like the right default decision, but in cases where a suffix is provided, it can be confidently assumed that a FIM format is...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6968/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
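
For context, fill-in-the-middle requests pass a `suffix` alongside the prompt. A sketch of such a request against `/api/generate`, assuming the `suffix` field documented for code-completion models; whether the selected template actually honors it is exactly what this issue is about.

```python
import requests

# Ask for the code between a prefix (prompt) and a suffix (FIM).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codegemma:7b",
        "prompt": "def fibonacci(n):\n    ",
        "suffix": "\n    return result\n",
        "stream": False,
    },
)
resp.raise_for_status()
print(resp.json()["response"])
```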

url: https://api.github.com/repos/ollama/ollama/issues/4289
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4289/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4289/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4289/events
html_url: https://github.com/ollama/ollama/pull/4289
id: 2287981952
node_id: PR_kwDOJ0Z1Ps5vAaP5
number: 4289
title: Doc container usage and workaround for nvidia errors
user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-05-09T15:52:08
updated_at: 2024-05-09T16:27:32
closed_at: 2024-05-09T16:27:30
author_association: COLLABORATOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/4289", "html_url": "https://github.com/ollama/ollama/pull/4289", "diff_url": "https://github.com/ollama/ollama/pull/4289.diff", "patch_url": "https://github.com/ollama/ollama/pull/4289.patch", "merged_at": "2024-05-09T16:27:30" }
body: Fixes #4242
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4289/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/1939
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1939/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1939/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1939/events
html_url: https://github.com/ollama/ollama/issues/1939
id: 2077812700
node_id: I_kwDOJ0Z1Ps572Ofc
number: 1939
title: Unable to load dynamic library error when using container
user: { "login": "otavio-silva", "id": 22914610, "node_id": "MDQ6VXNlcjIyOTE0NjEw", "avatar_url": "https://avatars.githubusercontent.com/u/22914610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/otavio-silva", "html_url": "https://github.com/otavio-silva", "followers_url": "https://api.github.c...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 26
created_at: 2024-01-12T00:17:28
updated_at: 2024-01-19T21:41:09
closed_at: 2024-01-19T21:41:09
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: # Description When trying to run a model using the container, it gives an error about loading a dynamic library. Ollama is able to list the available models but not run them. The container can see the GPU as `nvidia-smi` gives the expected output. # Current output ```cpp Error: Unable to load dynamic library:...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1939/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1939/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/3021
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3021/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3021/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3021/events
html_url: https://github.com/ollama/ollama/issues/3021
id: 2177134712
node_id: I_kwDOJ0Z1Ps6BxHB4
number: 3021
title: API endpoint for encoding and decoding tokens
user: { "login": "Hansson0728", "id": 9604420, "node_id": "MDQ6VXNlcjk2MDQ0MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/9604420?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hansson0728", "html_url": "https://github.com/Hansson0728", "followers_url": "https://api.github.com/us...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 3
created_at: 2024-03-09T08:26:05
updated_at: 2024-09-04T04:34:44
closed_at: 2024-09-04T04:34:44
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Please please someone who knows go... add the internal llamacpp encode endpoint to the ollama api, so we can use the llm tokenizer to measure how much context we are using accurately, so we can pick and choose in our memory instead of only trimming from the beginning of our messages, please please
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3021/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3021/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
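
The endpoint requested above did not exist in Ollama's published API at the time; the sketch below only illustrates the shape of the API the author is asking for. The `/api/tokenize` route and its request/response fields are hypothetical, not real Ollama endpoints.

```python
import requests

# HYPOTHETICAL endpoint: this shows the requested feature, not real API.
def count_tokens(model: str, text: str) -> int:
    resp = requests.post(
        "http://localhost:11434/api/tokenize",  # hypothetical route
        json={"model": model, "content": text},
    )
    resp.raise_for_status()
    # Assumed response shape: {"tokens": [int, ...]}
    return len(resp.json()["tokens"])
```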

url: https://api.github.com/repos/ollama/ollama/issues/3333
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3333/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3333/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3333/events
html_url: https://github.com/ollama/ollama/pull/3333
id: 2204828951
node_id: PR_kwDOJ0Z1Ps5qm8bd
number: 3333
title: doc: specify ADAPTER is optional
user: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-03-25T03:53:16
updated_at: 2024-03-25T16:43:19
closed_at: 2024-03-25T16:43:19
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/3333", "html_url": "https://github.com/ollama/ollama/pull/3333", "diff_url": "https://github.com/ollama/ollama/pull/3333.diff", "patch_url": "https://github.com/ollama/ollama/pull/3333.patch", "merged_at": "2024-03-25T16:43:19" }
body: null
closed_by: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3333/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/5248
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5248/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5248/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5248/events
html_url: https://github.com/ollama/ollama/pull/5248
id: 2369211125
node_id: PR_kwDOJ0Z1Ps5zT90G
number: 5248
title: cmd: defer stating model info until necessary
user: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-06-24T05:00:57
updated_at: 2024-06-25T03:14:04
closed_at: 2024-06-25T03:14:03
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/5248", "html_url": "https://github.com/ollama/ollama/pull/5248", "diff_url": "https://github.com/ollama/ollama/pull/5248.diff", "patch_url": "https://github.com/ollama/ollama/pull/5248.patch", "merged_at": "2024-06-25T03:14:03" }
body: This commit changes the 'ollama run' command to defer fetching model information until it really needs it. That is, when in interactive mode. This positively impacts the performance of the command: ; time ./before run llama3 'hi' Hi! It's nice to meet you. Is there something I can help you with, or would...
closed_by: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5248/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/4556
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4556/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4556/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4556/events
html_url: https://github.com/ollama/ollama/issues/4556
id: 2308080352
node_id: I_kwDOJ0Z1Ps6JkoLg
number: 4556
title: Plugins
user: { "login": "zorgoz", "id": 1569170, "node_id": "MDQ6VXNlcjE1NjkxNzA=", "avatar_url": "https://avatars.githubusercontent.com/u/1569170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zorgoz", "html_url": "https://github.com/zorgoz", "followers_url": "https://api.github.com/users/zorgoz/foll...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-05-21T11:33:56
updated_at: 2024-06-07T16:47:51
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Hello, are "tools" and "tool_choice" API supported, and if not, is there any roadmap for them?
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4556/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4556/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/4731
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4731/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4731/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4731/events
html_url: https://github.com/ollama/ollama/pull/4731
id: 2326581769
node_id: PR_kwDOJ0Z1Ps5xD2Y4
number: 4731
title: Update llama.cpp submodule to `5921b8f0`
user: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-05-30T21:40:26
updated_at: 2024-05-30T23:20:23
closed_at: 2024-05-30T23:20:22
author_association: MEMBER
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/4731", "html_url": "https://github.com/ollama/ollama/pull/4731", "diff_url": "https://github.com/ollama/ollama/pull/4731.diff", "patch_url": "https://github.com/ollama/ollama/pull/4731.patch", "merged_at": "2024-05-30T23:20:22" }
body: null
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4731/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/1785
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1785/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1785/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1785/events
html_url: https://github.com/ollama/ollama/pull/1785
id: 2066007927
node_id: PR_kwDOJ0Z1Ps5jPgql
number: 1785
title: Load dynamic cpu lib on windows
user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-01-04T16:51:13
updated_at: 2024-01-04T16:55:18
closed_at: 2024-01-04T16:55:02
author_association: COLLABORATOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/1785", "html_url": "https://github.com/ollama/ollama/pull/1785", "diff_url": "https://github.com/ollama/ollama/pull/1785.diff", "patch_url": "https://github.com/ollama/ollama/pull/1785.patch", "merged_at": "2024-01-04T16:55:02" }
body: On linux, we link the CPU library in to the Go app and fall back to it when no GPU match is found. On windows we do not link in the CPU library so that we can better control our dependencies for the CLI. This fixes the logic so we correctly fallback to the dynamic CPU library on windows.
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1785/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1785/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/5013
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5013/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5013/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5013/events
html_url: https://github.com/ollama/ollama/issues/5013
id: 2350097352
node_id: I_kwDOJ0Z1Ps6ME6PI
number: 5013
title: How to prevent the model from automatically releasing after 5 minutes when requesting an OpenAI package?
user: { "login": "GoEnthusiast", "id": 132556615, "node_id": "U_kgDOB-anRw", "avatar_url": "https://avatars.githubusercontent.com/u/132556615?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GoEnthusiast", "html_url": "https://github.com/GoEnthusiast", "followers_url": "https://api.github.com/use...
labels: [ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 3
created_at: 2024-06-13T04:06:14
updated_at: 2024-07-09T16:26:00
closed_at: 2024-07-09T16:25:59
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: from openai import OpenAI client = OpenAI( base_url='http://localhost:11434/v1/', # required but ignored api_key='ollama', ) chat_completion = client.chat.completions.create( messages=[ { 'role': 'user', 'content': 'Say this is a test', } ], ...
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5013/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
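
The five-minute unload in the question above is Ollama's default idle timeout. The native API exposes a `keep_alive` field for this, and the server also honors an `OLLAMA_KEEP_ALIVE` environment variable. A minimal sketch using the native `/api/chat` endpoint; the OpenAI-compatible endpoint used in the body above is a separate code path and may not accept this field.

```python
import requests

# keep_alive controls how long the model stays loaded after the request:
# a duration string like "30m", 0 to unload immediately, or -1 to keep
# the model resident indefinitely.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Say this is a test"}],
        "keep_alive": -1,
        "stream": False,
    },
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```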

url: https://api.github.com/repos/ollama/ollama/issues/350
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/350/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/350/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/350/events
html_url: https://github.com/ollama/ollama/pull/350
id: 1850684186
node_id: PR_kwDOJ0Z1Ps5X7j7T
number: 350
title: update llama.cpp
user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2023-08-14T23:09:45
updated_at: 2023-08-14T23:15:52
closed_at: 2023-08-14T23:15:52
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/350", "html_url": "https://github.com/ollama/ollama/pull/350", "diff_url": "https://github.com/ollama/ollama/pull/350.diff", "patch_url": "https://github.com/ollama/ollama/pull/350.patch", "merged_at": "2023-08-14T23:15:52" }
body: null
closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/350/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/6760
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6760/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6760/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6760/events
html_url: https://github.com/ollama/ollama/pull/6760
id: 2520525018
node_id: PR_kwDOJ0Z1Ps57NEi5
number: 6760
title: IBM granite/granitemoe architecture support
user: { "login": "gabe-l-hart", "id": 1254484, "node_id": "MDQ6VXNlcjEyNTQ0ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1254484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gabe-l-hart", "html_url": "https://github.com/gabe-l-hart", "followers_url": "https://api.github.com/us...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 11
created_at: 2024-09-11T18:59:32
updated_at: 2024-10-21T04:39:35
closed_at: 2024-10-17T18:59:52
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/6760", "html_url": "https://github.com/ollama/ollama/pull/6760", "diff_url": "https://github.com/ollama/ollama/pull/6760.diff", "patch_url": "https://github.com/ollama/ollama/pull/6760.patch", "merged_at": "2024-10-17T18:59:52" }
body: ## Special Note Since this PR bumps `llama.cpp` past the tip of `master` (`6026da52` as of writing this), it includes the recent changes to overhaul `sampling` and logging. I updated `server.cpp` so that it compiles and can run the models successfully. I also updated all of the patches to apply to the updated `llama...
closed_by: { "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6760/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/7135
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7135/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7135/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7135/events
html_url: https://github.com/ollama/ollama/issues/7135
id: 2573472075
node_id: I_kwDOJ0Z1Ps6ZZBFL
number: 7135
title: use the macOS electron app for Windows and Linux
user: { "login": "hichemfantar", "id": 34947993, "node_id": "MDQ6VXNlcjM0OTQ3OTkz", "avatar_url": "https://avatars.githubusercontent.com/u/34947993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hichemfantar", "html_url": "https://github.com/hichemfantar", "followers_url": "https://api.github.c...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-10-08T14:59:07
updated_at: 2024-10-08T15:03:18
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: I don't understand why the electron app is only for macOS when electron is perfectly capable of running on Windows and Linux. Features like #7097 can easily be adopted for all platforms if electron is used.
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7135/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/3944
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3944/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3944/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3944/events
html_url: https://github.com/ollama/ollama/issues/3944
id: 2265803830
node_id: I_kwDOJ0Z1Ps6HDWw2
number: 3944
title: /api/embeddings hangs when prompt is only whitespace
user: { "login": "alexmavr", "id": 680441, "node_id": "MDQ6VXNlcjY4MDQ0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/680441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexmavr", "html_url": "https://github.com/alexmavr", "followers_url": "https://api.github.com/users/alexmav...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
assignees: [ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
milestone: null
comments: 5
created_at: 2024-04-26T13:34:09
updated_at: 2024-06-29T22:53:16
closed_at: 2024-06-29T22:53:16
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? The following invocation hangs indefinitely: ``` $ curl http://localhost:11434/api/embeddings -d '{ "model": "all-minilm", "prompt": " " }' ``` Same behavior for model "mxbai-embed-large" Relevant debug logs: ``` {"function":"process_single_task","level":"INFO","line":151...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3944/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
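
Until a hang like this is fixed server-side, a client-side guard is the obvious workaround. A sketch that rejects whitespace-only prompts before calling `/api/embeddings` and adds a request timeout as a second line of defense; the guard logic itself is illustrative, not Ollama behavior.

```python
import requests

def embed(model: str, prompt: str, timeout: float = 30.0) -> list[float]:
    if not prompt.strip():
        # The server used to hang on whitespace-only prompts (#3944),
        # so fail fast instead of sending the request at all.
        raise ValueError("prompt is empty or whitespace-only")
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": model, "prompt": prompt},
        timeout=timeout,  # bound the damage if the server still hangs
    )
    resp.raise_for_status()
    return resp.json()["embedding"]
```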

url: https://api.github.com/repos/ollama/ollama/issues/8686
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8686/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8686/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8686/events
html_url: https://github.com/ollama/ollama/issues/8686
id: 2820001072
node_id: I_kwDOJ0Z1Ps6oFc0w
number: 8686
title: Support Deepseek Janus Pro Series (7B & 1B)
user: { "login": "zytoh0", "id": 90326544, "node_id": "MDQ6VXNlcjkwMzI2NTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/90326544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zytoh0", "html_url": "https://github.com/zytoh0", "followers_url": "https://api.github.com/users/zytoh0/fo...
labels: [ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2025-01-30T06:17:54
updated_at: 2025-01-30T08:28:58
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Hello, good day to you all. I would like to request that ollama add support for Deepseek Janus Pro Series (currently only 7B & 1B): 1. https://huggingface.co/deepseek-ai/Janus-Pro-1B 2. https://huggingface.co/deepseek-ai/Janus-Pro-7B
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8686/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8686/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/6086
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6086/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6086/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6086/events
html_url: https://github.com/ollama/ollama/issues/6086
id: 2438997377
node_id: I_kwDOJ0Z1Ps6RYCWB
number: 6086
title: yi:9b Abnormal content output `<|im_end()>`
user: { "login": "wszgrcy", "id": 9607121, "node_id": "MDQ6VXNlcjk2MDcxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/9607121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wszgrcy", "html_url": "https://github.com/wszgrcy", "followers_url": "https://api.github.com/users/wszgrcy/...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-07-31T02:49:58
updated_at: 2024-07-31T02:49:58
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? I'm not quite sure if this is due to a model issue (normal reply) or if some definitions are still problematic. Log: ```log 2024-07-31 10:42:39.338 [info] time=2024-07-31T10:42:39.338+08:00 level=DEBUG source=routes.go:1336 msg="chat request" images=0 prompt="<|im_start|>system\n不需要考虑输入的内容的含义只需要将...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6086/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6086/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/6539
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6539/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6539/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6539/events
html_url: https://github.com/ollama/ollama/pull/6539
id: 2490646168
node_id: PR_kwDOJ0Z1Ps55o4AN
number: 6539
title: fix: validate modelpath
user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-08-28T00:58:32
updated_at: 2024-08-28T21:38:28
closed_at: 2024-08-28T21:38:27
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/6539", "html_url": "https://github.com/ollama/ollama/pull/6539", "diff_url": "https://github.com/ollama/ollama/pull/6539.diff", "patch_url": "https://github.com/ollama/ollama/pull/6539.patch", "merged_at": "2024-08-28T21:38:27" }
body: ensure model path resolves to a local path
closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6539/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6539/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/535
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/535/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/535/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/535/events
html_url: https://github.com/ollama/ollama/pull/535
id: 1899070851
node_id: PR_kwDOJ0Z1Ps5aeaCZ
number: 535
title: only add a layer if there is actual data
user: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2023-09-15T20:59:14
updated_at: 2023-09-18T20:47:46
closed_at: 2023-09-18T20:47:46
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/535", "html_url": "https://github.com/ollama/ollama/pull/535", "diff_url": "https://github.com/ollama/ollama/pull/535.diff", "patch_url": "https://github.com/ollama/ollama/pull/535.patch", "merged_at": "2023-09-18T20:47:46" }
body: This is a simple change which checks the layer size before adding it to the overall model. Registry balks if you try to send it an empty layer on an `ollama push`.
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/535/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/3185
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3185/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3185/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3185/events
html_url: https://github.com/ollama/ollama/issues/3185
id: 2190192859
node_id: I_kwDOJ0Z1Ps6Ci7Db
number: 3185
title: ollama doesn't distribute notice licenses in its release artifacts
user: { "login": "jart", "id": 49262, "node_id": "MDQ6VXNlcjQ5MjYy", "avatar_url": "https://avatars.githubusercontent.com/u/49262?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jart", "html_url": "https://github.com/jart", "followers_url": "https://api.github.com/users/jart/followers", "follo...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
assignees: [ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
milestone: null
comments: 0
created_at: 2024-03-16T19:13:26
updated_at: 2024-03-21T08:42:52
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? ollama uses projects like llama.cpp as a statically linked dependency. The terms of the MIT license require that it distribute the copyright notice in both source and binary form. Yet if I `grep` for "Georgi Gerganov" on my Linux and Windows installation folders for ollama, the copyright notices...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3185/reactions", "total_count": 27, "+1": 27, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3185/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/6727
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6727/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6727/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6727/events
html_url: https://github.com/ollama/ollama/issues/6727
id: 2516529897
node_id: I_kwDOJ0Z1Ps6V_zLp
number: 6727
title: Does ollama check for free disk space BEFORE pulling a new model?
user: { "login": "bulrush15", "id": 7031486, "node_id": "MDQ6VXNlcjcwMzE0ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/7031486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bulrush15", "html_url": "https://github.com/bulrush15", "followers_url": "https://api.github.com/users/bu...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-09-10T13:45:19
updated_at: 2024-09-12T00:39:42
closed_at: 2024-09-12T00:39:42
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: I actually have 3 drives in my Windows 11 system. Does ollama check for free disk space on the drive it's installed on BEFORE pulling a new model? Before it pulls a model it should check that the user has at least 2-3GB of free disk space after pulling the model. If the user doesn't have that, then ollama should show ...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6727/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
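
A sketch of the check this issue requests, using only the standard library's `shutil.disk_usage`; the model-size argument and the headroom figure echo the numbers in the issue and are assumptions, not Ollama behavior.

```python
import shutil

def has_room_for(model_bytes: int, path: str = ".",
                 headroom: int = 3 * 2**30) -> bool:
    """True when the drive holding `path` can absorb the model plus headroom."""
    free = shutil.disk_usage(path).free
    return free >= model_bytes + headroom

# e.g. before pulling a ~5 GiB model into a hypothetical model directory:
if not has_room_for(5 * 2**30, path="C:/Users/me/.ollama/models"):
    raise SystemExit("not enough free disk space to pull this model")
```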

url: https://api.github.com/repos/ollama/ollama/issues/7833
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7833/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7833/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7833/events
html_url: https://github.com/ollama/ollama/pull/7833
id: 2692437708
node_id: PR_kwDOJ0Z1Ps6DGp7e
number: 7833
title: server: fix proxy not being set from environment
user: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-11-25T22:29:45
updated_at: 2024-11-26T00:10:26
closed_at: 2024-11-26T00:10:26
author_association: MEMBER
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7833", "html_url": "https://github.com/ollama/ollama/pull/7833", "diff_url": "https://github.com/ollama/ollama/pull/7833.diff", "patch_url": "https://github.com/ollama/ollama/pull/7833.patch", "merged_at": null }
body: Fixes https://github.com/ollama/ollama/issues/7829 Fixes https://github.com/ollama/ollama/issues/7788
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7833/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/2173
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/2173/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/2173/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/2173/events
html_url: https://github.com/ollama/ollama/issues/2173
id: 2098751675
node_id: I_kwDOJ0Z1Ps59GGi7
number: 2173
title: Issues with OllamaEmbedding
user: { "login": "RonHein", "id": 27790393, "node_id": "MDQ6VXNlcjI3NzkwMzkz", "avatar_url": "https://avatars.githubusercontent.com/u/27790393?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RonHein", "html_url": "https://github.com/RonHein", "followers_url": "https://api.github.com/users/RonHei...
labels: [ { "id": 5895046125, "node_id": "LA_kwDOJ0Z1Ps8AAAABX19D7Q", "url": "https://api.github.com/repos/ollama/ollama/labels/integration", "name": "integration", "color": "92E43A", "default": false, "description": "" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 3
created_at: 2024-01-24T17:27:51
updated_at: 2024-05-10T23:32:34
closed_at: 2024-05-10T23:32:34
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Hi, I am having trouble using OllamaEmbedding. I am unable to retrieve the correct vectors and the similarity score is really high. I was able to get the correct vectors with OpenAIEmbedding but I am hoping to get OllamaEmbedding working. Is there something that I am missing? Below is a simple loader with chroma...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/2173/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/2173/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
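
One common cause of "all my similarity scores are high" reports is comparing vectors without a proper similarity metric. A self-contained cosine-similarity sketch over the `/api/embeddings` endpoint; the model name and example texts are placeholders, and this bypasses the OllamaEmbedding integration mentioned above to isolate the raw vectors.

```python
import math
import requests

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": model, "prompt": text},
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Unrelated sentences should score clearly lower than related ones.
print(cosine(embed("a cat sat on the mat"), embed("stock prices fell today")))
```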

url: https://api.github.com/repos/ollama/ollama/issues/3151
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3151/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3151/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3151/events
html_url: https://github.com/ollama/ollama/issues/3151
id: 2187206177
node_id: I_kwDOJ0Z1Ps6CXh4h
number: 3151
title: Doubt about openai compatibility with temperature parameter
user: { "login": "ejgutierrez74", "id": 11474846, "node_id": "MDQ6VXNlcjExNDc0ODQ2", "avatar_url": "https://avatars.githubusercontent.com/u/11474846?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ejgutierrez74", "html_url": "https://github.com/ejgutierrez74", "followers_url": "https://api.githu...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 3
created_at: 2024-03-14T20:22:49
updated_at: 2024-03-15T18:33:55
closed_at: 2024-03-15T01:41:57
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: I have one doubt about the use of temperature. As I have understood, temperature in llama2 is from 0.0 to 1.0. But if you use chat.completion from openai ( https://github.com/ollama/ollama/blob/main/docs/openai.md ), the documentation says temperature values range from 0.0 to 2.0, so there seems to be a little mismatch....
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3151/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
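
The mismatch the body describes is between the OpenAI API's documented temperature range of [0, 2] and the roughly [0, 1] range the author expects for llama2. A sketch of the naive linear rescaling a client might apply; whether Ollama's compatibility layer does this internally is not asserted here.

```python
def openai_to_model_temperature(t_openai: float) -> float:
    """Map an OpenAI-style temperature in [0, 2] onto [0, 1] linearly."""
    if not 0.0 <= t_openai <= 2.0:
        raise ValueError("OpenAI temperature must be in [0, 2]")
    return t_openai / 2.0

# The OpenAI default of 1.0 lands at the midpoint of the model range.
assert openai_to_model_temperature(1.0) == 0.5
```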
https://api.github.com/repos/ollama/ollama/issues/5682
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5682/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5682/comments
https://api.github.com/repos/ollama/ollama/issues/5682/events
https://github.com/ollama/ollama/issues/5682
2,407,159,821
I_kwDOJ0Z1Ps6PelgN
5,682
Add model metadata which indicated model purpose to /api/tags endpoint.
{ "login": "CannonFodderr", "id": 36086310, "node_id": "MDQ6VXNlcjM2MDg2MzEw", "avatar_url": "https://avatars.githubusercontent.com/u/36086310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CannonFodderr", "html_url": "https://github.com/CannonFodderr", "followers_url": "https://api.githu...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7706482389, "node_id": ...
open
false
null
[]
null
1
2024-07-13T20:49:33
2024-11-06T01:07:04
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be nice if part of the **/api/tags** response detals would include metadata such as: `type: embedding | general | code | math | vision | audio` etc... `languages: [en-US, ...]` This could help with model sorting and selection. If metadata is not available maybe add an option to mark it with metadata by...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5682/timeline
null
null
false
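A minimal sketch of what the request above would enable, assuming the proposed `type` field were added to the /api/tags response; the endpoint itself is real, but the `type` key shown here is hypothetical (it is what the issue asks for), so the filter finds nothing on a current server:

```python
# Sketch of the proposed feature: filter /api/tags results by a "type" field.
import requests

models = requests.get("http://localhost:11434/api/tags").json()["models"]

embedding_models = [
    m["name"] for m in models
    if m.get("type") == "embedding"  # hypothetical field from the proposal
]
print(embedding_models)
```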
https://api.github.com/repos/ollama/ollama/issues/5049
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5049/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5049/comments
https://api.github.com/repos/ollama/ollama/issues/5049/events
https://github.com/ollama/ollama/pull/5049
2,354,096,366
PR_kwDOJ0Z1Ps5yhXVZ
5,049
Cuda v12
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
9
2024-06-14T20:56:22
2024-08-20T18:06:58
2024-08-19T18:14:24
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5049", "html_url": "https://github.com/ollama/ollama/pull/5049", "diff_url": "https://github.com/ollama/ollama/pull/5049.diff", "patch_url": "https://github.com/ollama/ollama/pull/5049.patch", "merged_at": "2024-08-19T18:14:24" }
This builds upon the new linux packaging model in #5631 to support building 2 different CUDA runners: v11 to support GPUs going back to CC 5.0, and v12 for CC 6.0 and newer. This allows us to start enabling new features such as `GGML_CUDA_USE_GRAPHS`, which require CUDA v12, without dropping support for older GPUs...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5049/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5049/timeline
null
null
true
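A rough sketch of the selection rule the PR above describes (v11 for CC 5.0 and up, v12 for CC 6.0 and newer), using nvidia-smi's compute_cap query; the runner names below are illustrative, not the actual payload names Ollama ships:

```python
# Pick a CUDA runner by compute capability: v12 for CC >= 6.0, v11 below.
# Requires a recent NVIDIA driver for the compute_cap query field.
import subprocess

out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=compute_cap", "--format=csv,noheader"],
    text=True,
)
cc = max(float(line) for line in out.splitlines() if line.strip())
runner = "cuda_v12" if cc >= 6.0 else "cuda_v11"
print(f"highest compute capability {cc}: select the {runner} runner")
```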
https://api.github.com/repos/ollama/ollama/issues/5971
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5971/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5971/comments
https://api.github.com/repos/ollama/ollama/issues/5971/events
https://github.com/ollama/ollama/pull/5971
2,431,342,426
PR_kwDOJ0Z1Ps52iH3p
5,971
Add template for llama3.1:70B model
{ "login": "eust-w", "id": 39115651, "node_id": "MDQ6VXNlcjM5MTE1NjUx", "avatar_url": "https://avatars.githubusercontent.com/u/39115651?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eust-w", "html_url": "https://github.com/eust-w", "followers_url": "https://api.github.com/users/eust-w/fo...
[]
closed
false
null
[]
null
1
2024-07-26T03:36:15
2024-08-14T16:39:38
2024-08-14T16:39:38
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5971", "html_url": "https://github.com/ollama/ollama/pull/5971", "diff_url": "https://github.com/ollama/ollama/pull/5971.diff", "patch_url": "https://github.com/ollama/ollama/pull/5971.patch", "merged_at": null }
- Added a new template for llama3.1:70B model in llama3.1-instruct.gotmpl. - Updated index.json to include the new template configuration. - Ensured compatibility with the existing llama3-instruct template structure. This addition provides support for the llama3.1:70B model, allowing for more advanced model instru...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5971/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5923
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5923/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5923/comments
https://api.github.com/repos/ollama/ollama/issues/5923/events
https://github.com/ollama/ollama/issues/5923
2,428,324,395
I_kwDOJ0Z1Ps6QvUor
5,923
Slow Model Loading Speed on macOS System
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
3
2024-07-24T19:30:31
2024-07-28T10:12:41
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am experiencing slow model loading speeds when using Ollama on my macOS system. Here are the specifications of my setup: macOS Version: 14.5 Processor: M3 Max Memory: 128GB Storage: 2TB (with performance on par with the 8TB version) Ollama version: 0.2.8 Despite having sufficient har...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5923/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/5923/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6140
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6140/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6140/comments
https://api.github.com/repos/ollama/ollama/issues/6140/events
https://github.com/ollama/ollama/issues/6140
2,444,428,226
I_kwDOJ0Z1Ps6RswPC
6,140
unable to pull model
{ "login": "jdzhang1221", "id": 29417118, "node_id": "MDQ6VXNlcjI5NDE3MTE4", "avatar_url": "https://avatars.githubusercontent.com/u/29417118?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jdzhang1221", "html_url": "https://github.com/jdzhang1221", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
7
2024-08-02T08:47:52
2024-08-04T06:20:54
2024-08-02T14:59:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ollama pull mxbai-embed-large pulling manifest Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/mxbai-embed-large/manifests/latest": read tcp 192.168.1.94:50046->104.21.75.227:443: read: connection reset by peer ### OS macOS ### GPU Intel ### CPU Intel ### Ollama ...
{ "login": "jdzhang1221", "id": 29417118, "node_id": "MDQ6VXNlcjI5NDE3MTE4", "avatar_url": "https://avatars.githubusercontent.com/u/29417118?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jdzhang1221", "html_url": "https://github.com/jdzhang1221", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6140/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5412
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5412/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5412/comments
https://api.github.com/repos/ollama/ollama/issues/5412/events
https://github.com/ollama/ollama/pull/5412
2,384,417,979
PR_kwDOJ0Z1Ps50GhSA
5,412
Update README.md: Add Ollama-GUI to web & desktop
{ "login": "chyok", "id": 32629225, "node_id": "MDQ6VXNlcjMyNjI5MjI1", "avatar_url": "https://avatars.githubusercontent.com/u/32629225?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chyok", "html_url": "https://github.com/chyok", "followers_url": "https://api.github.com/users/chyok/follow...
[]
closed
false
null
[]
null
0
2024-07-01T17:51:05
2024-11-21T08:19:24
2024-11-21T08:19:24
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5412", "html_url": "https://github.com/ollama/ollama/pull/5412", "diff_url": "https://github.com/ollama/ollama/pull/5412.diff", "patch_url": "https://github.com/ollama/ollama/pull/5412.patch", "merged_at": "2024-11-21T08:19:24" }
Hi, ollama-gui is a very simple client, implemented using the built-in Python tkinter library, with no additional dependencies. It provides the simplest possible visual Ollama interface. Repository: https://github.com/chyok/ollama-gui Screenshots (current version): ![ollama-gui-1 2 0](https://github.com/user-a...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5412/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4813
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4813/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4813/comments
https://api.github.com/repos/ollama/ollama/issues/4813/events
https://github.com/ollama/ollama/issues/4813
2,333,483,407
I_kwDOJ0Z1Ps6LFiGP
4,813
Support intel cpu
{ "login": "kannon92", "id": 3780425, "node_id": "MDQ6VXNlcjM3ODA0MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/3780425?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kannon92", "html_url": "https://github.com/kannon92", "followers_url": "https://api.github.com/users/kanno...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
2
2024-06-04T13:03:20
2024-06-04T20:06:14
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://github.com/ollama/ollama/pull/3897 It would be good to document how to support Intel CPUs (using both Intel compilers and MKL). My PR demonstrates how to compile with Intel, but I was told that we should move this to a feature request. I'm happy to help if I get some guidance on how this should go.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4813/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4813/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8503
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8503/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8503/comments
https://api.github.com/repos/ollama/ollama/issues/8503/events
https://github.com/ollama/ollama/issues/8503
2,799,547,686
I_kwDOJ0Z1Ps6m3bUm
8,503
Cannot overcome Ollama error : ollama._types.ResponseError: POST predict: Post "http://127.0.0.1:35843/completion": EOF / panic: failed to decode batch: could not find a kv cache slot
{ "login": "user-33948", "id": 193742694, "node_id": "U_kgDOC4xHZg", "avatar_url": "https://avatars.githubusercontent.com/u/193742694?v=4", "gravatar_id": "", "url": "https://api.github.com/users/user-33948", "html_url": "https://github.com/user-33948", "followers_url": "https://api.github.com/users/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
7
2025-01-20T15:18:43
2025-01-27T09:57:17
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I believe there is a bug in ollama's processing which surfaces the following two errors: (result from running the python file:) Ollama error: ollama._types.ResponseError: POST predict: Post "http://127.0.0.1:35843/completion": EOF (error in ollama logs:) panic: failed to decode batch: could not fin...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8503/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4147
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4147/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4147/comments
https://api.github.com/repos/ollama/ollama/issues/4147/events
https://github.com/ollama/ollama/pull/4147
2,278,707,111
PR_kwDOJ0Z1Ps5uhx48
4,147
Adding '/website' to serve up static files under a directory defined with the env OLLAMA_WEBSITE
{ "login": "1feralcat", "id": 51179976, "node_id": "MDQ6VXNlcjUxMTc5OTc2", "avatar_url": "https://avatars.githubusercontent.com/u/51179976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/1feralcat", "html_url": "https://github.com/1feralcat", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
0
2024-05-04T03:38:07
2024-05-04T04:27:42
2024-05-04T04:27:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4147", "html_url": "https://github.com/ollama/ollama/pull/4147", "diff_url": "https://github.com/ollama/ollama/pull/4147.diff", "patch_url": "https://github.com/ollama/ollama/pull/4147.patch", "merged_at": null }
null
{ "login": "1feralcat", "id": 51179976, "node_id": "MDQ6VXNlcjUxMTc5OTc2", "avatar_url": "https://avatars.githubusercontent.com/u/51179976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/1feralcat", "html_url": "https://github.com/1feralcat", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4147/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7035
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7035/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7035/comments
https://api.github.com/repos/ollama/ollama/issues/7035/events
https://github.com/ollama/ollama/issues/7035
2,554,953,798
I_kwDOJ0Z1Ps6YSYBG
7,035
Support AMD GPUs via WSL
{ "login": "vignessh", "id": 1451706, "node_id": "MDQ6VXNlcjE0NTE3MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/1451706?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vignessh", "html_url": "https://github.com/vignessh", "followers_url": "https://api.github.com/users/vigne...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5860134234, "node_id": ...
closed
false
null
[]
null
1
2024-09-29T14:07:29
2024-09-30T16:57:47
2024-09-30T16:57:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, I'm running a Windows 11 workstation based on an AMD RX 7900XTX GPU. I installed the latest Ollama for Windows and with that I can see the GPU getting used for any queries. I also tried the Linux install for WSL following [this](https://community.amd.com/t5/ai/running-llms-locally-on-amd-gpus-with-ollama/ba-p...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7035/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7035/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7469
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7469/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7469/comments
https://api.github.com/repos/ollama/ollama/issues/7469/events
https://github.com/ollama/ollama/pull/7469
2,630,255,833
PR_kwDOJ0Z1Ps6AsIcO
7,469
Fix unsafe.Slice error with mllama
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
2
2024-11-02T06:22:05
2024-11-02T23:04:49
2024-11-02T20:37:56
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7469", "html_url": "https://github.com/ollama/ollama/pull/7469", "diff_url": "https://github.com/ollama/ollama/pull/7469.diff", "patch_url": "https://github.com/ollama/ollama/pull/7469.patch", "merged_at": "2024-11-02T20:37:55" }
Fix the error and also improve error handling for the llama.cpp CGo layer.
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7469/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6142
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6142/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6142/comments
https://api.github.com/repos/ollama/ollama/issues/6142/events
https://github.com/ollama/ollama/issues/6142
2,444,984,504
I_kwDOJ0Z1Ps6Ru4C4
6,142
BitDefender false positive when downloading Ollama Windows installer
{ "login": "E-Nyamsuren", "id": 14015501, "node_id": "MDQ6VXNlcjE0MDE1NTAx", "avatar_url": "https://avatars.githubusercontent.com/u/14015501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/E-Nyamsuren", "html_url": "https://github.com/E-Nyamsuren", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-08-02T13:37:55
2024-08-02T20:44:12
2024-08-02T20:44:11
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? BitDefender detects a false positive(?) in the downloaded Windows installer of Ollama. I also have an Ollama installer for Windows downloaded on 27 June 2024. That installer does not have this issue. **Where**: The current downloadable installer of Ollama on the Ollama website. **Threat name...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6142/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6142/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6168
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6168/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6168/comments
https://api.github.com/repos/ollama/ollama/issues/6168/events
https://github.com/ollama/ollama/issues/6168
2,447,607,963
I_kwDOJ0Z1Ps6R44ib
6,168
Installation via scoop fails
{ "login": "kawadumax", "id": 11693767, "node_id": "MDQ6VXNlcjExNjkzNzY3", "avatar_url": "https://avatars.githubusercontent.com/u/11693767?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kawadumax", "html_url": "https://github.com/kawadumax", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXU...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-08-05T05:06:48
2024-08-09T20:38:23
2024-08-09T20:37:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Related: https://github.com/ScoopInstaller/Main/issues/6074 Error quotes: ``` Scoop was updated successfully! Installing 'ollama' (0.3.3) [64bit] from 'main' bucket OllamaSetup.exe (298.5 MB) [==================================================================================] 100% Chec...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6168/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3922
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3922/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3922/comments
https://api.github.com/repos/ollama/ollama/issues/3922/events
https://github.com/ollama/ollama/issues/3922
2,264,551,533
I_kwDOJ0Z1Ps6G-lBt
3,922
JSON list of available models
{ "login": "ricardobalk", "id": 14904229, "node_id": "MDQ6VXNlcjE0OTA0MjI5", "avatar_url": "https://avatars.githubusercontent.com/u/14904229?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ricardobalk", "html_url": "https://github.com/ricardobalk", "followers_url": "https://api.github.com/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6573197867, "node_id": ...
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
5
2024-04-25T21:47:11
2025-01-24T08:46:48
2024-05-09T22:12:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I saw that there's a list of available models at https://ollama.com/library. Is there a JSON-formatted file for this list? I would like to integrate it into a Python application I'm building. **Edit: I forgot to mention that with _file_, I actually mean a _JSON-formatted response on a public endpoint_, so that my ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3922/timeline
null
completed
false
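What the issue above asks for, sketched as a client would use it: a machine-readable model list. No such public JSON endpoint is confirmed by this issue, so the URL below is purely hypothetical and only illustrates the desired integration:

```python
# Hypothetical consumer of the requested feature: fetch a JSON model list.
import requests

HYPOTHETICAL_URL = "https://ollama.com/api/library"  # hypothetical endpoint

resp = requests.get(HYPOTHETICAL_URL, timeout=10)
resp.raise_for_status()
for model in resp.json():  # assumed shape: a JSON array of model entries
    print(model.get("name"))
```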
https://api.github.com/repos/ollama/ollama/issues/4449
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4449/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4449/comments
https://api.github.com/repos/ollama/ollama/issues/4449/events
https://github.com/ollama/ollama/issues/4449
2,297,473,248
I_kwDOJ0Z1Ps6I8Kjg
4,449
openai.error.InvalidRequestError: model 'deepseek-coder:6.7b' not found, try pulling it first
{ "login": "userandpass", "id": 26294920, "node_id": "MDQ6VXNlcjI2Mjk0OTIw", "avatar_url": "https://avatars.githubusercontent.com/u/26294920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/userandpass", "html_url": "https://github.com/userandpass", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
2024-05-15T10:20:31
2024-07-31T17:58:26
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? docker run -d --gpus="device=0" -v ollama:/root/.ollama -p 8010:11434 --name ollama ollama/ollama docker exec -it ollama ollama run deepseek-coder:6.7b I got the error in the title when I called it on port 8010 on another computer ### OS Linux ### GPU Nvidia ### CPU _No response_ ### ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4449/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6110
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6110/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6110/comments
https://api.github.com/repos/ollama/ollama/issues/6110/events
https://github.com/ollama/ollama/pull/6110
2,441,153,721
PR_kwDOJ0Z1Ps53DPgF
6,110
llama: Get embeddings working
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-08-01T00:04:05
2024-08-01T14:59:38
2024-08-01T14:59:35
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6110", "html_url": "https://github.com/ollama/ollama/pull/6110", "diff_url": "https://github.com/ollama/ollama/pull/6110.diff", "patch_url": "https://github.com/ollama/ollama/pull/6110.patch", "merged_at": "2024-08-01T14:59:35" }
Truncation doesn't pass, but the other embeddings tests pass
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6110/timeline
null
null
true
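For context on what "embeddings working" means from a client's perspective, a minimal sketch against the /api/embeddings endpoint; the model name is an example, and the truncation case the PR notes as still failing is deliberately not exercised here:

```python
# Client-side view of the embeddings path: POST a prompt, get a vector back.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "all-minilm", "prompt": "The quick brown fox"},
)
vec = resp.json()["embedding"]
print(len(vec))  # dimensionality of the returned embedding vector
```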
https://api.github.com/repos/ollama/ollama/issues/6677
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6677/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6677/comments
https://api.github.com/repos/ollama/ollama/issues/6677/events
https://github.com/ollama/ollama/issues/6677
2,510,405,967
I_kwDOJ0Z1Ps6VocFP
6,677
VG
{ "login": "vioricavg", "id": 163665189, "node_id": "U_kgDOCcFVJQ", "avatar_url": "https://avatars.githubusercontent.com/u/163665189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vioricavg", "html_url": "https://github.com/vioricavg", "followers_url": "https://api.github.com/users/vioric...
[]
closed
false
null
[]
null
2
2024-09-06T13:11:03
2024-09-09T18:37:38
2024-09-06T21:16:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct", trust_remote_code=True)
pipe(messages)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6677/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3809
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3809/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3809/comments
https://api.github.com/repos/ollama/ollama/issues/3809/events
https://github.com/ollama/ollama/issues/3809
2,255,511,041
I_kwDOJ0Z1Ps6GcF4B
3,809
AMD gfx90a unrecognized (seen as gfx9010)
{ "login": "simark", "id": 1758287, "node_id": "MDQ6VXNlcjE3NTgyODc=", "avatar_url": "https://avatars.githubusercontent.com/u/1758287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simark", "html_url": "https://github.com/simark", "followers_url": "https://api.github.com/users/simark/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
0
2024-04-22T03:42:52
2024-04-24T18:07:50
2024-04-24T18:07:50
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have a system with 3 AMD cards, a gfx900, a gfx906 and a gfx90a. When I launch `ollama serve`, I see: time=2024-04-21T23:32:10.765-04:00 level=INFO source=amd_linux.go:121 msg="amdgpu [3] gfx900 is supported" time=2024-04-21T23:32:10.765-04:00 level=INFO source=amd_linux.go:121 ms...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3809/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2508
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2508/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2508/comments
https://api.github.com/repos/ollama/ollama/issues/2508/events
https://github.com/ollama/ollama/issues/2508
2,135,638,897
I_kwDOJ0Z1Ps5_S0Nx
2,508
OLLAMA_KEEP_ALIVE ENV feature
{ "login": "uxfion", "id": 44778029, "node_id": "MDQ6VXNlcjQ0Nzc4MDI5", "avatar_url": "https://avatars.githubusercontent.com/u/44778029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/uxfion", "html_url": "https://github.com/uxfion", "followers_url": "https://api.github.com/users/uxfion/fo...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-02-15T04:53:03
2024-03-13T20:29:41
2024-03-13T20:29:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Does anyone know how to set `keep_alive` in the OpenAI API? It seems that this feature is not supported there. It would be better if we could set `OLLAMA_KEEP_ALIVE` in the environment variables, since it is difficult to pass customized parameters to the `/v1/chat/completions` endpoint. https://github.c...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2508/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/2508/timeline
null
completed
false
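The native API already accepts a per-request `keep_alive`, which is exactly what the /v1 endpoint made awkward to pass at the time of the issue above; a minimal sketch, assuming a local server and an already-pulled model:

```python
# Sketch of the native-API route: /api/generate accepts keep_alive directly.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",  # example model name
        "prompt": "Hello",
        "stream": False,
        "keep_alive": "10m",  # keep the model loaded for 10 minutes
    },
)
print(resp.json()["response"])
```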
https://api.github.com/repos/ollama/ollama/issues/8576
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8576/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8576/comments
https://api.github.com/repos/ollama/ollama/issues/8576/events
https://github.com/ollama/ollama/issues/8576
2,810,803,952
I_kwDOJ0Z1Ps6niXbw
8,576
How to save chat history / conversations to a file when running ollama models from terminal?
{ "login": "dimyself", "id": 36783626, "node_id": "MDQ6VXNlcjM2NzgzNjI2", "avatar_url": "https://avatars.githubusercontent.com/u/36783626?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dimyself", "html_url": "https://github.com/dimyself", "followers_url": "https://api.github.com/users/dim...
[]
open
false
null
[]
null
1
2025-01-25T06:29:28
2025-01-25T08:43:26
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello! I feel like this is a stupid question, but I can't find the answer. I don't really see documentation on this. When I run ollama models on Linux in the terminal, is there a way to save the chat/conversation to a file? I tried /save and that didn't work. I checked .ollama/history, but that only contains the ollama prompts....
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8576/timeline
null
null
false
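Nothing in the issue above shows a built-in way to log the `ollama run` REPL, so here is a sketch of one workaround: drive the chat through the `ollama` Python package and write the transcript yourself. The model name and output file are examples, not anything the CLI provides:

```python
# Workaround sketch: a hand-rolled chat loop that saves the conversation.
import json
import ollama

history = []
while True:
    user = input("> ")
    if user in ("", "/bye"):
        break
    history.append({"role": "user", "content": user})
    answer = ollama.chat(model="llama3", messages=history)["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    print(answer)

with open("conversation.json", "w") as f:
    json.dump(history, f, indent=2)  # full transcript, both sides
```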
https://api.github.com/repos/ollama/ollama/issues/4397
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4397/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4397/comments
https://api.github.com/repos/ollama/ollama/issues/4397/events
https://github.com/ollama/ollama/issues/4397
2,292,380,803
I_kwDOJ0Z1Ps6IovSD
4,397
how to keep system prompt permanently after setting SYSTEM
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-05-13T10:08:20
2024-05-14T00:46:33
2024-05-14T00:26:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? 1. /set SYSTEM 2. /show modelfile, I can see the setting is updated. 3. if I /bye, run the model again with ollama run, and /show modelfile, it seems the SYSTEM in the modelfile is restored. Is it possible for the model to keep my new SYSTEM setting permanently? ### OS macOS ### GPU Apple ### CPU App...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4397/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4397/timeline
null
completed
false
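The usual way to make a SYSTEM prompt stick is to create a new model that bakes it in. A sketch against /api/create using the inline-modelfile payload as documented around the time of this issue (the payload shape has changed in later versions); model names and the prompt are examples:

```python
# Bake a SYSTEM prompt into a new model so it survives /bye and restarts.
import requests

modelfile = 'FROM llama3\nSYSTEM """You are a concise assistant."""\n'
resp = requests.post(
    "http://localhost:11434/api/create",
    json={"name": "llama3-concise", "modelfile": modelfile, "stream": False},
)
print(resp.json())  # {"status": "success"} once the new model exists
# afterwards, `ollama run llama3-concise` keeps the SYSTEM prompt
```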
https://api.github.com/repos/ollama/ollama/issues/8454
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8454/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8454/comments
https://api.github.com/repos/ollama/ollama/issues/8454/events
https://github.com/ollama/ollama/pull/8454
2,792,255,964
PR_kwDOJ0Z1Ps6H-0JU
8,454
Align file position to general.alignment at end of decoding.
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
[]
open
false
null
[]
null
0
2025-01-16T10:03:24
2025-01-16T17:20:02
null
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8454", "html_url": "https://github.com/ollama/ollama/pull/8454", "diff_url": "https://github.com/ollama/ollama/pull/8454.diff", "patch_url": "https://github.com/ollama/ollama/pull/8454.patch", "merged_at": null }
Align the file position at the end of DecodeGGML with `general.alignment`. Fixes: #8456 Fixes: #5939
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8454/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4144
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4144/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4144/comments
https://api.github.com/repos/ollama/ollama/issues/4144/events
https://github.com/ollama/ollama/pull/4144
2,278,576,977
PR_kwDOJ0Z1Ps5uhWm1
4,144
Make maximum pending request configurable
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-05-03T23:37:51
2024-05-05T17:53:47
2024-05-05T17:53:44
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4144", "html_url": "https://github.com/ollama/ollama/pull/4144", "diff_url": "https://github.com/ollama/ollama/pull/4144.diff", "patch_url": "https://github.com/ollama/ollama/pull/4144.patch", "merged_at": "2024-05-05T17:53:44" }
Bump the maximum queued requests to 512 (from 10). Make it configurable with a new env var `OLLAMA_MAX_QUEUE`. Return a 503 when the server is too busy instead of a more generic 500. Fixes #4124 With the added integration test, here are some quick memory stats on linux: - Just starting ollama RSS 429.0m - Load o...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4144/timeline
null
null
true
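From a client's perspective, the 503 behavior above suggests backing off and retrying while the server's queue (capped by OLLAMA_MAX_QUEUE) is full; the backoff policy below is illustrative, not something the PR prescribes:

```python
# Client-side sketch: retry with exponential backoff on a 503 "queue full".
import time
import requests

def generate_with_retry(prompt: str, retries: int = 5) -> str:
    for attempt in range(retries):
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama2", "prompt": prompt, "stream": False},
        )
        if resp.status_code != 503:  # 503 now means "too busy, try later"
            resp.raise_for_status()
            return resp.json()["response"]
        time.sleep(2 ** attempt)  # exponential backoff before retrying
    raise RuntimeError("server stayed busy after all retries")

print(generate_with_retry("Why is the sky blue?"))
```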
https://api.github.com/repos/ollama/ollama/issues/1660
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1660/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1660/comments
https://api.github.com/repos/ollama/ollama/issues/1660/events
https://github.com/ollama/ollama/issues/1660
2,052,754,121
I_kwDOJ0Z1Ps56WorJ
1,660
Docker image for quantize/convert no longer working
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
[]
closed
false
null
[]
null
2
2023-12-21T16:44:33
2023-12-21T17:21:41
2023-12-21T17:21:41
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have an older version of the image on my Mac and converting a model works fine. But I pulled it to a new machine and am getting an error about protobufs. ``` You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that ...
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1660/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2340
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2340/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2340/comments
https://api.github.com/repos/ollama/ollama/issues/2340/events
https://github.com/ollama/ollama/pull/2340
2,116,716,437
PR_kwDOJ0Z1Ps5l7VJQ
2,340
Add llm-ollama plugin for Datasette's LLM CLI to README
{ "login": "easp", "id": 414705, "node_id": "MDQ6VXNlcjQxNDcwNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/414705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/easp", "html_url": "https://github.com/easp", "followers_url": "https://api.github.com/users/easp/followers", ...
[]
closed
false
null
[]
null
1
2024-02-03T23:37:18
2024-02-03T23:40:51
2024-02-03T23:40:50
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2340", "html_url": "https://github.com/ollama/ollama/pull/2340", "diff_url": "https://github.com/ollama/ollama/pull/2340.diff", "patch_url": "https://github.com/ollama/ollama/pull/2340.patch", "merged_at": "2024-02-03T23:40:50" }
The Datasette project's LLM CLI provides a common interface to a variety of LLM APIs and local LLMs. This PR adds a link to an Ollama plugin for that tool.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2340/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3481
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3481/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3481/comments
https://api.github.com/repos/ollama/ollama/issues/3481/events
https://github.com/ollama/ollama/pull/3481
2,224,297,232
PR_kwDOJ0Z1Ps5rpGQd
3,481
CI subprocess path fix
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
1
2024-04-04T02:13:10
2024-04-04T02:29:13
2024-04-04T02:29:10
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3481", "html_url": "https://github.com/ollama/ollama/pull/3481", "diff_url": "https://github.com/ollama/ollama/pull/3481.diff", "patch_url": "https://github.com/ollama/ollama/pull/3481.patch", "merged_at": "2024-04-04T02:29:10" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3481/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6770
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6770/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6770/comments
https://api.github.com/repos/ollama/ollama/issues/6770/events
https://github.com/ollama/ollama/issues/6770
2,521,230,892
I_kwDOJ0Z1Ps6WRu4s
6,770
Library missing from ollama when running it in Docker
{ "login": "factor3", "id": 138332567, "node_id": "U_kgDOCD7Jlw", "avatar_url": "https://avatars.githubusercontent.com/u/138332567?v=4", "gravatar_id": "", "url": "https://api.github.com/users/factor3", "html_url": "https://github.com/factor3", "followers_url": "https://api.github.com/users/factor3/foll...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" }, { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVw...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-09-12T03:06:53
2024-09-12T23:19:48
2024-09-12T23:19:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am attempting to run the Docker version of ollama with an NVIDIA GPU. I have installed all the software described at https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html I attempted to run the ollama container: docker run -d --gpus=all -v ./ollama:...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6770/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5050
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5050/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5050/comments
https://api.github.com/repos/ollama/ollama/issues/5050/events
https://github.com/ollama/ollama/issues/5050
2,354,119,365
I_kwDOJ0Z1Ps6MUQLF
5,050
Windows Based Ollama Updates Imposing Unjust Authority over Independent Applications
{ "login": "Soul2294", "id": 15517546, "node_id": "MDQ6VXNlcjE1NTE3NTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/15517546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Soul2294", "html_url": "https://github.com/Soul2294", "followers_url": "https://api.github.com/users/Sou...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-06-14T21:08:59
2024-06-19T16:13:41
2024-06-19T16:13:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Every time ollama needs to be restarted for an update, it either directly requests that I close OBS or outright shuts it down mid-stream. I couldn't for the life of me determine why, but perhaps you're both using the same and/or conflicting libraries? Regardless of the why, it is completely sensel...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5050/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3178
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3178/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3178/comments
https://api.github.com/repos/ollama/ollama/issues/3178/events
https://github.com/ollama/ollama/pull/3178
2,189,825,683
PR_kwDOJ0Z1Ps5p0UUC
3,178
Add Saddle
{ "login": "jikkuatwork", "id": 113770409, "node_id": "U_kgDOBsf_qQ", "avatar_url": "https://avatars.githubusercontent.com/u/113770409?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jikkuatwork", "html_url": "https://github.com/jikkuatwork", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
0
2024-03-16T06:55:03
2024-03-25T18:54:09
2024-03-25T18:54:09
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3178", "html_url": "https://github.com/ollama/ollama/pull/3178", "diff_url": "https://github.com/ollama/ollama/pull/3178.diff", "patch_url": "https://github.com/ollama/ollama/pull/3178.patch", "merged_at": "2024-03-25T18:54:09" }
Another simple, no-build, no-setup web interface.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3178/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7284
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7284/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7284/comments
https://api.github.com/repos/ollama/ollama/issues/7284/events
https://github.com/ollama/ollama/issues/7284
2,601,193,879
I_kwDOJ0Z1Ps6bCxGX
7,284
Is default install location configurable
{ "login": "wgong", "id": 329928, "node_id": "MDQ6VXNlcjMyOTkyOA==", "avatar_url": "https://avatars.githubusercontent.com/u/329928?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wgong", "html_url": "https://github.com/wgong", "followers_url": "https://api.github.com/users/wgong/followers"...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
2
2024-10-21T03:29:07
2024-10-22T18:46:05
2024-10-22T18:46:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I run Ollama on Ubuntu, where its default home is `/usr/share/ollama`. Recently, I ran out of space on that partition after experimenting with quite a few models. I then modified the `install.sh` script to install Ollama to `/opt/ollama`. This worked when starting Ollama on the terminal by running `ollama serve`; ...
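A minimal sketch of the usual workaround, assuming the `OLLAMA_MODELS` environment variable (which Ollama reads to locate its model store); the `/opt/ollama/models` path is just this report's example location:

```python
import os
import subprocess

# Start the server with the model store pointed at the larger partition.
# OLLAMA_MODELS overrides the default location under the service user's
# home (e.g. /usr/share/ollama/.ollama/models on Linux).
env = dict(os.environ, OLLAMA_MODELS="/opt/ollama/models")
subprocess.run(["ollama", "serve"], env=env, check=True)
```

For the systemd service, the same variable goes into an `Environment=` line via `systemctl edit ollama.service`, which is likely why editing `install.sh` alone did not carry over to the service.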
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7284/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6697
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6697/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6697/comments
https://api.github.com/repos/ollama/ollama/issues/6697/events
https://github.com/ollama/ollama/issues/6697
2,512,230,580
I_kwDOJ0Z1Ps6VvZi0
6,697
IGPUMemLimit/rocmMinimumMemory are undefined
{ "login": "wangzd0209", "id": 99313728, "node_id": "U_kgDOBetoQA", "avatar_url": "https://avatars.githubusercontent.com/u/99313728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangzd0209", "html_url": "https://github.com/wangzd0209", "followers_url": "https://api.github.com/users/wangz...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-09-08T06:52:06
2024-09-09T16:15:44
2024-09-09T16:15:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I first run the code with GoLand, I cannot compile it; it says IGPUMemLimit/rocmMinimumMemory is undefined. IGPUMemLimit and rocmMinimumMemory are just constants. Can anyone help me? ![Screenshot 2024-09-08 145031](https://github.com/user-attachments/assets/016bc32f-9f31-4837-813a-9004c6ea99e5) ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6697/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4379
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4379/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4379/comments
https://api.github.com/repos/ollama/ollama/issues/4379/events
https://github.com/ollama/ollama/pull/4379
2,291,383,212
PR_kwDOJ0Z1Ps5vLtrr
4,379
Update `LlamaScript` to point to new link from Legacy link.
{ "login": "zanderlewis", "id": 158775116, "node_id": "U_kgDOCXa3TA", "avatar_url": "https://avatars.githubusercontent.com/u/158775116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanderlewis", "html_url": "https://github.com/zanderlewis", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
0
2024-05-12T15:25:07
2024-05-14T01:08:32
2024-05-14T01:08:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4379", "html_url": "https://github.com/ollama/ollama/pull/4379", "diff_url": "https://github.com/ollama/ollama/pull/4379.diff", "patch_url": "https://github.com/ollama/ollama/pull/4379.patch", "merged_at": "2024-05-14T01:08:32" }
Still used Legacy link.
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4379/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4379/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5083
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5083/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5083/comments
https://api.github.com/repos/ollama/ollama/issues/5083/events
https://github.com/ollama/ollama/issues/5083
2,355,837,403
I_kwDOJ0Z1Ps6Maznb
5,083
Cannot run in musl and busybox core systems
{ "login": "asimovc", "id": 142914286, "node_id": "U_kgDOCISy7g", "avatar_url": "https://avatars.githubusercontent.com/u/142914286?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asimovc", "html_url": "https://github.com/asimovc", "followers_url": "https://api.github.com/users/asimovc/foll...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5755339642, "node_id": ...
open
false
null
[]
null
7
2024-06-16T15:24:14
2024-06-24T06:16:51
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I install ollama on my musl-based [distro](kisslinux.github.io) it installs but cannot execute; I think this is because the binary is linked against glibc. Also, ollama's install script needs the `lspci -d` option for GPU detection, and busybox doesn't have that option, so the i...
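A hypothetical fallback for the `lspci -d` gap, reading PCI vendor IDs straight from sysfs (standard on Linux); this is a sketch of one busybox-friendly alternative, not what `install.sh` actually does:

```python
from pathlib import Path

# Detect GPUs without shelling out to `lspci -d`, which busybox lacks:
# every PCI device exposes its vendor ID under /sys/bus/pci/devices.
VENDORS = {"0x10de": "NVIDIA", "0x1002": "AMD"}

for vendor_file in Path("/sys/bus/pci/devices").glob("*/vendor"):
    vendor = vendor_file.read_text().strip()
    if vendor in VENDORS:
        print(f"{VENDORS[vendor]} device at {vendor_file.parent.name}")
```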
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5083/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5083/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7157
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7157/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7157/comments
https://api.github.com/repos/ollama/ollama/issues/7157/events
https://github.com/ollama/ollama/pull/7157
2,577,084,140
PR_kwDOJ0Z1Ps5-Iy5-
7,157
Remove submodule and shift to Go server - 0.4.0
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
2
2024-10-09T22:33:44
2024-10-30T17:34:32
2024-10-30T17:34:28
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7157", "html_url": "https://github.com/ollama/ollama/pull/7157", "diff_url": "https://github.com/ollama/ollama/pull/7157.diff", "patch_url": "https://github.com/ollama/ollama/pull/7157.patch", "merged_at": "2024-10-30T17:34:28" }
The Go server is now available in RC form at https://github.com/ollama/ollama/releases with 0.4.0. These changes are also in [dhiltgen/remove_submodule](https://github.com/ollama/ollama/tree/dhiltgen/remove_submodule), which is currently being used to build the RCs for the release. As we near finalizing the release,...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7157/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2139
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2139/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2139/comments
https://api.github.com/repos/ollama/ollama/issues/2139/events
https://github.com/ollama/ollama/issues/2139
2,094,478,009
I_kwDOJ0Z1Ps581zK5
2,139
Ollama doesn't generate text in newer version of llama index
{ "login": "Bearsaerker", "id": 92314812, "node_id": "U_kgDOBYCcvA", "avatar_url": "https://avatars.githubusercontent.com/u/92314812?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bearsaerker", "html_url": "https://github.com/Bearsaerker", "followers_url": "https://api.github.com/users/Be...
[]
closed
false
null
[]
null
0
2024-01-22T18:12:17
2024-01-22T18:13:41
2024-01-22T18:13:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have a peculiar problem. As of llama index version 0.9.22 ollama is not able to produce text for me. I downgraded and tested almost all versions from 0.9.1 up to 0.9.21. 0.9.21 is the last version in which ollama is able to produce text with llama index. I have it integrated as " llm = Ollama(model="Solar", tempera...
{ "login": "Bearsaerker", "id": 92314812, "node_id": "U_kgDOBYCcvA", "avatar_url": "https://avatars.githubusercontent.com/u/92314812?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bearsaerker", "html_url": "https://github.com/Bearsaerker", "followers_url": "https://api.github.com/users/Be...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2139/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5640
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5640/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5640/comments
https://api.github.com/repos/ollama/ollama/issues/5640/events
https://github.com/ollama/ollama/issues/5640
2,404,211,759
I_kwDOJ0Z1Ps6PTVwv
5,640
Pass array of messages as an argument
{ "login": "M3cubo", "id": 1382596, "node_id": "MDQ6VXNlcjEzODI1OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1382596?v=4", "gravatar_id": "", "url": "https://api.github.com/users/M3cubo", "html_url": "https://github.com/M3cubo", "followers_url": "https://api.github.com/users/M3cubo/foll...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-07-11T22:03:16
2024-07-15T11:01:24
2024-07-15T11:01:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
In the docs, it shows that using the API you can pass an array of messages: "messages": [ { "role": "user", "content": "why is the sky blue?" } ] My question is: how can I do it with the CLI? Is it possible? I'm looking into something like: > ollama run "model" "prompt" "messages" where the argum...
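The CLI has no flag for a messages array, but the HTTP API accepts it directly; a minimal sketch against a local server (the model name is illustrative):

```python
import requests

# POST the messages array to /api/chat; stream=False returns one JSON
# object instead of an NDJSON stream.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "why is the sky blue?"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```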
{ "login": "M3cubo", "id": 1382596, "node_id": "MDQ6VXNlcjEzODI1OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1382596?v=4", "gravatar_id": "", "url": "https://api.github.com/users/M3cubo", "html_url": "https://github.com/M3cubo", "followers_url": "https://api.github.com/users/M3cubo/foll...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5640/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2400
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2400/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2400/comments
https://api.github.com/repos/ollama/ollama/issues/2400/events
https://github.com/ollama/ollama/issues/2400
2,124,080,088
I_kwDOJ0Z1Ps5-muPY
2,400
Sending empty prompt to `llm.Predict` hangs
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-02-07T23:25:46
2024-02-21T00:03:54
2024-02-21T00:03:53
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This is a less severe/internal version of https://github.com/ollama/ollama/issues/2397, where sending an empty prompt `""` to the runner causes a hang.
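A rough reproduction at the HTTP layer, under the assumption that an empty `prompt` exercises the same runner path as the internal `llm.Predict` call (the issue itself is about the internal call, so this is only an approximation):

```python
import requests

# Send an empty prompt; the timeout turns a silent hang into a visible
# failure instead of blocking forever.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "", "stream": False},
    timeout=30,
)
print(resp.status_code)
```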
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2400/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4673
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4673/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4673/comments
https://api.github.com/repos/ollama/ollama/issues/4673/events
https://github.com/ollama/ollama/issues/4673
2,320,186,365
I_kwDOJ0Z1Ps6KSzv9
4,673
BUG: PHI-3
{ "login": "MichaelFomenko", "id": 12229584, "node_id": "MDQ6VXNlcjEyMjI5NTg0", "avatar_url": "https://avatars.githubusercontent.com/u/12229584?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichaelFomenko", "html_url": "https://github.com/MichaelFomenko", "followers_url": "https://api.gi...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
3
2024-05-28T05:55:38
2024-06-25T08:09:29
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I start the conversation in German, Phi-3 Mini and Medium work fine. But after a few conversations, the models slowly start producing gibberish and nonsense, repeating phrases, words and tokens, and stop answering my questions. When I start a new conversation, it wo...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4673/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4673/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/365
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/365/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/365/comments
https://api.github.com/repos/ollama/ollama/issues/365/events
https://github.com/ollama/ollama/issues/365
1,854,045,776
I_kwDOJ0Z1Ps5ugn5Q
365
nous-hermes wrong model name?
{ "login": "carbocation", "id": 218804, "node_id": "MDQ6VXNlcjIxODgwNA==", "avatar_url": "https://avatars.githubusercontent.com/u/218804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/carbocation", "html_url": "https://github.com/carbocation", "followers_url": "https://api.github.com/user...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2023-08-17T00:01:57
2023-08-17T03:42:33
2023-08-17T03:42:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The README currently says: https://github.com/jmorganca/ollama/blob/5ee611642049e9e4b8facb865325b33cb7343f06/README.md?plain=1#L42 But that pulls a 3GB model. Shouldn't this instead be suffixed with `:13b` like so? | Model | Parameters | Size | Download | | ---------...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/365/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3787
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3787/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3787/comments
https://api.github.com/repos/ollama/ollama/issues/3787/events
https://github.com/ollama/ollama/issues/3787
2,254,727,557
I_kwDOJ0Z1Ps6GZGmF
3,787
OOM with mixtral 8x22b
{ "login": "bozo32", "id": 102033973, "node_id": "U_kgDOBhTqNQ", "avatar_url": "https://avatars.githubusercontent.com/u/102033973?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bozo32", "html_url": "https://github.com/bozo32", "followers_url": "https://api.github.com/users/bozo32/follower...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
null
[]
null
7
2024-04-20T21:22:53
2024-05-18T07:25:03
2024-05-18T07:25:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? OOM with mixtral on an A100 80GB: it gets 47/57 layers onto the GPU and then chokes. Running off the binary; I just redownloaded it and re-ran and still got the same issue. No problems with models that fit entirely into VRAM. (base) tamas002@gpun201:~/ai$ ./ollama run mixtral:8x22b-instruct-v0.1-q5...
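One workaround sketch for partial-offload OOMs like this: cap the offloaded layer count yourself via the `num_gpu` option instead of letting the scheduler estimate it. The value 40 is illustrative (below the 47 layers that triggered the OOM), and the model tag is shortened here:

```python
import requests

# Force a lower GPU layer count so the KV cache and compute buffers
# still fit in the remaining VRAM.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mixtral:8x22b",
        "prompt": "hello",
        "stream": False,
        "options": {"num_gpu": 40},
    },
)
print(resp.json()["response"])
```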
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3787/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/996
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/996/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/996/comments
https://api.github.com/repos/ollama/ollama/issues/996/events
https://github.com/ollama/ollama/pull/996
1,977,265,901
PR_kwDOJ0Z1Ps5el7il
996
Add gen.nvim as community contribution
{ "login": "David-Kunz", "id": 1009936, "node_id": "MDQ6VXNlcjEwMDk5MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/1009936?v=4", "gravatar_id": "", "url": "https://api.github.com/users/David-Kunz", "html_url": "https://github.com/David-Kunz", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
1
2023-11-04T10:08:26
2023-11-06T18:51:41
2023-11-06T18:51:41
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/996", "html_url": "https://github.com/ollama/ollama/pull/996", "diff_url": "https://github.com/ollama/ollama/pull/996.diff", "patch_url": "https://github.com/ollama/ollama/pull/996.patch", "merged_at": "2023-11-06T18:51:41" }
Hi, [gen.nvim](https://github.com/David-Kunz/gen.nvim) is a Neovim extension from which you can invoke Ollama. Best regards, David
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/996/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/996/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7558
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7558/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7558/comments
https://api.github.com/repos/ollama/ollama/issues/7558/events
https://github.com/ollama/ollama/issues/7558
2,641,293,364
I_kwDOJ0Z1Ps6dbvA0
7,558
llama3.2-vision crash on multiple cuda GPUs - unspecified launch failure
{ "login": "HuronExplodium", "id": 124458994, "node_id": "U_kgDOB2sX8g", "avatar_url": "https://avatars.githubusercontent.com/u/124458994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HuronExplodium", "html_url": "https://github.com/HuronExplodium", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg...
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
13
2024-11-07T15:00:28
2024-11-14T17:40:05
2024-11-14T17:40:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Running 3.2-vision:11b works with text and images. Running 3.2-vision:90b works with text but segfaults on images. Running llava works with text and images. Debug log from the segfault with text and image: mllama_model_load: description: vision encoder for Mllama mllama_model_load: GGUF ve...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7558/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7981
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7981/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7981/comments
https://api.github.com/repos/ollama/ollama/issues/7981/events
https://github.com/ollama/ollama/issues/7981
2,724,269,697
I_kwDOJ0Z1Ps6iYQ6B
7,981
Internet Access To The Model
{ "login": "dragonked2", "id": 66541902, "node_id": "MDQ6VXNlcjY2NTQxOTAy", "avatar_url": "https://avatars.githubusercontent.com/u/66541902?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dragonked2", "html_url": "https://github.com/dragonked2", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-12-07T02:22:24
2024-12-20T22:17:02
2024-12-20T22:17:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I request adding internet access to the model so it can use a browser and crawl required data.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7981/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3345
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3345/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3345/comments
https://api.github.com/repos/ollama/ollama/issues/3345/events
https://github.com/ollama/ollama/pull/3345
2,206,483,529
PR_kwDOJ0Z1Ps5qsnvd
3,345
[wip] adds a welcome message to the interactive mode
{ "login": "xbasset", "id": 8493278, "node_id": "MDQ6VXNlcjg0OTMyNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/8493278?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xbasset", "html_url": "https://github.com/xbasset", "followers_url": "https://api.github.com/users/xbasset/...
[]
closed
false
null
[]
null
0
2024-03-25T19:05:11
2024-03-27T16:15:41
2024-03-27T16:15:31
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3345", "html_url": "https://github.com/ollama/ollama/pull/3345", "diff_url": "https://github.com/ollama/ollama/pull/3345.diff", "patch_url": "https://github.com/ollama/ollama/pull/3345.patch", "merged_at": null }
Suggestion to add a welcome message to give clarity on the model / version of the model currently used in interactive mode. Following that conversation on Twitter: https://x.com/xbasset/status/1771934995738706322?s=20
{ "login": "xbasset", "id": 8493278, "node_id": "MDQ6VXNlcjg0OTMyNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/8493278?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xbasset", "html_url": "https://github.com/xbasset", "followers_url": "https://api.github.com/users/xbasset/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3345/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7062
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7062/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7062/comments
https://api.github.com/repos/ollama/ollama/issues/7062/events
https://github.com/ollama/ollama/issues/7062
2,559,321,173
I_kwDOJ0Z1Ps6YjCRV
7,062
Mistral Pixtral 12B
{ "login": "RajbirSehrawat", "id": 18544802, "node_id": "MDQ6VXNlcjE4NTQ0ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/18544802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RajbirSehrawat", "html_url": "https://github.com/RajbirSehrawat", "followers_url": "https://api.gi...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2024-10-01T13:51:40
2024-10-03T16:58:44
2024-10-03T16:58:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Can you please add Pixtral 12B to the list? While trying to install it, I am not able to use this model.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7062/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8218
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8218/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8218/comments
https://api.github.com/repos/ollama/ollama/issues/8218/events
https://github.com/ollama/ollama/issues/8218
2,756,025,838
I_kwDOJ0Z1Ps6kRZ3u
8,218
Question: Commercial Usage License Confirmation and Data Collection Clarification
{ "login": "ttamoud", "id": 57901415, "node_id": "MDQ6VXNlcjU3OTAxNDE1", "avatar_url": "https://avatars.githubusercontent.com/u/57901415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ttamoud", "html_url": "https://github.com/ttamoud", "followers_url": "https://api.github.com/users/ttamou...
[]
open
false
null
[]
null
0
2024-12-23T12:58:03
2024-12-23T12:58:03
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
First, I want to express my sincere appreciation for Ollama. I've been using it daily in my development workflow, and it has significantly improved my productivity. The speed and ease of use are remarkable, and I'm constantly impressed by the ongoing improvements. Details: As a happy user looking to expand usage, I h...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8218/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4964
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4964/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4964/comments
https://api.github.com/repos/ollama/ollama/issues/4964/events
https://github.com/ollama/ollama/issues/4964
2,344,458,655
I_kwDOJ0Z1Ps6LvZmf
4,964
ollama run qwen2:72b-instruct-q2_K but Error: llama runner process has terminated: signal: aborted (core dumped)
{ "login": "mikestut", "id": 88723510, "node_id": "MDQ6VXNlcjg4NzIzNTEw", "avatar_url": "https://avatars.githubusercontent.com/u/88723510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mikestut", "html_url": "https://github.com/mikestut", "followers_url": "https://api.github.com/users/mik...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
5
2024-06-10T17:24:32
2024-07-01T08:55:43
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? 6月 11 01:17:54 Venue-vPro ollama[2760]: time=2024-06-11T01:17:54.332+08:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="ll> 6月 11 01:17:54 Venue-vPro ollama[2760]: llm_load_vocab: special tokens cache size = 421 6月 11 01:17:54 Venue-vPro ollama[2760]: ll...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4964/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7449
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7449/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7449/comments
https://api.github.com/repos/ollama/ollama/issues/7449/events
https://github.com/ollama/ollama/issues/7449
2,627,008,382
I_kwDOJ0Z1Ps6clPd-
7,449
Support for BGE-Multilingual-Gemma2
{ "login": "JPC612", "id": 177754485, "node_id": "U_kgDOCphRdQ", "avatar_url": "https://avatars.githubusercontent.com/u/177754485?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JPC612", "html_url": "https://github.com/JPC612", "followers_url": "https://api.github.com/users/JPC612/follower...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
2
2024-10-31T14:30:22
2024-11-18T08:49:13
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I would be very grateful for the support of the BGE-Multilingual-Gemma2, an LLM-based multilingual embedding model. https://huggingface.co/BAAI/bge-multilingual-gemma2
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7449/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7449/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3676
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3676/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3676/comments
https://api.github.com/repos/ollama/ollama/issues/3676/events
https://github.com/ollama/ollama/issues/3676
2,246,498,648
I_kwDOJ0Z1Ps6F5tlY
3,676
Grok AI support in ollama
{ "login": "olumolu", "id": 162728301, "node_id": "U_kgDOCbMJbQ", "avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olumolu", "html_url": "https://github.com/olumolu", "followers_url": "https://api.github.com/users/olumolu/foll...
[]
closed
false
null
[]
null
1
2024-04-16T16:51:00
2024-04-16T20:41:23
2024-04-16T20:41:22
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? https://huggingface.co/xai-org/grok-1
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3676/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8532
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8532/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8532/comments
https://api.github.com/repos/ollama/ollama/issues/8532/events
https://github.com/ollama/ollama/issues/8532
2,803,786,232
I_kwDOJ0Z1Ps6nHmH4
8,532
ollama only using cpu even with gpu found
{ "login": "nyllewin", "id": 22198088, "node_id": "MDQ6VXNlcjIyMTk4MDg4", "avatar_url": "https://avatars.githubusercontent.com/u/22198088?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nyllewin", "html_url": "https://github.com/nyllewin", "followers_url": "https://api.github.com/users/nyl...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
8
2025-01-22T08:58:01
2025-01-29T12:57:01
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello, this has been reported at least twice in the past; I am here to report it a third time because something doesn't seem right. Relevant issues: https://github.com/ollama/ollama/issues/8485 https://github.com/ollama/ollama/issues/8467 Same error, same fix with ' just reinstalling within...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8532/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8532/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4005
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4005/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4005/comments
https://api.github.com/repos/ollama/ollama/issues/4005/events
https://github.com/ollama/ollama/issues/4005
2,267,611,165
I_kwDOJ0Z1Ps6HKQAd
4,005
curl: (7) Failed to connect to 172.16.105.65 port 11434 after 0 ms: Couldn't connect to server
{ "login": "moye12325", "id": 43414308, "node_id": "MDQ6VXNlcjQzNDE0MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/43414308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moye12325", "html_url": "https://github.com/moye12325", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-04-28T14:45:00
2025-01-30T07:28:12
2024-04-28T15:01:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? **I have set up listening and port settings** ![image](https://github.com/ollama/ollama/assets/43414308/5e7adbf3-e29b-4fc2-9bae-490a82064cbf) **And I can use it on my server** ![image](https://github.com/ollama/ollama/assets/43414308/fde96efd-c4c4-4b04-8e17-ac6fdf8ea591) ![image](https:/...
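For connection failures like this, the usual first check is the bind address: by default the server listens on 127.0.0.1 only, and the `OLLAMA_HOST` variable controls it. A minimal sketch:

```python
import os
import subprocess

# Listen on all interfaces so other hosts can reach port 11434.
env = dict(os.environ, OLLAMA_HOST="0.0.0.0:11434")
subprocess.run(["ollama", "serve"], env=env, check=True)
```

If the variable is already set, as the screenshots here suggest, a firewall rule blocking port 11434 is the next suspect.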
{ "login": "moye12325", "id": 43414308, "node_id": "MDQ6VXNlcjQzNDE0MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/43414308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moye12325", "html_url": "https://github.com/moye12325", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4005/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4366
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4366/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4366/comments
https://api.github.com/repos/ollama/ollama/issues/4366/events
https://github.com/ollama/ollama/pull/4366
2,291,062,570
PR_kwDOJ0Z1Ps5vKtbh
4,366
case sensitive filepaths
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-05-11T20:51:28
2024-05-11T21:12:37
2024-05-11T21:12:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4366", "html_url": "https://github.com/ollama/ollama/pull/4366", "diff_url": "https://github.com/ollama/ollama/pull/4366.diff", "patch_url": "https://github.com/ollama/ollama/pull/4366.patch", "merged_at": "2024-05-11T21:12:37" }
TODO: filenames can be case sensitive but filepaths should not be. However, this needs to be backwards compatible; it currently is not, so fix the regression first. Resolves #4346
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4366/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/908
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/908/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/908/comments
https://api.github.com/repos/ollama/ollama/issues/908/events
https://github.com/ollama/ollama/issues/908
1,962,610,243
I_kwDOJ0Z1Ps50-w5D
908
Can the chatglm2 model be supported?
{ "login": "ddv404", "id": 97394404, "node_id": "U_kgDOBc4e5A", "avatar_url": "https://avatars.githubusercontent.com/u/97394404?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddv404", "html_url": "https://github.com/ddv404", "followers_url": "https://api.github.com/users/ddv404/followers"...
[]
closed
false
null
[]
null
1
2023-10-26T02:58:43
2023-10-26T03:12:05
2023-10-26T03:12:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Can the chatglm2 model be supported?
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/908/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5417
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5417/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5417/comments
https://api.github.com/repos/ollama/ollama/issues/5417/events
https://github.com/ollama/ollama/issues/5417
2,384,659,698
I_kwDOJ0Z1Ps6OIwTy
5,417
Cloudflare Tunnel + Vercel AI SDK = `[AI_JSONParseError]: JSON parsing failed`
{ "login": "KastanDay", "id": 13607221, "node_id": "MDQ6VXNlcjEzNjA3MjIx", "avatar_url": "https://avatars.githubusercontent.com/u/13607221?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KastanDay", "html_url": "https://github.com/KastanDay", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-07-01T20:17:12
2024-07-24T22:29:24
2024-07-24T22:29:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ## Intro Streaming from `Ollama -> Cloudflare Tunnel -> Vercel AI SDK` errors when parsing the stream: `[AI_JSONParseError]: JSON parsing failed`. My working hypothesis is that Cloudflare Tunnel is not respecting the proper `chunk size` of each message when streaming it, causing the JSON pa...
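The hypothesis is testable client-side: if a proxy re-chunks the stream so network chunks no longer align with JSON lines, a consumer that buffers to newlines is immune. A minimal sketch of such a consumer (requests' `iter_lines()` performs the same buffering internally):

```python
import json
import requests

# Accumulate raw bytes and only parse complete newline-terminated lines,
# so it does not matter how a tunnel re-chunks the NDJSON stream.
with requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "hi"},
    stream=True,
) as resp:
    buf = b""
    for chunk in resp.iter_content(chunk_size=None):
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if line.strip():
                print(json.loads(line))
```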
{ "login": "KastanDay", "id": 13607221, "node_id": "MDQ6VXNlcjEzNjA3MjIx", "avatar_url": "https://avatars.githubusercontent.com/u/13607221?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KastanDay", "html_url": "https://github.com/KastanDay", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5417/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/5417/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/540
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/540/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/540/comments
https://api.github.com/repos/ollama/ollama/issues/540/events
https://github.com/ollama/ollama/pull/540
1,899,497,724
PR_kwDOJ0Z1Ps5afvdV
540
Allow setting ollama home directory through environment var OLLAMA_HOME.
{ "login": "JayNakrani", "id": 6269279, "node_id": "MDQ6VXNlcjYyNjkyNzk=", "avatar_url": "https://avatars.githubusercontent.com/u/6269279?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JayNakrani", "html_url": "https://github.com/JayNakrani", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
3
2023-09-16T16:47:56
2023-10-25T22:39:15
2023-10-25T22:34:41
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/540", "html_url": "https://github.com/ollama/ollama/pull/540", "diff_url": "https://github.com/ollama/ollama/pull/540.diff", "patch_url": "https://github.com/ollama/ollama/pull/540.patch", "merged_at": null }
It would be great to be able to specify a different directory location than the default user-home directory.
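The lookup the PR proposes, sketched in Python for clarity (`OLLAMA_HOME` is the PR's proposed variable name, and the PR was closed unmerged, so this is not what shipped):

```python
import os
from pathlib import Path

# Prefer OLLAMA_HOME when set, otherwise fall back to the per-user default.
ollama_home = Path(os.environ.get("OLLAMA_HOME", str(Path.home() / ".ollama")))
print(ollama_home)
```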
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/540/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/540/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1821
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1821/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1821/comments
https://api.github.com/repos/ollama/ollama/issues/1821/events
https://github.com/ollama/ollama/issues/1821
2,068,522,984
I_kwDOJ0Z1Ps57Syfo
1,821
amd64 binary for version 0.1.18 won't work with rocm-6.0.0
{ "login": "chirvo", "id": 1088243, "node_id": "MDQ6VXNlcjEwODgyNDM=", "avatar_url": "https://avatars.githubusercontent.com/u/1088243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chirvo", "html_url": "https://github.com/chirvo", "followers_url": "https://api.github.com/users/chirvo/foll...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-01-06T10:27:48
2024-01-11T22:00:49
2024-01-11T22:00:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This happens when using the Linux binary downloaded from [the web page](https://ollama.ai/download/ollama-linux-amd64). ``` 2024/01/06 09:03:56 images.go:834: total blobs: 0 ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1821/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5757
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5757/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5757/comments
https://api.github.com/repos/ollama/ollama/issues/5757/events
https://github.com/ollama/ollama/pull/5757
2,414,669,823
PR_kwDOJ0Z1Ps51spWK
5,757
bump go version to 1.22.5 to fix security vulnerabilities in docker
{ "login": "lreed-mdsol", "id": 72270603, "node_id": "MDQ6VXNlcjcyMjcwNjAz", "avatar_url": "https://avatars.githubusercontent.com/u/72270603?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lreed-mdsol", "html_url": "https://github.com/lreed-mdsol", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
0
2024-07-17T21:59:06
2024-07-22T23:32:43
2024-07-22T23:32:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5757", "html_url": "https://github.com/ollama/ollama/pull/5757", "diff_url": "https://github.com/ollama/ollama/pull/5757.diff", "patch_url": "https://github.com/ollama/ollama/pull/5757.patch", "merged_at": "2024-07-22T23:32:43" }
The existing version 1.22.1 is showing security vulnerabilities when scanned by Prisma. Scan results for: image ollama/ollama:latest sha256:56505af4d7ed5e66de96c124c21312aee6cdd518098efd0fa524738f24b1a701 Vulnerabilities | CVE | SEVERITY | CVSS | PACKAGE | VERSION | ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5757/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5757/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5304
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5304/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5304/comments
https://api.github.com/repos/ollama/ollama/issues/5304/events
https://github.com/ollama/ollama/issues/5304
2,375,630,707
I_kwDOJ0Z1Ps6NmT9z
5,304
Support for multimodal embedding models
{ "login": "k0marov", "id": 95040709, "node_id": "U_kgDOBao0xQ", "avatar_url": "https://avatars.githubusercontent.com/u/95040709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/k0marov", "html_url": "https://github.com/k0marov", "followers_url": "https://api.github.com/users/k0marov/follow...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
3
2024-06-26T15:13:57
2024-11-12T19:04:22
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi! It seems I'm not able to find a REST API endpoint for generating embeddings for an image, in other words, providing functionality for using models like CLIP which can take both text and images as input. But these models are very useful in many applications, such as semantic image search, classification, etc. ...
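For contrast, a minimal sketch of the existing text-only embeddings endpoint; there is no image field to send, which is exactly the gap this issue describes (the model name is illustrative):

```python
import requests

# Text goes in as `prompt`; there is no way to pass an image here.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "a photo of a cat"},
)
print(len(resp.json()["embedding"]))
```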
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5304/reactions", "total_count": 28, "+1": 28, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5304/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/197
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/197/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/197/comments
https://api.github.com/repos/ollama/ollama/issues/197/events
https://github.com/ollama/ollama/pull/197
1,818,958,365
PR_kwDOJ0Z1Ps5WQqRx
197
remove file on digest mismatch
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-07-24T18:54:33
2023-09-08T15:13:26
2023-07-24T19:59:12
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/197", "html_url": "https://github.com/ollama/ollama/pull/197", "diff_url": "https://github.com/ollama/ollama/pull/197.diff", "patch_url": "https://github.com/ollama/ollama/pull/197.patch", "merged_at": "2023-07-24T19:59:12" }
Ideally this never happens (the download resume should prevent it), but if there is a digest mismatch the specific blob should be removed automatically rather than the user removing it manually. Related to #170
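The check-then-remove logic, sketched in Python (the actual change lives in Ollama's Go download path, so this only illustrates the technique):

```python
import hashlib
import os

def verify_or_remove(path: str, expected_sha256: str) -> bool:
    """Delete a downloaded blob whose digest does not match, so the next
    pull starts clean instead of the user removing it by hand."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    if h.hexdigest() == expected_sha256:
        return True
    os.remove(path)
    return False
```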
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/197/timeline
null
null
true
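The PR above deletes a downloaded blob when its digest does not match, instead of leaving the user to clean it up by hand. A minimal sketch of that idea, assuming a blob stored at `path` with an expected `sha256:`-prefixed digest; the helper below is illustrative, not Ollama's actual code.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// verifyBlob compares the file's SHA-256 digest against the expected value
// and deletes the file on mismatch so the next pull re-downloads it cleanly.
func verifyBlob(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	actual := fmt.Sprintf("sha256:%x", h.Sum(nil))
	if actual != expected {
		// Remove the corrupt blob rather than asking the user to do it manually.
		if err := os.Remove(path); err != nil {
			return err
		}
		return fmt.Errorf("digest mismatch for %s: got %s, want %s (file removed)", path, actual, expected)
	}
	return nil
}

func main() {
	if err := verifyBlob("blob.bin", "sha256:deadbeef"); err != nil {
		fmt.Println(err)
	}
}
```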
https://api.github.com/repos/ollama/ollama/issues/2637
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2637/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2637/comments
https://api.github.com/repos/ollama/ollama/issues/2637/events
https://github.com/ollama/ollama/issues/2637
2,146,959,786
I_kwDOJ0Z1Ps5_-AGq
2,637
Integrated AMD GPU support
{ "login": "DocMAX", "id": 5351323, "node_id": "MDQ6VXNlcjUzNTEzMjM=", "avatar_url": "https://avatars.githubusercontent.com/u/5351323?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DocMAX", "html_url": "https://github.com/DocMAX", "followers_url": "https://api.github.com/users/DocMAX/foll...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6433346500, "node_id": ...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
144
2024-02-21T14:56:12
2025-01-07T09:01:33
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Opening a new issue (see https://github.com/ollama/ollama/pull/2195) to track support for integrated GPUs. I have an AMD 5800U CPU with integrated graphics. As far as I have researched, ROCR lately supports integrated graphics too. Currently, Ollama seems to ignore iGPUs in general.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2637/reactions", "total_count": 32, "+1": 31, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/2637/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5250
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5250/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5250/comments
https://api.github.com/repos/ollama/ollama/issues/5250/events
https://github.com/ollama/ollama/issues/5250
2,369,600,313
I_kwDOJ0Z1Ps6NPTs5
5,250
Best performance with which GPU or CPU for a notebook?
{ "login": "olumolu", "id": 162728301, "node_id": "U_kgDOCbMJbQ", "avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olumolu", "html_url": "https://github.com/olumolu", "followers_url": "https://api.github.com/users/olumolu/foll...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-06-24T08:35:24
2024-06-25T16:22:11
2024-06-25T16:22:07
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Could not find the best performance with Ollama. Tried to run the Ollama Docker image with the ROCm AMD GPU build: `docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm`, and also in CPU-only mode: `docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ol...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5250/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8495
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8495/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8495/comments
https://api.github.com/repos/ollama/ollama/issues/8495/events
https://github.com/ollama/ollama/issues/8495
2,798,176,005
I_kwDOJ0Z1Ps6myMcF
8,495
Why do I keep getting "@@@@" as responses?
{ "login": "Jetbuzz", "id": 53119016, "node_id": "MDQ6VXNlcjUzMTE5MDE2", "avatar_url": "https://avatars.githubusercontent.com/u/53119016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jetbuzz", "html_url": "https://github.com/Jetbuzz", "followers_url": "https://api.github.com/users/Jetbuz...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
6
2025-01-20T04:47:09
2025-01-24T15:01:40
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have attached a screenshot of what is happening. I have an Nvidia 980M with 4 GB. Running the latest versions of Windows 10 and Ollama. ![Image](https://github.com/user-attachments/assets/e670af15-7220-4a98-ab74-9cc398ffb52e) ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.5.7
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8495/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3114
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3114/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3114/comments
https://api.github.com/repos/ollama/ollama/issues/3114/events
https://github.com/ollama/ollama/issues/3114
2,184,347,419
I_kwDOJ0Z1Ps6CMn8b
3,114
Using INT4 Quantization to Save VRAM with ollama
{ "login": "TraceRecursion", "id": 66545369, "node_id": "MDQ6VXNlcjY2NTQ1MzY5", "avatar_url": "https://avatars.githubusercontent.com/u/66545369?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TraceRecursion", "html_url": "https://github.com/TraceRecursion", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
3
2024-03-13T15:50:06
2024-03-14T15:05:39
2024-03-14T15:05:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello ollama team, I am currently exploring the use of ollama to run models and am interested in implementing INT4 quantization to save on VRAM usage. I have read through the documentation but would appreciate some guidance on how to properly apply INT4 quantization during the model run. Could you provide some in...
{ "login": "TraceRecursion", "id": 66545369, "node_id": "MDQ6VXNlcjY2NTQ1MzY5", "avatar_url": "https://avatars.githubusercontent.com/u/66545369?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TraceRecursion", "html_url": "https://github.com/TraceRecursion", "followers_url": "https://api.gi...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3114/timeline
null
completed
false
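For context on the question above: models in the Ollama library are published in several quantizations selected by tag (for example, `q4_0` is a 4-bit quantization). A small sketch of pulling such a tag through the REST API follows; the exact tag name is an assumption and depends on what the library publishes for a given model.

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Pull a 4-bit (q4_0) quantized variant by tag; tag availability varies per model.
	body := bytes.NewBufferString(`{"name": "llama2:7b-chat-q4_0"}`)
	resp, err := http.Post("http://localhost:11434/api/pull", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The pull endpoint streams newline-delimited JSON progress messages.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}
```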
https://api.github.com/repos/ollama/ollama/issues/4797
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4797/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4797/comments
https://api.github.com/repos/ollama/ollama/issues/4797/events
https://github.com/ollama/ollama/issues/4797
2,331,212,648
I_kwDOJ0Z1Ps6K83to
4,797
Stop token behavior changes when specifying list of stop tokens
{ "login": "ccreutzi", "id": 89011131, "node_id": "MDQ6VXNlcjg5MDExMTMx", "avatar_url": "https://avatars.githubusercontent.com/u/89011131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ccreutzi", "html_url": "https://github.com/ccreutzi", "followers_url": "https://api.github.com/users/ccr...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 7706482389, "node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q...
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
1
2024-06-03T13:48:43
2024-11-12T01:35:36
2024-11-12T01:35:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Setting a stop token of `"k=1 was"` for this call has no effect, as expected: ``` $ curl -s http://localhost:11434/api/chat -d '{ > "model": "mistral","options": {"top_k":1,"stop":["k=1 was"]}, > "stream": false, > "messages":[{"role":"user","cont...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4797/timeline
null
completed
false
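The issue above concerns the `stop` option changing sampling behavior. For reference, here is a minimal Go version of the same kind of request from the report, setting `top_k` and a list of stop strings through the `options` field; the model name and prompt are placeholders.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload, _ := json.Marshal(map[string]any{
		"model":  "mistral",
		"stream": false,
		// Runtime options ride along in "options"; "stop" takes a list of strings.
		"options": map[string]any{"top_k": 1, "stop": []string{"k=1 was"}},
		"messages": []map[string]string{
			{"role": "user", "content": "Tell me about top-k sampling."},
		},
	})
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```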
https://api.github.com/repos/ollama/ollama/issues/3825
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3825/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3825/comments
https://api.github.com/repos/ollama/ollama/issues/3825/events
https://github.com/ollama/ollama/issues/3825
2,256,736,953
I_kwDOJ0Z1Ps6GgxK5
3,825
Updating to docker 0.1.29-rocm and beyond breaks detection of GPU (Radeon Pro W6600)
{ "login": "ic4-y", "id": 61844926, "node_id": "MDQ6VXNlcjYxODQ0OTI2", "avatar_url": "https://avatars.githubusercontent.com/u/61844926?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ic4-y", "html_url": "https://github.com/ic4-y", "followers_url": "https://api.github.com/users/ic4-y/follow...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
19
2024-04-22T14:47:12
2024-05-04T21:20:20
2024-05-04T21:20:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When updating my Docker stack from the image `0.1.24-rocm` to newer versions in order to run some embedding models that crashed otherwise, I noticed that `0.1.29-rocm` and above break GPU detection on my Radeon Pro W6600. The GPU works fine in `0.1.28-rocm`. On `0.1.32-rocm` I get th...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3825/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6264
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6264/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6264/comments
https://api.github.com/repos/ollama/ollama/issues/6264/events
https://github.com/ollama/ollama/pull/6264
2,456,695,353
PR_kwDOJ0Z1Ps534qxf
6,264
Parse cpuinfo and set default threads
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
1
2024-08-08T21:51:23
2024-10-15T18:36:11
2024-10-15T18:36:08
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6264", "html_url": "https://github.com/ollama/ollama/pull/6264", "diff_url": "https://github.com/ollama/ollama/pull/6264.diff", "patch_url": "https://github.com/ollama/ollama/pull/6264.patch", "merged_at": "2024-10-15T18:36:08" }
Set the default thread count to the number of performance cores detected on the system. Without this change, the new Go server winds up picking `runtime.NumCPU` from Go, which equates to logical processors, and that results in thrashing on hyperthreading CPUs and poor CPU inference speed. We need to reduce down to ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6264/timeline
null
null
true
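The PR above derives the default thread count from physical performance cores rather than `runtime.NumCPU()`, which counts logical processors. Below is a simplified sketch of the Linux side under the assumption that counting unique (physical id, core id) pairs in `/proc/cpuinfo` approximates physical cores; the real implementation handles more cases (efficiency cores, other platforms).

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"runtime"
	"strings"
)

// physicalCores counts distinct (physical id, core id) pairs in /proc/cpuinfo,
// so hyperthread siblings are counted once. Falls back to logical CPUs on error.
func physicalCores() int {
	f, err := os.Open("/proc/cpuinfo")
	if err != nil {
		return runtime.NumCPU()
	}
	defer f.Close()

	cores := map[string]bool{}
	var physID string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		key, val, ok := strings.Cut(scanner.Text(), ":")
		if !ok {
			continue
		}
		switch strings.TrimSpace(key) {
		case "physical id":
			physID = strings.TrimSpace(val)
		case "core id":
			// "core id" follows "physical id" within each processor block.
			cores[physID+"/"+strings.TrimSpace(val)] = true
		}
	}
	if len(cores) == 0 {
		return runtime.NumCPU()
	}
	return len(cores)
}

func main() {
	fmt.Println("default threads:", physicalCores())
}
```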
https://api.github.com/repos/ollama/ollama/issues/5575
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5575/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5575/comments
https://api.github.com/repos/ollama/ollama/issues/5575/events
https://github.com/ollama/ollama/pull/5575
2,398,478,044
PR_kwDOJ0Z1Ps502TqN
5,575
Update README.md
{ "login": "elearningshow", "id": 766298, "node_id": "MDQ6VXNlcjc2NjI5OA==", "avatar_url": "https://avatars.githubusercontent.com/u/766298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elearningshow", "html_url": "https://github.com/elearningshow", "followers_url": "https://api.github.co...
[]
closed
false
null
[]
null
1
2024-07-09T15:10:35
2024-11-21T08:31:27
2024-11-21T08:31:27
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5575", "html_url": "https://github.com/ollama/ollama/pull/5575", "diff_url": "https://github.com/ollama/ollama/pull/5575.diff", "patch_url": "https://github.com/ollama/ollama/pull/5575.patch", "merged_at": "2024-11-21T08:31:27" }
I have created an easy-to-use GUI in Python that would make a great addition to the Community Integrations Web & Desktop section. - [Ollama-Kis](https://github.com/elearningshow/ollama-kis) (A simple, easy-to-use GUI with a sample custom LLM for Driver's Education)
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5575/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5575/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/204
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/204/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/204/comments
https://api.github.com/repos/ollama/ollama/issues/204/events
https://github.com/ollama/ollama/issues/204
1,819,267,669
I_kwDOJ0Z1Ps5sb9JV
204
Consider Using Standard Config Format
{ "login": "nazimamin", "id": 4207188, "node_id": "MDQ6VXNlcjQyMDcxODg=", "avatar_url": "https://avatars.githubusercontent.com/u/4207188?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nazimamin", "html_url": "https://github.com/nazimamin", "followers_url": "https://api.github.com/users/na...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6100196012, "node_id": ...
open
false
null
[]
null
7
2023-07-24T22:58:37
2024-05-31T21:49:50
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Thank you for your work; this is great and will be very helpful for the OSS community. The custom configuration file named "Modelfile" works well in the context of this project. I would like to discuss the possibility of using a standardized config format such as JSON5, TOML, YAML, or another similar standard. Thos...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/204/reactions", "total_count": 5, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/204/timeline
null
reopened
false
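To make the request above concrete, here is one hypothetical way a Modelfile's fields could map onto a standard format: a Go struct with YAML tags, unmarshaled with the third-party gopkg.in/yaml.v3 library. The field set is illustrative only and not a proposal endorsed by the project.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3" // third-party YAML library; assumed available
)

// ModelConfig is a hypothetical YAML equivalent of a Modelfile.
type ModelConfig struct {
	From       string             `yaml:"from"`
	System     string             `yaml:"system"`
	Template   string             `yaml:"template"`
	Parameters map[string]float64 `yaml:"parameters"`
}

func main() {
	doc := []byte(`
from: llama2
system: You are a helpful assistant.
parameters:
  temperature: 0.7
  top_k: 40
`)
	var cfg ModelConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```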
https://api.github.com/repos/ollama/ollama/issues/7569
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7569/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7569/comments
https://api.github.com/repos/ollama/ollama/issues/7569/events
https://github.com/ollama/ollama/issues/7569
2,643,207,265
I_kwDOJ0Z1Ps6djCRh
7,569
I wanted to add the Donut LLM model, which does not seem to be supported at the moment
{ "login": "KIC", "id": 10957396, "node_id": "MDQ6VXNlcjEwOTU3Mzk2", "avatar_url": "https://avatars.githubusercontent.com/u/10957396?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KIC", "html_url": "https://github.com/KIC", "followers_url": "https://api.github.com/users/KIC/followers", ...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-11-08T08:01:23
2024-11-08T08:01:23
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
After cloning https://huggingface.co/docs/transformers/en/model_doc/donut, I tried to run `docker run --rm -v .:/model ollama/quantize -q q8_0 /model`, but it fails with `unknown architecture VisionEncoderDecoderModel`. I think one can never have enough vision models, so please add support for Donut models...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7569/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7569/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1503
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1503/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1503/comments
https://api.github.com/repos/ollama/ollama/issues/1503/events
https://github.com/ollama/ollama/issues/1503
2,039,983,770
I_kwDOJ0Z1Ps55l66a
1,503
Invalid Opcode Error in Ubuntu Server
{ "login": "Gyarados", "id": 5567681, "node_id": "MDQ6VXNlcjU1Njc2ODE=", "avatar_url": "https://avatars.githubusercontent.com/u/5567681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gyarados", "html_url": "https://github.com/Gyarados", "followers_url": "https://api.github.com/users/Gyara...
[]
closed
false
null
[]
null
3
2023-12-13T15:43:36
2023-12-13T17:28:38
2023-12-13T17:28:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When trying to run any model in Ubuntu Server, locally and in a container, I get the following messages in the Ollama logs: ``` $ journalctl -u ollama -f Dec 13 15:28:54 desimachine ollama[1471335]: 2023/12/13 15:28:54 download.go:123: downloading 58e1b82a691f in 1 18 B part(s) Dec 13 15:28:58 desimachine ollama[...
{ "login": "Gyarados", "id": 5567681, "node_id": "MDQ6VXNlcjU1Njc2ODE=", "avatar_url": "https://avatars.githubusercontent.com/u/5567681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gyarados", "html_url": "https://github.com/Gyarados", "followers_url": "https://api.github.com/users/Gyara...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1503/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5626
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5626/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5626/comments
https://api.github.com/repos/ollama/ollama/issues/5626/events
https://github.com/ollama/ollama/pull/5626
2,402,232,109
PR_kwDOJ0Z1Ps51C7KL
5,626
sched: error on over-allocation of system memory when on Linux
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-07-11T04:40:08
2024-07-11T07:53:14
2024-07-11T07:53:12
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5626", "html_url": "https://github.com/ollama/ollama/pull/5626", "diff_url": "https://github.com/ollama/ollama/pull/5626.diff", "patch_url": "https://github.com/ollama/ollama/pull/5626.patch", "merged_at": "2024-07-11T07:53:12" }
Model switching no longer works on CPU-only machines; the scheduler instead fails with a `requested model is too large for this system` error: ``` $ ollama run gemma2 Error: requested model (8.4 GiB) is too large for this system (1.9 GiB) ``` This PR changes this behavior to only stop a new model from loadi...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5626/timeline
null
null
true
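A condensed sketch of the behavior change described above: refuse a load only when no other model is occupying memory and the estimate could never fit, rather than erroring against currently free memory while another model is loaded. Function names and the memory figures are placeholders for the real scheduler logic.

```go
package main

import (
	"errors"
	"fmt"
)

// canLoad mirrors the idea in the PR: when a model is already loaded the
// scheduler can unload it and retry, so only error when a fresh load could
// never fit in total system memory.
func canLoad(estimate, totalSystem uint64, loadedModels int) error {
	if loadedModels == 0 && estimate > totalSystem {
		return errors.New("requested model is too large for this system")
	}
	return nil // otherwise queue the request; an existing model can be swapped out first
}

func main() {
	est := uint64(9) << 30    // ~8.4 GiB model estimate (placeholder)
	total := uint64(16) << 30 // total system memory (placeholder)
	if err := canLoad(est, total, 1); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok to schedule load; existing model can be unloaded first")
}
```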
https://api.github.com/repos/ollama/ollama/issues/529
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/529/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/529/comments
https://api.github.com/repos/ollama/ollama/issues/529/events
https://github.com/ollama/ollama/pull/529
1,896,668,950
PR_kwDOJ0Z1Ps5aWTfc
529
add examples of streaming in python and node
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
[]
closed
false
null
[]
null
1
2023-09-14T14:13:26
2023-09-18T16:53:41
2023-09-18T16:53:41
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/529", "html_url": "https://github.com/ollama/ollama/pull/529", "diff_url": "https://github.com/ollama/ollama/pull/529.diff", "patch_url": "https://github.com/ollama/ollama/pull/529.patch", "merged_at": null }
null
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/529/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/529/timeline
null
null
true